Jul 7 00:09:12.107921 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025
Jul 7 00:09:12.107954 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 7 00:09:12.107966 kernel: BIOS-provided physical RAM map:
Jul 7 00:09:12.107974 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 7 00:09:12.107981 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 7 00:09:12.107989 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 7 00:09:12.107998 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Jul 7 00:09:12.108006 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Jul 7 00:09:12.108016 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 7 00:09:12.108024 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 7 00:09:12.108032 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 7 00:09:12.108040 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 7 00:09:12.108046 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 7 00:09:12.108052 kernel: NX (Execute Disable) protection: active
Jul 7 00:09:12.108061 kernel: APIC: Static calls initialized
Jul 7 00:09:12.108067 kernel: SMBIOS 3.0.0 present.
Jul 7 00:09:12.108074 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Jul 7 00:09:12.108080 kernel: Hypervisor detected: KVM
Jul 7 00:09:12.108086 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 7 00:09:12.108092 kernel: kvm-clock: using sched offset of 3520420081 cycles
Jul 7 00:09:12.108099 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 7 00:09:12.108105 kernel: tsc: Detected 2495.312 MHz processor
Jul 7 00:09:12.108112 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 7 00:09:12.108120 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 7 00:09:12.108126 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Jul 7 00:09:12.108133 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 7 00:09:12.108139 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 7 00:09:12.108145 kernel: Using GB pages for direct mapping
Jul 7 00:09:12.108151 kernel: ACPI: Early table checksum verification disabled
Jul 7 00:09:12.108159 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
Jul 7 00:09:12.108168 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:09:12.108177 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:09:12.108188 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:09:12.108194 kernel: ACPI: FACS 0x000000007CFE0000 000040
Jul 7 00:09:12.108200 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:09:12.108207 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:09:12.108213 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:09:12.108220 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:09:12.108226 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Jul 7 00:09:12.108232 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Jul 7 00:09:12.108243 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Jul 7 00:09:12.108250 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Jul 7 00:09:12.108256 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Jul 7 00:09:12.108263 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Jul 7 00:09:12.108269 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Jul 7 00:09:12.108276 kernel: No NUMA configuration found
Jul 7 00:09:12.108282 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Jul 7 00:09:12.108290 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Jul 7 00:09:12.108297 kernel: Zone ranges:
Jul 7 00:09:12.108304 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 7 00:09:12.108311 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Jul 7 00:09:12.108317 kernel: Normal empty
Jul 7 00:09:12.108324 kernel: Movable zone start for each node
Jul 7 00:09:12.108330 kernel: Early memory node ranges
Jul 7 00:09:12.108337 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 7 00:09:12.108343 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Jul 7 00:09:12.108351 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Jul 7 00:09:12.108358 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 7 00:09:12.108364 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 7 00:09:12.108371 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 7 00:09:12.108377 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 7 00:09:12.108383 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 7 00:09:12.108390 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 7 00:09:12.108396 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 7 00:09:12.108403 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 7 00:09:12.108411 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 7 00:09:12.108417 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 7 00:09:12.108424 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 7 00:09:12.108430 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 7 00:09:12.108437 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 7 00:09:12.108443 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 7 00:09:12.108450 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 7 00:09:12.108456 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 7 00:09:12.108463 kernel: Booting paravirtualized kernel on KVM
Jul 7 00:09:12.108471 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 7 00:09:12.108478 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 7 00:09:12.108485 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Jul 7 00:09:12.108491 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Jul 7 00:09:12.108498 kernel: pcpu-alloc: [0] 0 1
Jul 7 00:09:12.108519 kernel: kvm-guest: PV spinlocks disabled, no host support
Jul 7 00:09:12.108528 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 7 00:09:12.108535 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 00:09:12.108566 kernel: random: crng init done
Jul 7 00:09:12.108573 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 00:09:12.108580 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 7 00:09:12.108587 kernel: Fallback order for Node 0: 0
Jul 7 00:09:12.108593 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Jul 7 00:09:12.108600 kernel: Policy zone: DMA32
Jul 7 00:09:12.108606 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 00:09:12.108613 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 125152K reserved, 0K cma-reserved)
Jul 7 00:09:12.108620 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 7 00:09:12.108628 kernel: ftrace: allocating 37966 entries in 149 pages
Jul 7 00:09:12.108634 kernel: ftrace: allocated 149 pages with 4 groups
Jul 7 00:09:12.108641 kernel: Dynamic Preempt: voluntary
Jul 7 00:09:12.108647 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 00:09:12.108655 kernel: rcu: RCU event tracing is enabled.
Jul 7 00:09:12.108661 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 7 00:09:12.108668 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 00:09:12.108675 kernel: Rude variant of Tasks RCU enabled.
Jul 7 00:09:12.108681 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 00:09:12.108689 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 00:09:12.108696 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 7 00:09:12.108703 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 7 00:09:12.108709 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 00:09:12.108716 kernel: Console: colour VGA+ 80x25
Jul 7 00:09:12.108722 kernel: printk: console [tty0] enabled
Jul 7 00:09:12.108731 kernel: printk: console [ttyS0] enabled
Jul 7 00:09:12.108738 kernel: ACPI: Core revision 20230628
Jul 7 00:09:12.108744 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 7 00:09:12.108751 kernel: APIC: Switch to symmetric I/O mode setup
Jul 7 00:09:12.108759 kernel: x2apic enabled
Jul 7 00:09:12.108766 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 7 00:09:12.108772 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 7 00:09:12.108779 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 7 00:09:12.108786 kernel: Calibrating delay loop (skipped) preset value.. 4990.62 BogoMIPS (lpj=2495312)
Jul 7 00:09:12.108792 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 7 00:09:12.108799 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 7 00:09:12.108806 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 7 00:09:12.108818 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 7 00:09:12.108825 kernel: Spectre V2 : Mitigation: Retpolines
Jul 7 00:09:12.108832 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 7 00:09:12.108840 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 7 00:09:12.108847 kernel: RETBleed: Mitigation: untrained return thunk
Jul 7 00:09:12.108856 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 7 00:09:12.108867 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 7 00:09:12.108881 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 7 00:09:12.108890 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 7 00:09:12.108902 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 7 00:09:12.108911 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 7 00:09:12.108920 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 7 00:09:12.108928 kernel: Freeing SMP alternatives memory: 32K
Jul 7 00:09:12.108938 kernel: pid_max: default: 32768 minimum: 301
Jul 7 00:09:12.108947 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 7 00:09:12.108956 kernel: landlock: Up and running.
Jul 7 00:09:12.108964 kernel: SELinux: Initializing.
Jul 7 00:09:12.108975 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 7 00:09:12.108983 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 7 00:09:12.108992 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 7 00:09:12.109000 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 00:09:12.109009 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 00:09:12.109018 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 00:09:12.109026 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 7 00:09:12.109035 kernel: ... version: 0
Jul 7 00:09:12.109044 kernel: ... bit width: 48
Jul 7 00:09:12.109054 kernel: ... generic registers: 6
Jul 7 00:09:12.109063 kernel: ... value mask: 0000ffffffffffff
Jul 7 00:09:12.109072 kernel: ... max period: 00007fffffffffff
Jul 7 00:09:12.109080 kernel: ... fixed-purpose events: 0
Jul 7 00:09:12.109089 kernel: ... event mask: 000000000000003f
Jul 7 00:09:12.109098 kernel: signal: max sigframe size: 1776
Jul 7 00:09:12.109106 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 00:09:12.109116 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 00:09:12.109125 kernel: smp: Bringing up secondary CPUs ...
Jul 7 00:09:12.109136 kernel: smpboot: x86: Booting SMP configuration:
Jul 7 00:09:12.109146 kernel: .... node #0, CPUs: #1
Jul 7 00:09:12.109155 kernel: smp: Brought up 1 node, 2 CPUs
Jul 7 00:09:12.109164 kernel: smpboot: Max logical packages: 1
Jul 7 00:09:12.109173 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS)
Jul 7 00:09:12.109182 kernel: devtmpfs: initialized
Jul 7 00:09:12.109191 kernel: x86/mm: Memory block size: 128MB
Jul 7 00:09:12.109200 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 00:09:12.109209 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 7 00:09:12.109221 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 00:09:12.109231 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 00:09:12.109240 kernel: audit: initializing netlink subsys (disabled)
Jul 7 00:09:12.109249 kernel: audit: type=2000 audit(1751846950.956:1): state=initialized audit_enabled=0 res=1
Jul 7 00:09:12.109256 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 00:09:12.109263 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 7 00:09:12.109270 kernel: cpuidle: using governor menu
Jul 7 00:09:12.109277 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 00:09:12.109284 kernel: dca service started, version 1.12.1
Jul 7 00:09:12.109293 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 7 00:09:12.109300 kernel: PCI: Using configuration type 1 for base access
Jul 7 00:09:12.109310 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 7 00:09:12.109320 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 00:09:12.109329 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 00:09:12.109337 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 00:09:12.109346 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 00:09:12.109354 kernel: ACPI: Added _OSI(Module Device)
Jul 7 00:09:12.109363 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 00:09:12.109374 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 00:09:12.109383 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 00:09:12.109392 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 7 00:09:12.109401 kernel: ACPI: Interpreter enabled
Jul 7 00:09:12.109410 kernel: ACPI: PM: (supports S0 S5)
Jul 7 00:09:12.109419 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 7 00:09:12.109428 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 7 00:09:12.109438 kernel: PCI: Using E820 reservations for host bridge windows
Jul 7 00:09:12.109447 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 7 00:09:12.109458 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 7 00:09:12.112685 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 00:09:12.112776 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 7 00:09:12.112864 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 7 00:09:12.112877 kernel: PCI host bridge to bus 0000:00
Jul 7 00:09:12.112975 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 7 00:09:12.113042 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 7 00:09:12.113111 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 7 00:09:12.113176 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Jul 7 00:09:12.113242 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 7 00:09:12.113312 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 7 00:09:12.113375 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 7 00:09:12.113465 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 7 00:09:12.113598 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Jul 7 00:09:12.113675 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Jul 7 00:09:12.113749 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Jul 7 00:09:12.113824 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Jul 7 00:09:12.113898 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Jul 7 00:09:12.113974 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 7 00:09:12.114064 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jul 7 00:09:12.114145 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Jul 7 00:09:12.114234 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jul 7 00:09:12.114309 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Jul 7 00:09:12.114403 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jul 7 00:09:12.114479 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Jul 7 00:09:12.115693 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jul 7 00:09:12.115785 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Jul 7 00:09:12.115872 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jul 7 00:09:12.115950 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Jul 7 00:09:12.116035 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jul 7 00:09:12.116145 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Jul 7 00:09:12.116269 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jul 7 00:09:12.116380 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Jul 7 00:09:12.116521 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jul 7 00:09:12.116665 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Jul 7 00:09:12.116789 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jul 7 00:09:12.116899 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Jul 7 00:09:12.117016 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 7 00:09:12.117134 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 7 00:09:12.117272 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 7 00:09:12.117384 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Jul 7 00:09:12.117482 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Jul 7 00:09:12.120652 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 7 00:09:12.120741 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jul 7 00:09:12.120834 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jul 7 00:09:12.120956 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Jul 7 00:09:12.121036 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jul 7 00:09:12.121111 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Jul 7 00:09:12.121214 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jul 7 00:09:12.121296 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Jul 7 00:09:12.121370 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Jul 7 00:09:12.121459 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jul 7 00:09:12.121584 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Jul 7 00:09:12.121664 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jul 7 00:09:12.121737 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Jul 7 00:09:12.121833 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jul 7 00:09:12.121943 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jul 7 00:09:12.122021 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Jul 7 00:09:12.122111 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Jul 7 00:09:12.122195 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jul 7 00:09:12.122273 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Jul 7 00:09:12.122345 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jul 7 00:09:12.122432 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jul 7 00:09:12.122531 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jul 7 00:09:12.122659 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jul 7 00:09:12.122741 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Jul 7 00:09:12.122814 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jul 7 00:09:12.122901 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jul 7 00:09:12.122979 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
Jul 7 00:09:12.123060 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Jul 7 00:09:12.123134 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jul 7 00:09:12.123224 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Jul 7 00:09:12.123305 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jul 7 00:09:12.123399 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jul 7 00:09:12.123477 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Jul 7 00:09:12.123600 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Jul 7 00:09:12.123678 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jul 7 00:09:12.123750 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Jul 7 00:09:12.123822 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jul 7 00:09:12.123832 kernel: acpiphp: Slot [0] registered
Jul 7 00:09:12.123956 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jul 7 00:09:12.124075 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Jul 7 00:09:12.124188 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Jul 7 00:09:12.124304 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Jul 7 00:09:12.124417 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jul 7 00:09:12.124593 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Jul 7 00:09:12.124708 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jul 7 00:09:12.124725 kernel: acpiphp: Slot [0-2] registered
Jul 7 00:09:12.124829 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jul 7 00:09:12.124939 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Jul 7 00:09:12.125047 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jul 7 00:09:12.125066 kernel: acpiphp: Slot [0-3] registered
Jul 7 00:09:12.125178 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jul 7 00:09:12.125288 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jul 7 00:09:12.125396 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jul 7 00:09:12.125413 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 7 00:09:12.125429 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 7 00:09:12.125440 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 7 00:09:12.125450 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 7 00:09:12.125460 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 7 00:09:12.125471 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 7 00:09:12.125481 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 7 00:09:12.125491 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 7 00:09:12.125502 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 7 00:09:12.125529 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 7 00:09:12.125558 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 7 00:09:12.125569 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 7 00:09:12.125580 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 7 00:09:12.125590 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 7 00:09:12.125600 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 7 00:09:12.125611 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 7 00:09:12.125622 kernel: iommu: Default domain type: Translated
Jul 7 00:09:12.125633 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 7 00:09:12.125643 kernel: PCI: Using ACPI for IRQ routing
Jul 7 00:09:12.125658 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 7 00:09:12.125669 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 7 00:09:12.125680 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Jul 7 00:09:12.125801 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 7 00:09:12.125908 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 7 00:09:12.125996 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 7 00:09:12.126013 kernel: vgaarb: loaded
Jul 7 00:09:12.126021 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 7 00:09:12.126028 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 7 00:09:12.126039 kernel: clocksource: Switched to clocksource kvm-clock
Jul 7 00:09:12.126047 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 00:09:12.126054 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 00:09:12.126062 kernel: pnp: PnP ACPI init
Jul 7 00:09:12.126166 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 7 00:09:12.126182 kernel: pnp: PnP ACPI: found 5 devices
Jul 7 00:09:12.126192 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 7 00:09:12.126202 kernel: NET: Registered PF_INET protocol family
Jul 7 00:09:12.126215 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 00:09:12.126224 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 7 00:09:12.126233 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 00:09:12.126243 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 7 00:09:12.126252 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 7 00:09:12.126262 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 7 00:09:12.126270 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 7 00:09:12.126277 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 7 00:09:12.126285 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 00:09:12.126294 kernel: NET: Registered PF_XDP protocol family
Jul 7 00:09:12.126377 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jul 7 00:09:12.126454 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jul 7 00:09:12.126595 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jul 7 00:09:12.126675 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Jul 7 00:09:12.126748 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Jul 7 00:09:12.126821 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Jul 7 00:09:12.126898 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jul 7 00:09:12.126971 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Jul 7 00:09:12.127044 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Jul 7 00:09:12.127115 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jul 7 00:09:12.127199 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Jul 7 00:09:12.127276 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jul 7 00:09:12.127350 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jul 7 00:09:12.127423 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Jul 7 00:09:12.127499 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jul 7 00:09:12.127632 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jul 7 00:09:12.127705 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Jul 7 00:09:12.127777 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jul 7 00:09:12.127847 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jul 7 00:09:12.127920 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Jul 7 00:09:12.127992 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jul 7 00:09:12.128066 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jul 7 00:09:12.128153 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Jul 7 00:09:12.128238 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jul 7 00:09:12.128313 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jul 7 00:09:12.128385 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Jul 7 00:09:12.128459 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Jul 7 00:09:12.128613 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jul 7 00:09:12.128691 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jul 7 00:09:12.128764 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Jul 7 00:09:12.128838 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Jul 7 00:09:12.128920 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jul 7 00:09:12.129044 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jul 7 00:09:12.129152 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Jul 7 00:09:12.129260 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jul 7 00:09:12.129374 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jul 7 00:09:12.129484 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 7 00:09:12.129650 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 7 00:09:12.129747 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 7 00:09:12.129839 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Jul 7 00:09:12.129906 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 7 00:09:12.129976 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 7 00:09:12.130055 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Jul 7 00:09:12.130123 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Jul 7 00:09:12.130217 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Jul 7 00:09:12.130296 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jul 7 00:09:12.130373 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Jul 7 00:09:12.130442 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jul 7 00:09:12.130557 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Jul 7 00:09:12.130631 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jul 7 00:09:12.130707 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Jul 7 00:09:12.130775 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jul 7 00:09:12.130853 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Jul 7 00:09:12.130922 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jul 7 00:09:12.131012 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Jul 7 00:09:12.131081 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Jul 7 00:09:12.131149 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jul 7 00:09:12.131239 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Jul 7 00:09:12.131310 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Jul 7 00:09:12.131377 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jul 7 00:09:12.131461 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Jul 7 00:09:12.131582 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Jul 7 00:09:12.131654 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jul 7 00:09:12.131666 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 7 00:09:12.131674 kernel: PCI: CLS 0 bytes, default 64
Jul 7 00:09:12.131682 kernel: Initialise system trusted keyrings
Jul 7 00:09:12.131689 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 7 00:09:12.131697 kernel: Key type asymmetric registered
Jul 7 00:09:12.131705 kernel: Asymmetric key parser 'x509' registered
Jul 7 00:09:12.131715 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 7 00:09:12.131723 kernel: io scheduler mq-deadline registered
Jul 7 00:09:12.131730 kernel: io scheduler kyber registered
Jul 7 00:09:12.131738 kernel: io scheduler bfq registered
Jul 7 00:09:12.131834 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Jul 7 00:09:12.131946 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Jul 7 00:09:12.132054 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Jul 7 00:09:12.132163 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Jul 7 00:09:12.132272 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jul 7 00:09:12.132385 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jul 7 00:09:12.132489 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jul 7 00:09:12.132696 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jul 7 00:09:12.132807 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jul 7 00:09:12.132913 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jul 7 00:09:12.133020 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jul 7 00:09:12.133099 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jul 7 00:09:12.133179 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jul 7 00:09:12.133263 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jul 7 00:09:12.133363 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jul 7 00:09:12.133473 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jul 7 00:09:12.133490 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 7 00:09:12.133622 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Jul 7 00:09:12.133729 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Jul 7 00:09:12.133746 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 7 00:09:12.133758 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Jul 7 00:09:12.133774 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 00:09:12.133785 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 7 00:09:12.133796 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 7 00:09:12.133808 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 7 00:09:12.133818 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 7 00:09:12.133830 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 7 00:09:12.133945 kernel: rtc_cmos 00:03: RTC can wake from S4
Jul 7 00:09:12.134049 kernel: rtc_cmos 00:03: registered as rtc0
Jul 7 00:09:12.134151 kernel: rtc_cmos 00:03: setting system clock to 2025-07-07T00:09:11 UTC (1751846951)
Jul 7 00:09:12.134239 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 7 00:09:12.134253 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 7 00:09:12.134264 kernel: NET: Registered PF_INET6 protocol family
Jul 7 00:09:12.134274 kernel: Segment Routing with IPv6
Jul 7 00:09:12.134284 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 00:09:12.134297 kernel: NET: Registered PF_PACKET protocol family
Jul 7 00:09:12.134306 kernel: Key type dns_resolver registered
Jul 7 00:09:12.134316 kernel: IPI shorthand broadcast: enabled
Jul 7 00:09:12.134328 kernel: sched_clock: Marking stable (1542009823, 148044483)->(1705533243, -15478937)
Jul 7 00:09:12.134339 kernel: registered taskstats version 1
Jul 7 00:09:12.134349 kernel: Loading compiled-in X.509 certificates
Jul 7 00:09:12.134360 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b'
Jul 7 00:09:12.134370 kernel: Key type .fscrypt registered
Jul 7 00:09:12.134380 kernel: Key type fscrypt-provisioning registered
Jul 7 00:09:12.134389 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 00:09:12.134399 kernel: ima: Allocated hash algorithm: sha1
Jul 7 00:09:12.134412 kernel: ima: No architecture policies found
Jul 7 00:09:12.134421 kernel: clk: Disabling unused clocks
Jul 7 00:09:12.134429 kernel: Freeing unused kernel image (initmem) memory: 42868K
Jul 7 00:09:12.134436 kernel: Write protecting the kernel read-only data: 36864k
Jul 7 00:09:12.134444 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Jul 7 00:09:12.134451 kernel: Run /init as init process
Jul 7 00:09:12.134458 kernel: with arguments:
Jul 7 00:09:12.134466 kernel: /init
Jul 7 00:09:12.134473 kernel: with environment:
Jul 7 00:09:12.134481 kernel: HOME=/
Jul 7 00:09:12.134490 kernel: TERM=linux
Jul 7 00:09:12.134497 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 00:09:12.134522 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 7 00:09:12.134533 systemd[1]: Detected virtualization kvm.
Jul 7 00:09:12.134567 systemd[1]: Detected architecture x86-64.
Jul 7 00:09:12.134575 systemd[1]: Running in initrd.
Jul 7 00:09:12.134582 systemd[1]: No hostname configured, using default hostname.
Jul 7 00:09:12.134593 systemd[1]: Hostname set to .
Jul 7 00:09:12.134601 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 00:09:12.134609 systemd[1]: Queued start job for default target initrd.target.
Jul 7 00:09:12.134617 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 00:09:12.134625 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 00:09:12.134634 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 00:09:12.134642 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 00:09:12.134650 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 00:09:12.134659 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 00:09:12.134669 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 00:09:12.134677 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 00:09:12.134685 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 00:09:12.134693 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 00:09:12.134701 systemd[1]: Reached target paths.target - Path Units.
Jul 7 00:09:12.134709 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 00:09:12.134718 systemd[1]: Reached target swap.target - Swaps.
Jul 7 00:09:12.134726 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 00:09:12.134734 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 00:09:12.134742 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 00:09:12.134750 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 00:09:12.134758 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 7 00:09:12.134766 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 00:09:12.134774 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 00:09:12.134784 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 00:09:12.134792 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 00:09:12.134800 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 00:09:12.134808 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 00:09:12.134817 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 00:09:12.134825 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 00:09:12.134833 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 00:09:12.134841 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 00:09:12.134849 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:09:12.134859 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 00:09:12.134867 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 00:09:12.134875 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 00:09:12.134884 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 00:09:12.134894 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 00:09:12.134931 systemd-journald[187]: Collecting audit messages is disabled.
Jul 7 00:09:12.134952 systemd-journald[187]: Journal started
Jul 7 00:09:12.134975 systemd-journald[187]: Runtime Journal (/run/log/journal/c42475a7061a4ac5a73b5ee76b231864) is 4.8M, max 38.4M, 33.6M free.
Jul 7 00:09:12.111268 systemd-modules-load[188]: Inserted module 'overlay'
Jul 7 00:09:12.140278 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 00:09:12.139835 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:09:12.148893 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 00:09:12.152696 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 00:09:12.163627 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 00:09:12.160429 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 00:09:12.171569 kernel: Bridge firewalling registered
Jul 7 00:09:12.171662 systemd-modules-load[188]: Inserted module 'br_netfilter'
Jul 7 00:09:12.181759 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 00:09:12.182468 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 00:09:12.184556 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 00:09:12.190714 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 00:09:12.194709 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 00:09:12.196275 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 00:09:12.201036 dracut-cmdline[216]: dracut-dracut-053
Jul 7 00:09:12.203342 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 7 00:09:12.214277 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 00:09:12.222170 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 00:09:12.262359 systemd-resolved[251]: Positive Trust Anchors:
Jul 7 00:09:12.262384 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 00:09:12.262431 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 00:09:12.277810 kernel: SCSI subsystem initialized
Jul 7 00:09:12.273864 systemd-resolved[251]: Defaulting to hostname 'linux'.
Jul 7 00:09:12.275200 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 00:09:12.277064 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 00:09:12.285583 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 00:09:12.295575 kernel: iscsi: registered transport (tcp)
Jul 7 00:09:12.314601 kernel: iscsi: registered transport (qla4xxx)
Jul 7 00:09:12.314895 kernel: QLogic iSCSI HBA Driver
Jul 7 00:09:12.343298 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 00:09:12.348713 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 00:09:12.370679 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 00:09:12.370800 kernel: device-mapper: uevent: version 1.0.3
Jul 7 00:09:12.372112 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 7 00:09:12.414624 kernel: raid6: avx2x4 gen() 28261 MB/s
Jul 7 00:09:12.431618 kernel: raid6: avx2x2 gen() 29629 MB/s
Jul 7 00:09:12.448616 kernel: raid6: avx2x1 gen() 23789 MB/s
Jul 7 00:09:12.448729 kernel: raid6: using algorithm avx2x2 gen() 29629 MB/s
Jul 7 00:09:12.468803 kernel: raid6: .... xor() 18262 MB/s, rmw enabled
Jul 7 00:09:12.468955 kernel: raid6: using avx2x2 recovery algorithm
Jul 7 00:09:12.489631 kernel: xor: automatically using best checksumming function avx
Jul 7 00:09:12.649637 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 00:09:12.667499 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 00:09:12.674841 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 00:09:12.701842 systemd-udevd[406]: Using default interface naming scheme 'v255'.
Jul 7 00:09:12.705856 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 00:09:12.715849 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 00:09:12.731312 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation
Jul 7 00:09:12.760273 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 00:09:12.766677 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 00:09:12.830462 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 00:09:12.841262 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 00:09:12.892896 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 00:09:12.895159 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 00:09:12.896917 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 00:09:12.898146 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 00:09:12.905725 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 00:09:12.921582 kernel: scsi host0: Virtio SCSI HBA
Jul 7 00:09:12.931724 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 00:09:12.940667 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jul 7 00:09:12.948570 kernel: cryptd: max_cpu_qlen set to 1000
Jul 7 00:09:12.958292 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 00:09:12.960293 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 00:09:12.963991 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 00:09:12.974253 kernel: ACPI: bus type USB registered
Jul 7 00:09:12.974280 kernel: usbcore: registered new interface driver usbfs
Jul 7 00:09:12.974291 kernel: usbcore: registered new interface driver hub
Jul 7 00:09:12.974299 kernel: usbcore: registered new device driver usb
Jul 7 00:09:12.964442 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:09:12.964591 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:09:12.971691 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:09:12.979323 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:09:12.985591 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 7 00:09:13.018570 kernel: AES CTR mode by8 optimization enabled
Jul 7 00:09:13.053597 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jul 7 00:09:13.053849 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jul 7 00:09:13.056582 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jul 7 00:09:13.057587 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jul 7 00:09:13.057715 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jul 7 00:09:13.057807 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jul 7 00:09:13.057898 kernel: hub 1-0:1.0: USB hub found
Jul 7 00:09:13.058007 kernel: hub 1-0:1.0: 4 ports detected
Jul 7 00:09:13.058582 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jul 7 00:09:13.058711 kernel: hub 2-0:1.0: USB hub found
Jul 7 00:09:13.058812 kernel: hub 2-0:1.0: 4 ports detected
Jul 7 00:09:13.066565 kernel: libata version 3.00 loaded.
Jul 7 00:09:13.092411 kernel: ahci 0000:00:1f.2: version 3.0
Jul 7 00:09:13.104902 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 7 00:09:13.104927 kernel: sd 0:0:0:0: Power-on or device reset occurred
Jul 7 00:09:13.107788 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jul 7 00:09:13.111333 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jul 7 00:09:13.111469 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 7 00:09:13.111622 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 7 00:09:13.111719 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Jul 7 00:09:13.111816 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul 7 00:09:13.111909 kernel: scsi host1: ahci
Jul 7 00:09:13.107619 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:09:13.118267 kernel: scsi host2: ahci
Jul 7 00:09:13.118421 kernel: scsi host3: ahci
Jul 7 00:09:13.118533 kernel: scsi host4: ahci
Jul 7 00:09:13.118646 kernel: scsi host5: ahci
Jul 7 00:09:13.119218 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 00:09:13.119253 kernel: GPT:17805311 != 80003071
Jul 7 00:09:13.119264 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 00:09:13.119276 kernel: GPT:17805311 != 80003071
Jul 7 00:09:13.119285 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 00:09:13.119302 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 00:09:13.120572 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 7 00:09:13.123572 kernel: scsi host6: ahci
Jul 7 00:09:13.127781 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 51
Jul 7 00:09:13.127812 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 51
Jul 7 00:09:13.127822 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 51
Jul 7 00:09:13.127832 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 51
Jul 7 00:09:13.127842 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 51
Jul 7 00:09:13.127855 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 51
Jul 7 00:09:13.146735 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 00:09:13.163166 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (455)
Jul 7 00:09:13.179414 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jul 7 00:09:13.181408 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (456)
Jul 7 00:09:13.183367 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 00:09:13.196003 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jul 7 00:09:13.201605 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jul 7 00:09:13.206537 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jul 7 00:09:13.211781 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jul 7 00:09:13.216678 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 00:09:13.231064 disk-uuid[573]: Primary Header is updated.
Jul 7 00:09:13.231064 disk-uuid[573]: Secondary Entries is updated.
Jul 7 00:09:13.231064 disk-uuid[573]: Secondary Header is updated.
Jul 7 00:09:13.233011 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 00:09:13.301360 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jul 7 00:09:13.439614 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 7 00:09:13.448692 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 7 00:09:13.448848 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 7 00:09:13.448872 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 7 00:09:13.448895 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 7 00:09:13.454168 kernel: ata1.00: applying bridge limits
Jul 7 00:09:13.454263 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 7 00:09:13.458217 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jul 7 00:09:13.458255 kernel: ata1.00: configured for UDMA/100
Jul 7 00:09:13.460567 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 7 00:09:13.460633 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 7 00:09:13.494641 kernel: usbcore: registered new interface driver usbhid
Jul 7 00:09:13.494735 kernel: usbhid: USB HID core driver
Jul 7 00:09:13.508010 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Jul 7 00:09:13.508091 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Jul 7 00:09:13.517991 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 7 00:09:13.518267 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 7 00:09:13.539769 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Jul 7 00:09:14.250646 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 00:09:14.254166 disk-uuid[574]: The operation has completed successfully.
Jul 7 00:09:14.324570 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 00:09:14.324696 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 00:09:14.364729 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 00:09:14.371697 sh[597]: Success
Jul 7 00:09:14.394638 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 7 00:09:14.474417 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 00:09:14.486906 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 00:09:14.489911 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 00:09:14.519756 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f
Jul 7 00:09:14.519866 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:09:14.523261 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 7 00:09:14.526843 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 7 00:09:14.529595 kernel: BTRFS info (device dm-0): using free space tree
Jul 7 00:09:14.544582 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jul 7 00:09:14.548167 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 00:09:14.550177 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 00:09:14.559860 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 00:09:14.565745 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 00:09:14.594143 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 00:09:14.594239 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:09:14.594261 kernel: BTRFS info (device sda6): using free space tree
Jul 7 00:09:14.600001 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 7 00:09:14.600063 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 7 00:09:14.613629 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 7 00:09:14.618698 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 00:09:14.622838 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 00:09:14.630985 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 00:09:14.734273 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 00:09:14.753208 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 00:09:14.774645 systemd-networkd[778]: lo: Link UP
Jul 7 00:09:14.775393 systemd-networkd[778]: lo: Gained carrier
Jul 7 00:09:14.777933 systemd-networkd[778]: Enumeration completed
Jul 7 00:09:14.778000 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 00:09:14.778538 systemd[1]: Reached target network.target - Network.
Jul 7 00:09:14.779842 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:09:14.779845 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 00:09:14.787578 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:09:14.787583 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 00:09:14.788411 systemd-networkd[778]: eth0: Link UP
Jul 7 00:09:14.788414 systemd-networkd[778]: eth0: Gained carrier
Jul 7 00:09:14.788421 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:09:14.791719 systemd-networkd[778]: eth1: Link UP
Jul 7 00:09:14.791722 systemd-networkd[778]: eth1: Gained carrier
Jul 7 00:09:14.791728 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:09:14.804780 ignition[699]: Ignition 2.19.0
Jul 7 00:09:14.804795 ignition[699]: Stage: fetch-offline
Jul 7 00:09:14.804841 ignition[699]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:09:14.804850 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 7 00:09:14.805655 ignition[699]: parsed url from cmdline: ""
Jul 7 00:09:14.807282 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
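parse-ip-for-networkd.service turns any ip= arguments on the kernel command line into systemd-networkd units before Ignition needs the network (this boot had none, so the default DHCP configuration matched instead). A hedged sketch of the core transformation, assuming the dracut-style ip=<client>:<peer>:<gateway>:<netmask>:<hostname>:<iface>:<autoconf> field order; the real script handles far more variants:

    import ipaddress, shlex

    # Hypothetical cmdline; this particular boot carried no ip= argument.
    cmdline = "root=LABEL=ROOT ip=192.0.2.10::192.0.2.1:255.255.255.0:node1:eth0:off"
    for arg in shlex.split(cmdline):
        if not arg.startswith("ip="):
            continue
        client, _peer, gateway, netmask, _host, iface, _auto = arg[3:].split(":")[:7]
        prefix = ipaddress.IPv4Network(f"0.0.0.0/{netmask}").prefixlen
        # Would be written as a unit under /run/systemd/network/.
        print(f"[Match]\nName={iface}\n\n[Network]\nAddress={client}/{prefix}\nGateway={gateway}")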
Jul 7 00:09:14.805660 ignition[699]: no config URL provided
Jul 7 00:09:14.805676 ignition[699]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 00:09:14.805692 ignition[699]: no config at "/usr/lib/ignition/user.ign"
Jul 7 00:09:14.805702 ignition[699]: failed to fetch config: resource requires networking
Jul 7 00:09:14.805964 ignition[699]: Ignition finished successfully
Jul 7 00:09:14.816706 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 7 00:09:14.831602 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 7 00:09:14.836575 ignition[785]: Ignition 2.19.0
Jul 7 00:09:14.836592 ignition[785]: Stage: fetch
Jul 7 00:09:14.836784 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:09:14.836793 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 7 00:09:14.836891 ignition[785]: parsed url from cmdline: ""
Jul 7 00:09:14.836894 ignition[785]: no config URL provided
Jul 7 00:09:14.836898 ignition[785]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 00:09:14.836905 ignition[785]: no config at "/usr/lib/ignition/user.ign"
Jul 7 00:09:14.836927 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jul 7 00:09:14.838787 ignition[785]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jul 7 00:09:14.860591 systemd-networkd[778]: eth0: DHCPv4 address 157.180.40.234/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jul 7 00:09:15.040018 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jul 7 00:09:15.050741 ignition[785]: GET result: OK
Jul 7 00:09:15.051963 ignition[785]: parsing config with SHA512: 215788385599320ba474572f3cc4493e1f8acef1098145767bb18b624c74a8c11516824780fa787b7fd6e058b69bcd5389774c302ec646312465eff4b3c87bd2
Jul 7 00:09:15.060747 unknown[785]: fetched base config from "system"
Jul 7 00:09:15.060769 unknown[785]: fetched base config from "system"
Jul 7 00:09:15.061730 ignition[785]: fetch: fetch complete
Jul 7 00:09:15.060780 unknown[785]: fetched user config from "hetzner"
Jul 7 00:09:15.061741 ignition[785]: fetch: fetch passed
Jul 7 00:09:15.064953 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 7 00:09:15.061819 ignition[785]: Ignition finished successfully
Jul 7 00:09:15.074003 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 00:09:15.101856 ignition[792]: Ignition 2.19.0
Jul 7 00:09:15.101879 ignition[792]: Stage: kargs
Jul 7 00:09:15.102205 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:09:15.102222 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 7 00:09:15.108109 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 00:09:15.104042 ignition[792]: kargs: kargs passed
Jul 7 00:09:15.104118 ignition[792]: Ignition finished successfully
Jul 7 00:09:15.119902 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 00:09:15.186749 ignition[798]: Ignition 2.19.0
Jul 7 00:09:15.186768 ignition[798]: Stage: disks
Jul 7 00:09:15.187038 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:09:15.190170 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 00:09:15.187053 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 7 00:09:15.198878 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
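The fetch stage's two attempts above bracket DHCP: attempt #1 ran before eth0 had an address and failed with "network is unreachable"; attempt #2 succeeded once the lease arrived, and the parser then logged the SHA512 of the payload. A hedged sketch of that retry-then-hash pattern (intervals and the attempt cap are illustrative; Ignition's own backoff differs):

    import hashlib, time, urllib.request, urllib.error

    URL = "http://169.254.169.254/hetzner/v1/userdata"
    data = None
    for attempt in range(1, 6):
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                data = resp.read()
            break
        except (urllib.error.URLError, OSError) as err:
            print(f"GET error (attempt #{attempt}): {err}")
            time.sleep(2 ** attempt)   # wait for the DHCP lease to land
    if data is None:
        raise SystemExit("failed to fetch config: resource requires networking")
    print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())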
Jul 7 00:09:15.188446 ignition[798]: disks: disks passed
Jul 7 00:09:15.201032 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 00:09:15.188536 ignition[798]: Ignition finished successfully
Jul 7 00:09:15.203160 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 00:09:15.205168 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 00:09:15.206849 systemd[1]: Reached target basic.target - Basic System.
Jul 7 00:09:15.215781 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 00:09:15.238438 systemd-fsck[807]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jul 7 00:09:15.243096 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 00:09:15.251001 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 00:09:15.360583 kernel: EXT4-fs (sda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none.
Jul 7 00:09:15.363226 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 00:09:15.365640 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 00:09:15.376705 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 00:09:15.380668 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 00:09:15.383797 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 7 00:09:15.384614 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 00:09:15.384650 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 00:09:15.400064 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (815)
Jul 7 00:09:15.405439 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 00:09:15.405598 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:09:15.405386 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 00:09:15.414747 kernel: BTRFS info (device sda6): using free space tree
Jul 7 00:09:15.423772 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 7 00:09:15.423853 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 7 00:09:15.428481 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 00:09:15.435349 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 00:09:15.511886 coreos-metadata[817]: Jul 07 00:09:15.511 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jul 7 00:09:15.521106 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 00:09:15.522142 coreos-metadata[817]: Jul 07 00:09:15.522 INFO Fetch successful
Jul 7 00:09:15.522667 coreos-metadata[817]: Jul 07 00:09:15.522 INFO wrote hostname ci-4081-3-4-d-d476fda7c5 to /sysroot/etc/hostname
Jul 7 00:09:15.526026 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 7 00:09:15.530586 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory
Jul 7 00:09:15.536662 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 00:09:15.542561 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 00:09:15.652040 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
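flatcar-metadata-hostname.service, seen above as coreos-metadata[817], does one small thing: fetch the hostname from the Hetzner metadata service and write it into the still-unpivoted root. A minimal sketch of that flow (error handling and retries elided):

    import urllib.request

    URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"
    with urllib.request.urlopen(URL, timeout=10) as resp:
        hostname = resp.read().decode().strip()
    with open("/sysroot/etc/hostname", "w") as f:
        f.write(hostname + "\n")
    print(f"wrote hostname {hostname} to /sysroot/etc/hostname")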
Jul 7 00:09:15.659670 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 00:09:15.661810 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 00:09:15.671341 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 00:09:15.672579 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 00:09:15.692495 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 00:09:15.708657 ignition[933]: INFO : Ignition 2.19.0
Jul 7 00:09:15.708657 ignition[933]: INFO : Stage: mount
Jul 7 00:09:15.710080 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 00:09:15.710080 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 7 00:09:15.711650 ignition[933]: INFO : mount: mount passed
Jul 7 00:09:15.711650 ignition[933]: INFO : Ignition finished successfully
Jul 7 00:09:15.712136 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 00:09:15.717640 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 00:09:15.724248 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 00:09:15.746758 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (944)
Jul 7 00:09:15.746847 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 00:09:15.746872 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:09:15.748289 kernel: BTRFS info (device sda6): using free space tree
Jul 7 00:09:15.759705 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 7 00:09:15.759755 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 7 00:09:15.764218 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 00:09:15.796440 ignition[961]: INFO : Ignition 2.19.0
Jul 7 00:09:15.796440 ignition[961]: INFO : Stage: files
Jul 7 00:09:15.799108 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 00:09:15.799108 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 7 00:09:15.799108 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 00:09:15.803647 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 00:09:15.803647 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 00:09:15.803647 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 00:09:15.803647 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 00:09:15.810710 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 00:09:15.810710 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 7 00:09:15.810710 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 7 00:09:15.810710 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 7 00:09:15.810710 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 7 00:09:15.805535 unknown[961]: wrote ssh authorized keys file for user: core
Jul 7 00:09:16.104582 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 7 00:09:16.286794 systemd-networkd[778]: eth1: Gained IPv6LL
Jul 7 00:09:16.438848 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 7 00:09:16.438848 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 00:09:16.443384 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 00:09:16.443384 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 00:09:16.443384 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 00:09:16.443384 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 00:09:16.443384 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 00:09:16.443384 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 00:09:16.443384 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 00:09:16.443384 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 00:09:16.443384 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
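The files stage above interleaves two patterns: writing small local files such as /sysroot/etc/flatcar-cgroupv1, and streaming remote payloads such as the helm tarball into /sysroot/opt. A hedged sketch of both (modes and digest verification are omitted; Ignition also retries failed downloads):

    import shutil, urllib.request

    HELM = "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"
    with urllib.request.urlopen(HELM) as resp, \
            open("/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz", "wb") as out:
        shutil.copyfileobj(resp, out)      # op(4): GET and write the archive
    with open("/sysroot/etc/flatcar-cgroupv1", "w") as f:
        pass                               # op(3): empty marker file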
Jul 7 00:09:16.443384 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 00:09:16.443384 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 00:09:16.443384 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 00:09:16.443384 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 7 00:09:16.542987 systemd-networkd[778]: eth0: Gained IPv6LL
Jul 7 00:09:17.253311 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 7 00:09:17.830673 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 00:09:17.830673 ignition[961]: INFO : files: op(c): [started] processing unit "containerd.service"
Jul 7 00:09:17.835180 ignition[961]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 7 00:09:17.835180 ignition[961]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 7 00:09:17.835180 ignition[961]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jul 7 00:09:17.835180 ignition[961]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jul 7 00:09:17.835180 ignition[961]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 00:09:17.835180 ignition[961]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 00:09:17.835180 ignition[961]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jul 7 00:09:17.835180 ignition[961]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Jul 7 00:09:17.835180 ignition[961]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jul 7 00:09:17.835180 ignition[961]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jul 7 00:09:17.835180 ignition[961]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Jul 7 00:09:17.835180 ignition[961]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 00:09:17.835180 ignition[961]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 00:09:17.835180 ignition[961]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 00:09:17.835180 ignition[961]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 00:09:17.835180 ignition[961]: INFO : files: files passed
Jul 7 00:09:17.835180 ignition[961]: INFO : Ignition finished successfully
Jul 7 00:09:17.836253 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 00:09:17.845656 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 00:09:17.848826 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 00:09:17.851994 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 00:09:17.852069 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 00:09:17.895024 initrd-setup-root-after-ignition[989]: grep:
Jul 7 00:09:17.895785 initrd-setup-root-after-ignition[993]: grep:
Jul 7 00:09:17.895785 initrd-setup-root-after-ignition[989]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 00:09:17.895785 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 00:09:17.897841 initrd-setup-root-after-ignition[993]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 00:09:17.899414 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 00:09:17.901663 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 00:09:17.907680 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 00:09:17.942969 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 00:09:17.943157 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 00:09:17.945628 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 00:09:17.946827 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 00:09:17.948691 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 00:09:17.953715 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 00:09:18.004290 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 00:09:18.013880 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 00:09:18.039339 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 00:09:18.041403 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 00:09:18.044252 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 00:09:18.046820 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 00:09:18.047156 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 00:09:18.049853 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 00:09:18.051692 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 00:09:18.054396 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 00:09:18.056807 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 00:09:18.059151 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 00:09:18.061962 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 00:09:18.064662 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 00:09:18.067151 systemd[1]: Stopped target sysinit.target - System Initialization.
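Ops (a) and (b) above wire up a sysext image for later: a symlink under /etc/extensions pointing at a raw image fetched into /opt, which systemd-sysext will discover and merge after the pivot (see the sd-merge lines further down). A hedged sketch of the same wiring against /sysroot:

    import os, shutil, urllib.request

    target = "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
    link = "/sysroot/etc/extensions/kubernetes.raw"
    os.makedirs(os.path.dirname(link), exist_ok=True)
    os.symlink(target, link)               # op(a): link into /etc/extensions
    os.makedirs(os.path.dirname("/sysroot" + target), exist_ok=True)
    URL = "https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw"
    with urllib.request.urlopen(URL) as resp, \
            open("/sysroot" + target, "wb") as out:
        shutil.copyfileobj(resp, out)      # op(b): fetch the raw image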
Jul 7 00:09:18.069446 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 00:09:18.071926 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 00:09:18.074170 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 00:09:18.074396 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 00:09:18.076998 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 00:09:18.078844 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 00:09:18.080958 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 00:09:18.082451 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 00:09:18.084136 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 00:09:18.084386 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 00:09:18.089172 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 00:09:18.089443 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 00:09:18.090610 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 00:09:18.090813 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 00:09:18.092085 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 7 00:09:18.092282 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 7 00:09:18.099866 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 00:09:18.101290 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 00:09:18.101575 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 00:09:18.108834 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 00:09:18.110488 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 00:09:18.111742 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 00:09:18.120323 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 00:09:18.121481 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 00:09:18.134482 ignition[1013]: INFO : Ignition 2.19.0
Jul 7 00:09:18.134482 ignition[1013]: INFO : Stage: umount
Jul 7 00:09:18.139799 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 00:09:18.139799 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 7 00:09:18.134922 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 00:09:18.149105 ignition[1013]: INFO : umount: umount passed
Jul 7 00:09:18.149105 ignition[1013]: INFO : Ignition finished successfully
Jul 7 00:09:18.135096 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 00:09:18.143909 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 00:09:18.144066 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 00:09:18.145413 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 00:09:18.145499 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 00:09:18.146982 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 00:09:18.147038 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 00:09:18.149831 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 7 00:09:18.149884 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 7 00:09:18.151307 systemd[1]: Stopped target network.target - Network.
Jul 7 00:09:18.154442 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 00:09:18.154506 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 00:09:18.155929 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 00:09:18.159827 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 00:09:18.165621 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 00:09:18.167218 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 00:09:18.167988 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 00:09:18.169240 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 00:09:18.169302 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 00:09:18.170604 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 00:09:18.170656 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 00:09:18.171889 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 00:09:18.171949 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 00:09:18.173236 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 00:09:18.173290 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 00:09:18.174798 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 00:09:18.176633 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 7 00:09:18.179619 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 00:09:18.179836 systemd-networkd[778]: eth0: DHCPv6 lease lost
Jul 7 00:09:18.180396 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 7 00:09:18.180534 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 7 00:09:18.182830 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 7 00:09:18.182960 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 7 00:09:18.183675 systemd-networkd[778]: eth1: DHCPv6 lease lost
Jul 7 00:09:18.186751 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 7 00:09:18.186892 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 7 00:09:18.190262 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 7 00:09:18.190427 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 7 00:09:18.193667 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 7 00:09:18.193714 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 00:09:18.198832 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 7 00:09:18.200044 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 7 00:09:18.200806 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 00:09:18.202231 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 00:09:18.203031 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 00:09:18.204237 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 7 00:09:18.204284 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 7 00:09:18.205601 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 7 00:09:18.205641 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 00:09:18.207705 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 00:09:18.223347 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 7 00:09:18.223465 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 7 00:09:18.226188 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 7 00:09:18.226317 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 00:09:18.228061 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 7 00:09:18.228098 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 7 00:09:18.229431 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 7 00:09:18.229461 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 00:09:18.230913 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 7 00:09:18.230955 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 00:09:18.233196 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 7 00:09:18.233236 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 7 00:09:18.234646 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 00:09:18.234681 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 00:09:18.245989 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 7 00:09:18.247895 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 7 00:09:18.247971 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 00:09:18.248527 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:09:18.248580 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:09:18.255613 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 7 00:09:18.255765 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 7 00:09:18.258027 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 7 00:09:18.270862 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 7 00:09:18.279430 systemd[1]: Switching root.
Jul 7 00:09:18.353573 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Jul 7 00:09:18.353724 systemd-journald[187]: Journal stopped
Jul 7 00:09:19.552043 kernel: SELinux: policy capability network_peer_controls=1
Jul 7 00:09:19.552124 kernel: SELinux: policy capability open_perms=1
Jul 7 00:09:19.552147 kernel: SELinux: policy capability extended_socket_class=1
Jul 7 00:09:19.552159 kernel: SELinux: policy capability always_check_network=0
Jul 7 00:09:19.552168 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 7 00:09:19.552184 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 7 00:09:19.552200 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 7 00:09:19.552209 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 7 00:09:19.552218 kernel: audit: type=1403 audit(1751846958.642:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 7 00:09:19.552229 systemd[1]: Successfully loaded SELinux policy in 64.673ms.
Jul 7 00:09:19.552256 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.335ms.
Jul 7 00:09:19.552269 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 7 00:09:19.552279 systemd[1]: Detected virtualization kvm.
Jul 7 00:09:19.552289 systemd[1]: Detected architecture x86-64.
Jul 7 00:09:19.552299 systemd[1]: Detected first boot.
Jul 7 00:09:19.552308 systemd[1]: Hostname set to <ci-4081-3-4-d-d476fda7c5>.
Jul 7 00:09:19.552324 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 00:09:19.552333 zram_generator::config[1073]: No configuration found.
Jul 7 00:09:19.552346 systemd[1]: Populated /etc with preset unit settings.
Jul 7 00:09:19.552357 systemd[1]: Queued start job for default target multi-user.target.
Jul 7 00:09:19.552368 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul 7 00:09:19.552378 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 7 00:09:19.552388 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 7 00:09:19.552398 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 7 00:09:19.552408 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 7 00:09:19.552418 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 7 00:09:19.552428 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 7 00:09:19.552438 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 7 00:09:19.552448 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 7 00:09:19.552458 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 00:09:19.552468 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 00:09:19.552478 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 7 00:09:19.552487 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 7 00:09:19.552497 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 7 00:09:19.552507 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 00:09:19.552527 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 7 00:09:19.552539 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 00:09:19.553294 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 7 00:09:19.553305 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 00:09:19.553316 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 00:09:19.553326 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 00:09:19.553336 systemd[1]: Reached target swap.target - Swaps.
Jul 7 00:09:19.553346 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 7 00:09:19.553360 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 7 00:09:19.553370 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
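"Initializing machine ID from VM UUID" means systemd seeds /etc/machine-id from the hypervisor-provided DMI product UUID rather than generating a random one. A rough sketch of the normalization involved (the real logic lives in systemd and also handles SMBIOS byte-order quirks that this ignores):

    # Read the KVM-provided product UUID and normalize it the way a
    # machine-id looks: 32 lowercase hex characters, no dashes.
    with open("/sys/class/dmi/id/product_uuid") as f:
        uuid = f.read().strip()
    print(uuid.replace("-", "").lower())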
Jul 7 00:09:19.553379 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 7 00:09:19.553390 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 00:09:19.553401 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 00:09:19.553411 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 00:09:19.553421 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 7 00:09:19.553432 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 7 00:09:19.553442 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 7 00:09:19.553454 systemd[1]: Mounting media.mount - External Media Directory...
Jul 7 00:09:19.553469 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:09:19.553481 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 7 00:09:19.553490 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 7 00:09:19.553500 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 7 00:09:19.553521 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 7 00:09:19.553535 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 00:09:19.553679 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 00:09:19.553692 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 7 00:09:19.553702 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 00:09:19.553711 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 00:09:19.553721 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 00:09:19.553731 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 7 00:09:19.553741 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 00:09:19.553754 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 7 00:09:19.553764 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jul 7 00:09:19.553775 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jul 7 00:09:19.553784 kernel: fuse: init (API version 7.39)
Jul 7 00:09:19.553794 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 00:09:19.553804 kernel: ACPI: bus type drm_connector registered
Jul 7 00:09:19.553812 kernel: loop: module loaded
Jul 7 00:09:19.553822 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 00:09:19.553832 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 00:09:19.553862 systemd-journald[1172]: Collecting audit messages is disabled.
Jul 7 00:09:19.556435 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 7 00:09:19.556453 systemd-journald[1172]: Journal started
Jul 7 00:09:19.556474 systemd-journald[1172]: Runtime Journal (/run/log/journal/c42475a7061a4ac5a73b5ee76b231864) is 4.8M, max 38.4M, 33.6M free.
Jul 7 00:09:19.565564 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 00:09:19.570569 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:09:19.573651 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 00:09:19.574839 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 7 00:09:19.575590 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 7 00:09:19.576195 systemd[1]: Mounted media.mount - External Media Directory.
Jul 7 00:09:19.580537 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 7 00:09:19.581155 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 7 00:09:19.581772 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 7 00:09:19.582507 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 7 00:09:19.583324 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 00:09:19.584069 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 7 00:09:19.584269 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 7 00:09:19.585029 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 00:09:19.585213 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 00:09:19.585947 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 00:09:19.586138 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 00:09:19.587237 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 00:09:19.587429 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 00:09:19.588186 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 7 00:09:19.588322 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 7 00:09:19.588976 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 00:09:19.589113 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 00:09:19.589999 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 00:09:19.590884 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 00:09:19.591818 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 7 00:09:19.601940 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 00:09:19.608668 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 7 00:09:19.614926 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 7 00:09:19.616694 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 7 00:09:19.627891 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 7 00:09:19.638823 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 7 00:09:19.643649 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 00:09:19.656749 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 7 00:09:19.658196 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 00:09:19.660436 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 00:09:19.678896 systemd-journald[1172]: Time spent on flushing to /var/log/journal/c42475a7061a4ac5a73b5ee76b231864 is 39.905ms for 1116 entries.
Jul 7 00:09:19.678896 systemd-journald[1172]: System Journal (/var/log/journal/c42475a7061a4ac5a73b5ee76b231864) is 8.0M, max 584.8M, 576.8M free.
Jul 7 00:09:19.730556 systemd-journald[1172]: Received client request to flush runtime journal.
Jul 7 00:09:19.671782 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 00:09:19.674832 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 00:09:19.677766 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 7 00:09:19.678298 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 7 00:09:19.685155 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 7 00:09:19.690788 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 7 00:09:19.697214 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 7 00:09:19.715277 udevadm[1225]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 7 00:09:19.725677 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 00:09:19.727169 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
Jul 7 00:09:19.727181 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
Jul 7 00:09:19.735335 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 7 00:09:19.739435 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 00:09:19.745763 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 7 00:09:19.779101 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 7 00:09:19.792751 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 00:09:19.807529 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
Jul 7 00:09:19.807943 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
Jul 7 00:09:19.812985 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 00:09:20.313346 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 7 00:09:20.322781 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 00:09:20.343919 systemd-udevd[1245]: Using default interface naming scheme 'v255'.
Jul 7 00:09:20.378127 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 00:09:20.394732 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 00:09:20.421806 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 7 00:09:20.462620 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jul 7 00:09:20.491728 kernel: mousedev: PS/2 mouse device common for all mice
Jul 7 00:09:20.508643 systemd[1]: Started systemd-userdbd.service - User Database Manager.
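The journald flush line above doubles as a small benchmark: 1116 entries flushed to /var/log/journal in 39.905 ms is on the order of 28,000 entries per second:

    entries, ms = 1116, 39.905
    print(f"{entries / (ms / 1000):,.0f} entries/s")   # ~27,966 entries/s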
Jul 7 00:09:20.524745 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 7 00:09:20.547586 kernel: ACPI: button: Power Button [PWRF]
Jul 7 00:09:20.555001 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Jul 7 00:09:20.555079 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:09:20.555207 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 00:09:20.563744 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 00:09:20.570699 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 00:09:20.578309 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 00:09:20.580622 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 7 00:09:20.580672 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 7 00:09:20.580731 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:09:20.587398 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 00:09:20.587635 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 00:09:20.590760 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 00:09:20.590920 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 00:09:20.592139 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 00:09:20.597603 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 00:09:20.599258 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 00:09:20.600773 systemd-networkd[1252]: lo: Link UP
Jul 7 00:09:20.600783 systemd-networkd[1252]: lo: Gained carrier
Jul 7 00:09:20.603363 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 00:09:20.605790 systemd-networkd[1252]: Enumeration completed
Jul 7 00:09:20.605898 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 00:09:20.607826 systemd-networkd[1252]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:09:20.607837 systemd-networkd[1252]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 00:09:20.609014 systemd-networkd[1252]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:09:20.609025 systemd-networkd[1252]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 00:09:20.610097 systemd-networkd[1252]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:09:20.610132 systemd-networkd[1252]: eth0: Link UP
Jul 7 00:09:20.610135 systemd-networkd[1252]: eth0: Gained carrier
Jul 7 00:09:20.610144 systemd-networkd[1252]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:09:20.612828 systemd-networkd[1252]: eth1: Link UP
Jul 7 00:09:20.612839 systemd-networkd[1252]: eth1: Gained carrier
Jul 7 00:09:20.612849 systemd-networkd[1252]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:09:20.621703 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 7 00:09:20.622719 systemd-networkd[1252]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:09:20.632577 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 7 00:09:20.635903 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jul 7 00:09:20.636066 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 7 00:09:20.641595 systemd-networkd[1252]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 7 00:09:20.658564 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1250)
Jul 7 00:09:20.671564 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jul 7 00:09:20.669606 systemd-networkd[1252]: eth0: DHCPv4 address 157.180.40.234/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jul 7 00:09:20.691584 kernel: EDAC MC: Ver: 3.0.0
Jul 7 00:09:20.707425 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jul 7 00:09:20.716949 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Jul 7 00:09:20.719563 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Jul 7 00:09:20.723588 kernel: Console: switching to colour dummy device 80x25
Jul 7 00:09:20.723651 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jul 7 00:09:20.723663 kernel: [drm] features: -context_init
Jul 7 00:09:20.729559 kernel: [drm] number of scanouts: 1
Jul 7 00:09:20.729595 kernel: [drm] number of cap sets: 0
Jul 7 00:09:20.732612 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Jul 7 00:09:20.735032 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:09:20.742569 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jul 7 00:09:20.742640 kernel: Console: switching to colour frame buffer device 160x50
Jul 7 00:09:20.746119 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:09:20.746355 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:09:20.751992 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jul 7 00:09:20.767749 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:09:20.772430 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:09:20.772692 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:09:20.776805 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:09:20.856452 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
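Both leases above are Hetzner-style /32 assignments: the address covers only itself, so the advertised gateway (172.31.1.1 for eth0) is not on any directly attached prefix and must be installed as an explicit on-link host route before the default route is usable. A quick check with the standard library:

    import ipaddress

    addr = ipaddress.ip_interface("157.180.40.234/32")
    gateway = ipaddress.ip_address("172.31.1.1")
    # False: the gateway is outside the /32, hence the need for an onlink route.
    print(gateway in addr.network)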
Jul 7 00:09:20.871984 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 7 00:09:20.878735 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 7 00:09:20.910946 lvm[1315]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 7 00:09:20.944370 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 7 00:09:20.945902 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 00:09:20.952965 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 7 00:09:20.967941 lvm[1318]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 7 00:09:21.004579 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 7 00:09:21.007440 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 00:09:21.008464 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 7 00:09:21.008506 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 00:09:21.008672 systemd[1]: Reached target machines.target - Containers.
Jul 7 00:09:21.011475 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 7 00:09:21.024028 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 7 00:09:21.027917 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 7 00:09:21.030813 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 00:09:21.038837 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 7 00:09:21.043810 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 7 00:09:21.051725 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 7 00:09:21.059869 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 7 00:09:21.083119 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 7 00:09:21.105447 kernel: loop0: detected capacity change from 0 to 221472
Jul 7 00:09:21.124838 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 7 00:09:21.125876 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 7 00:09:21.156449 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 7 00:09:21.189136 kernel: loop1: detected capacity change from 0 to 140768
Jul 7 00:09:21.238598 kernel: loop2: detected capacity change from 0 to 142488
Jul 7 00:09:21.331651 kernel: loop3: detected capacity change from 0 to 8
Jul 7 00:09:21.363060 kernel: loop4: detected capacity change from 0 to 221472
Jul 7 00:09:21.396285 kernel: loop5: detected capacity change from 0 to 140768
Jul 7 00:09:21.425144 kernel: loop6: detected capacity change from 0 to 142488
Jul 7 00:09:21.453591 kernel: loop7: detected capacity change from 0 to 8
Jul 7 00:09:21.458283 (sd-merge)[1339]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jul 7 00:09:21.459061 (sd-merge)[1339]: Merged extensions into '/usr'.
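The loop0-loop7 capacity changes are the sysext images being attached as loop devices (the repeated sizes suggest each of the four images is set up twice), after which sd-merge overlays them onto /usr. A hedged sketch of just the discovery half, scanning the directories systemd-sysext consults for *.raw images; the actual merge via loop devices and overlayfs is elided:

    import glob, os

    found = []
    for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
        found += sorted(glob.glob(os.path.join(d, "*.raw")))
    names = [os.path.splitext(os.path.basename(p))[0] for p in found]
    print("Using extensions", ", ".join(f"'{n}'" for n in names))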
Jul 7 00:09:21.476823 systemd[1]: Reloading requested from client PID 1326 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 7 00:09:21.476845 systemd[1]: Reloading...
Jul 7 00:09:21.636640 zram_generator::config[1376]: No configuration found.
Jul 7 00:09:21.749949 ldconfig[1322]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 7 00:09:21.755031 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 00:09:21.826122 systemd[1]: Reloading finished in 348 ms.
Jul 7 00:09:21.844031 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 7 00:09:21.849253 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 7 00:09:21.863803 systemd[1]: Starting ensure-sysext.service...
Jul 7 00:09:21.868726 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 00:09:21.876713 systemd[1]: Reloading requested from client PID 1417 ('systemctl') (unit ensure-sysext.service)...
Jul 7 00:09:21.876739 systemd[1]: Reloading...
Jul 7 00:09:21.908500 systemd-tmpfiles[1418]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 7 00:09:21.908859 systemd-tmpfiles[1418]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 7 00:09:21.910167 systemd-tmpfiles[1418]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 7 00:09:21.910435 systemd-tmpfiles[1418]: ACLs are not supported, ignoring.
Jul 7 00:09:21.910493 systemd-tmpfiles[1418]: ACLs are not supported, ignoring.
Jul 7 00:09:21.914062 systemd-tmpfiles[1418]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 00:09:21.914077 systemd-tmpfiles[1418]: Skipping /boot
Jul 7 00:09:21.922878 systemd-tmpfiles[1418]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 00:09:21.922890 systemd-tmpfiles[1418]: Skipping /boot
Jul 7 00:09:21.974570 zram_generator::config[1447]: No configuration found.
Jul 7 00:09:21.982794 systemd-networkd[1252]: eth1: Gained IPv6LL
Jul 7 00:09:22.085186 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 00:09:22.154244 systemd[1]: Reloading finished in 276 ms.
Jul 7 00:09:22.169949 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 7 00:09:22.192731 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 00:09:22.225470 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 7 00:09:22.240891 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 7 00:09:22.248793 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 7 00:09:22.263966 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 00:09:22.273985 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 7 00:09:22.284770 augenrules[1519]: No rules
Jul 7 00:09:22.290557 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 7 00:09:22.296194 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:09:22.296386 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 00:09:22.303650 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 00:09:22.319821 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 00:09:22.324960 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 00:09:22.325650 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 00:09:22.325797 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:09:22.333013 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 7 00:09:22.343003 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 00:09:22.343828 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 00:09:22.350821 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 00:09:22.350994 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 00:09:22.359670 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 00:09:22.360028 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 00:09:22.371248 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 7 00:09:22.382753 systemd[1]: Finished ensure-sysext.service.
Jul 7 00:09:22.389195 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:09:22.389352 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 00:09:22.393704 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 00:09:22.396404 systemd-resolved[1515]: Positive Trust Anchors:
Jul 7 00:09:22.396440 systemd-resolved[1515]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 00:09:22.396471 systemd-resolved[1515]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 00:09:22.403931 systemd-resolved[1515]: Using system hostname 'ci-4081-3-4-d-d476fda7c5'.
Jul 7 00:09:22.406958 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 00:09:22.412746 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 00:09:22.421684 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 00:09:22.425322 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 00:09:22.431198 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 7 00:09:22.443746 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 7 00:09:22.446274 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:09:22.446803 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 00:09:22.449361 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 7 00:09:22.451549 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 00:09:22.451699 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 00:09:22.453818 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 00:09:22.453949 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 00:09:22.456226 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 00:09:22.456362 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 00:09:22.458492 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 00:09:22.458686 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 00:09:22.468706 systemd[1]: Reached target network.target - Network.
Jul 7 00:09:22.471419 systemd[1]: Reached target network-online.target - Network is Online.
Jul 7 00:09:22.475682 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 00:09:22.476401 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 00:09:22.476487 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 00:09:22.476530 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 7 00:09:22.480907 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 7 00:09:22.532613 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 7 00:09:22.537602 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 00:09:22.539971 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 7 00:09:22.541776 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 7 00:09:22.543521 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 7 00:09:22.545289 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 7 00:09:22.545314 systemd[1]: Reached target paths.target - Path Units.
Jul 7 00:09:22.547120 systemd[1]: Reached target time-set.target - System Time Set.
Jul 7 00:09:22.548884 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 7 00:09:22.550053 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 7 00:09:22.551973 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 00:09:22.555407 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 7 00:09:22.559535 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 7 00:09:22.567445 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 7 00:09:22.571790 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 7 00:09:22.573853 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 00:09:22.575489 systemd[1]: Reached target basic.target - Basic System.
Jul 7 00:09:22.577579 systemd[1]: System is tainted: cgroupsv1
Jul 7 00:09:22.577633 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 7 00:09:22.577669 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 7 00:09:22.589713 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 7 00:09:22.596480 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 7 00:09:22.604722 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 7 00:09:22.611674 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 7 00:09:22.622850 systemd-networkd[1252]: eth0: Gained IPv6LL
Jul 7 00:09:22.624581 coreos-metadata[1567]: Jul 07 00:09:22.624 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Jul 7 00:09:22.626008 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 7 00:09:22.634607 coreos-metadata[1567]: Jul 07 00:09:22.631 INFO Fetch successful
Jul 7 00:09:22.634607 coreos-metadata[1567]: Jul 07 00:09:22.631 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Jul 7 00:09:22.634607 coreos-metadata[1567]: Jul 07 00:09:22.631 INFO Fetch successful
Jul 7 00:09:22.632722 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 7 00:09:22.636699 jq[1571]: false
Jul 7 00:09:22.642807 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:09:22.654782 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 7 00:09:22.663707 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 7 00:09:22.669348 dbus-daemon[1568]: [system] SELinux support is enabled
Jul 7 00:09:22.672047 extend-filesystems[1572]: Found loop4
Jul 7 00:09:22.672047 extend-filesystems[1572]: Found loop5
Jul 7 00:09:22.672047 extend-filesystems[1572]: Found loop6
Jul 7 00:09:22.672047 extend-filesystems[1572]: Found loop7
Jul 7 00:09:22.672047 extend-filesystems[1572]: Found sda
Jul 7 00:09:22.672047 extend-filesystems[1572]: Found sda1
Jul 7 00:09:22.672047 extend-filesystems[1572]: Found sda2
Jul 7 00:09:22.672047 extend-filesystems[1572]: Found sda3
Jul 7 00:09:22.672047 extend-filesystems[1572]: Found usr
Jul 7 00:09:22.672047 extend-filesystems[1572]: Found sda4
Jul 7 00:09:22.672047 extend-filesystems[1572]: Found sda6
Jul 7 00:09:22.672047 extend-filesystems[1572]: Found sda7
Jul 7 00:09:22.672047 extend-filesystems[1572]: Found sda9
Jul 7 00:09:22.672047 extend-filesystems[1572]: Checking size of /dev/sda9
Jul 7 00:09:22.675633 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 7 00:09:22.692050 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Jul 7 00:09:23.587908 systemd-timesyncd[1552]: Contacted time server 78.46.87.46:123 (0.flatcar.pool.ntp.org).
Jul 7 00:09:23.587965 systemd-timesyncd[1552]: Initial clock synchronization to Mon 2025-07-07 00:09:23.587723 UTC.
Jul 7 00:09:23.588538 systemd-resolved[1515]: Clock change detected. Flushing caches.
Jul 7 00:09:23.592397 extend-filesystems[1572]: Resized partition /dev/sda9
Jul 7 00:09:23.599798 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 7 00:09:23.603677 extend-filesystems[1599]: resize2fs 1.47.1 (20-May-2024)
Jul 7 00:09:23.617166 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Jul 7 00:09:23.620465 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 7 00:09:23.633872 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 7 00:09:23.636681 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 7 00:09:23.654935 systemd[1]: Starting update-engine.service - Update Engine...
Jul 7 00:09:23.668399 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1259)
Jul 7 00:09:23.672743 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 7 00:09:23.693196 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 7 00:09:23.715008 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 7 00:09:23.715308 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 7 00:09:23.722995 systemd[1]: motdgen.service: Deactivated successfully.
Jul 7 00:09:23.723300 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 7 00:09:23.733142 update_engine[1608]: I20250707 00:09:23.732986 1608 main.cc:92] Flatcar Update Engine starting
Jul 7 00:09:23.746517 update_engine[1608]: I20250707 00:09:23.735973 1608 update_check_scheduler.cc:74] Next update check in 11m38s
Jul 7 00:09:23.742848 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 7 00:09:23.746674 jq[1611]: true
Jul 7 00:09:23.755624 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 7 00:09:23.758990 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 7 00:09:23.769494 systemd-logind[1604]: New seat seat0.
Jul 7 00:09:23.818749 tar[1619]: linux-amd64/helm
Jul 7 00:09:23.817437 systemd-logind[1604]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 7 00:09:23.835027 jq[1621]: true
Jul 7 00:09:23.817455 systemd-logind[1604]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 7 00:09:23.819821 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 7 00:09:23.842053 (ntainerd)[1622]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 7 00:09:23.884460 systemd[1]: Started update-engine.service - Update Engine.
Jul 7 00:09:23.912044 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 7 00:09:23.912333 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 7 00:09:23.915146 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 7 00:09:23.915256 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 7 00:09:23.926713 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 7 00:09:23.931906 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 7 00:09:24.000755 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 7 00:09:24.010170 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 7 00:09:24.103649 locksmithd[1656]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 7 00:09:24.116532 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Jul 7 00:09:24.141560 sshd_keygen[1610]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 7 00:09:24.147440 extend-filesystems[1599]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jul 7 00:09:24.147440 extend-filesystems[1599]: old_desc_blocks = 1, new_desc_blocks = 5
Jul 7 00:09:24.147440 extend-filesystems[1599]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Jul 7 00:09:24.165683 extend-filesystems[1572]: Resized filesystem in /dev/sda9
Jul 7 00:09:24.165683 extend-filesystems[1572]: Found sr0
Jul 7 00:09:24.185298 bash[1655]: Updated "/home/core/.ssh/authorized_keys"
Jul 7 00:09:24.158210 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 7 00:09:24.158562 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 7 00:09:24.165069 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 7 00:09:24.182268 systemd[1]: Starting sshkeys.service...
Jul 7 00:09:24.226623 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 7 00:09:24.255320 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 7 00:09:24.261100 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 7 00:09:24.280436 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 7 00:09:24.309800 systemd[1]: issuegen.service: Deactivated successfully.
Jul 7 00:09:24.310055 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 7 00:09:24.333139 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 7 00:09:24.356414 coreos-metadata[1683]: Jul 07 00:09:24.356 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Jul 7 00:09:24.369759 coreos-metadata[1683]: Jul 07 00:09:24.369 INFO Fetch successful
Jul 7 00:09:24.378828 unknown[1683]: wrote ssh authorized keys file for user: core
Jul 7 00:09:24.386250 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 7 00:09:24.407294 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 7 00:09:24.425038 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 7 00:09:24.431020 systemd[1]: Reached target getty.target - Login Prompts.
Jul 7 00:09:24.445298 update-ssh-keys[1699]: Updated "/home/core/.ssh/authorized_keys"
Jul 7 00:09:24.447248 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 7 00:09:24.469382 systemd[1]: Finished sshkeys.service.
Jul 7 00:09:24.555939 containerd[1622]: time="2025-07-07T00:09:24.555843814Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jul 7 00:09:24.654853 containerd[1622]: time="2025-07-07T00:09:24.654689710Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 7 00:09:24.657071 containerd[1622]: time="2025-07-07T00:09:24.656973742Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 7 00:09:24.657071 containerd[1622]: time="2025-07-07T00:09:24.657061437Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 7 00:09:24.657138 containerd[1622]: time="2025-07-07T00:09:24.657094719Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 7 00:09:24.657365 containerd[1622]: time="2025-07-07T00:09:24.657335380Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 7 00:09:24.657386 containerd[1622]: time="2025-07-07T00:09:24.657370847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 7 00:09:24.657509 containerd[1622]: time="2025-07-07T00:09:24.657478599Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 7 00:09:24.657509 containerd[1622]: time="2025-07-07T00:09:24.657504096Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 7 00:09:24.658292 containerd[1622]: time="2025-07-07T00:09:24.657931879Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 7 00:09:24.658292 containerd[1622]: time="2025-07-07T00:09:24.657965612Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 7 00:09:24.658292 containerd[1622]: time="2025-07-07T00:09:24.657983054Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 7 00:09:24.658292 containerd[1622]: time="2025-07-07T00:09:24.657996610Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 7 00:09:24.658292 containerd[1622]: time="2025-07-07T00:09:24.658081690Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 7 00:09:24.664012 containerd[1622]: time="2025-07-07T00:09:24.663393278Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 7 00:09:24.664012 containerd[1622]: time="2025-07-07T00:09:24.663579547Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 7 00:09:24.664012 containerd[1622]: time="2025-07-07T00:09:24.663594044Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 7 00:09:24.664012 containerd[1622]: time="2025-07-07T00:09:24.663721002Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 7 00:09:24.664012 containerd[1622]: time="2025-07-07T00:09:24.663765686Z" level=info msg="metadata content store policy set" policy=shared
Jul 7 00:09:24.672858 containerd[1622]: time="2025-07-07T00:09:24.672800855Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 7 00:09:24.681911 containerd[1622]: time="2025-07-07T00:09:24.673164216Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 7 00:09:24.681911 containerd[1622]: time="2025-07-07T00:09:24.673188502Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 7 00:09:24.681911 containerd[1622]: time="2025-07-07T00:09:24.673206446Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 7 00:09:24.681911 containerd[1622]: time="2025-07-07T00:09:24.673224970Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 7 00:09:24.681911 containerd[1622]: time="2025-07-07T00:09:24.673451285Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 7 00:09:24.681911 containerd[1622]: time="2025-07-07T00:09:24.673861554Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 7 00:09:24.681911 containerd[1622]: time="2025-07-07T00:09:24.673950109Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 7 00:09:24.681911 containerd[1622]: time="2025-07-07T00:09:24.673965679Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 7 00:09:24.681911 containerd[1622]: time="2025-07-07T00:09:24.673979315Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 7 00:09:24.681911 containerd[1622]: time="2025-07-07T00:09:24.673993541Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 7 00:09:24.681911 containerd[1622]: time="2025-07-07T00:09:24.674009521Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 7 00:09:24.681911 containerd[1622]: time="2025-07-07T00:09:24.674023126Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 7 00:09:24.681911 containerd[1622]: time="2025-07-07T00:09:24.674037974Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 7 00:09:24.681911 containerd[1622]: time="2025-07-07T00:09:24.674055307Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 7 00:09:24.682332 containerd[1622]: time="2025-07-07T00:09:24.674077669Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 7 00:09:24.682332 containerd[1622]: time="2025-07-07T00:09:24.674091795Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 7 00:09:24.682332 containerd[1622]: time="2025-07-07T00:09:24.674106233Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 7 00:09:24.682332 containerd[1622]: time="2025-07-07T00:09:24.674131700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 7 00:09:24.682332 containerd[1622]: time="2025-07-07T00:09:24.674147610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 7 00:09:24.682332 containerd[1622]: time="2025-07-07T00:09:24.674163079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 7 00:09:24.682332 containerd[1622]: time="2025-07-07T00:09:24.674181764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 7 00:09:24.682332 containerd[1622]: time="2025-07-07T00:09:24.674197052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 7 00:09:24.682332 containerd[1622]: time="2025-07-07T00:09:24.674213293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 7 00:09:24.682332 containerd[1622]: time="2025-07-07T00:09:24.674226979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 7 00:09:24.682332 containerd[1622]: time="2025-07-07T00:09:24.674243439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 7 00:09:24.682332 containerd[1622]: time="2025-07-07T00:09:24.674257717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 7 00:09:24.682332 containerd[1622]: time="2025-07-07T00:09:24.674275510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 7 00:09:24.682332 containerd[1622]: time="2025-07-07T00:09:24.674288454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 7 00:09:24.682617 containerd[1622]: time="2025-07-07T00:09:24.674301588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 7 00:09:24.682617 containerd[1622]: time="2025-07-07T00:09:24.674315365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 7 00:09:24.682617 containerd[1622]: time="2025-07-07T00:09:24.674335051Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 7 00:09:24.682617 containerd[1622]: time="2025-07-07T00:09:24.674359036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 7 00:09:24.682617 containerd[1622]: time="2025-07-07T00:09:24.674371019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 7 00:09:24.682617 containerd[1622]: time="2025-07-07T00:09:24.674382751Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 7 00:09:24.682617 containerd[1622]: time="2025-07-07T00:09:24.674431873Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 7 00:09:24.682617 containerd[1622]: time="2025-07-07T00:09:24.674451890Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 7 00:09:24.682617 containerd[1622]: time="2025-07-07T00:09:24.674462310Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 7 00:09:24.682617 containerd[1622]: time="2025-07-07T00:09:24.674474924Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 7 00:09:24.682617 containerd[1622]: time="2025-07-07T00:09:24.674483690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 7 00:09:24.682617 containerd[1622]: time="2025-07-07T00:09:24.674497396Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 7 00:09:24.682617 containerd[1622]: time="2025-07-07T00:09:24.674510600Z" level=info msg="NRI interface is disabled by configuration."
Jul 7 00:09:24.682617 containerd[1622]: time="2025-07-07T00:09:24.674521491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 7 00:09:24.684698 containerd[1622]: time="2025-07-07T00:09:24.683199610Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 7 00:09:24.684698 containerd[1622]: time="2025-07-07T00:09:24.683288156Z" level=info msg="Connect containerd service"
Jul 7 00:09:24.684698 containerd[1622]: time="2025-07-07T00:09:24.683358118Z" level=info msg="using legacy CRI server"
Jul 7 00:09:24.684698 containerd[1622]: time="2025-07-07T00:09:24.683366554Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 7 00:09:24.684698 containerd[1622]: time="2025-07-07T00:09:24.683769389Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 7 00:09:24.685161 containerd[1622]: time="2025-07-07T00:09:24.685141321Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 7 00:09:24.685886 containerd[1622]: time="2025-07-07T00:09:24.685871299Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 7 00:09:24.685973 containerd[1622]: time="2025-07-07T00:09:24.685963022Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 7 00:09:24.686065 containerd[1622]: time="2025-07-07T00:09:24.686038283Z" level=info msg="Start subscribing containerd event"
Jul 7 00:09:24.686117 containerd[1622]: time="2025-07-07T00:09:24.686110207Z" level=info msg="Start recovering state"
Jul 7 00:09:24.686215 containerd[1622]: time="2025-07-07T00:09:24.686207220Z" level=info msg="Start event monitor"
Jul 7 00:09:24.686260 containerd[1622]: time="2025-07-07T00:09:24.686252825Z" level=info msg="Start snapshots syncer"
Jul 7 00:09:24.686299 containerd[1622]: time="2025-07-07T00:09:24.686292188Z" level=info msg="Start cni network conf syncer for default"
Jul 7 00:09:24.686343 containerd[1622]: time="2025-07-07T00:09:24.686335720Z" level=info msg="Start streaming server"
Jul 7 00:09:24.686902 systemd[1]: Started containerd.service - containerd container runtime.
Jul 7 00:09:24.687810 containerd[1622]: time="2025-07-07T00:09:24.686650951Z" level=info msg="containerd successfully booted in 0.134344s"
Jul 7 00:09:25.073438 tar[1619]: linux-amd64/LICENSE
Jul 7 00:09:25.073438 tar[1619]: linux-amd64/README.md
Jul 7 00:09:25.095403 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 7 00:09:25.969812 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:09:25.972375 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 7 00:09:25.978540 systemd[1]: Startup finished in 8.573s (kernel) + 6.521s (userspace) = 15.094s.
Jul 7 00:09:25.985108 (kubelet)[1725]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 00:09:27.069380 kubelet[1725]: E0707 00:09:27.069253 1725 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 00:09:27.073145 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 00:09:27.073437 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 00:09:37.262366 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 7 00:09:37.269876 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:09:37.421891 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:09:37.421989 (kubelet)[1749]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 00:09:37.488933 kubelet[1749]: E0707 00:09:37.488819 1749 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 00:09:37.493219 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 00:09:37.493554 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 00:09:47.513635 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 7 00:09:47.527149 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:09:47.708848 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:09:47.721023 (kubelet)[1769]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 00:09:47.775590 kubelet[1769]: E0707 00:09:47.775253 1769 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 00:09:47.779743 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 00:09:47.779938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 00:09:58.014235 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 7 00:09:58.022209 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:09:58.225843 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:09:58.229445 (kubelet)[1790]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 00:09:58.300827 kubelet[1790]: E0707 00:09:58.300621 1790 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 00:09:58.305522 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 00:09:58.306354 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 00:10:08.512071 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 7 00:10:08.518918 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:10:08.730901 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:10:08.737784 (kubelet)[1810]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 00:10:08.788794 kubelet[1810]: E0707 00:10:08.788588 1810 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 00:10:08.792991 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 00:10:08.793164 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 00:10:08.930581 update_engine[1608]: I20250707 00:10:08.930380 1608 update_attempter.cc:509] Updating boot flags...
Jul 7 00:10:09.022747 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1828)
Jul 7 00:10:09.097962 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1829)
Jul 7 00:10:19.012058 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jul 7 00:10:19.024034 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:10:19.187811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:10:19.191019 (kubelet)[1849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 00:10:19.229815 kubelet[1849]: E0707 00:10:19.229739 1849 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 00:10:19.232861 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 00:10:19.233094 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 00:10:29.261999 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jul 7 00:10:29.268952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:10:29.433263 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:10:29.436381 (kubelet)[1870]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 00:10:29.497119 kubelet[1870]: E0707 00:10:29.496954 1870 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 00:10:29.500165 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 00:10:29.501284 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 00:10:39.511939 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Jul 7 00:10:39.518955 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:10:39.702877 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:10:39.706873 (kubelet)[1889]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 00:10:39.785405 kubelet[1889]: E0707 00:10:39.785229 1889 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 00:10:39.789627 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 00:10:39.791182 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 00:10:50.012062 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Jul 7 00:10:50.018914 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:10:50.203861 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:10:50.205550 (kubelet)[1909]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 00:10:50.273822 kubelet[1909]: E0707 00:10:50.273488 1909 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 00:10:50.277600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 00:10:50.277886 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 00:11:00.512077 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Jul 7 00:11:00.519509 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:11:00.712321 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:11:00.715283 (kubelet)[1930]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 00:11:00.752090 kubelet[1930]: E0707 00:11:00.751992 1930 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 00:11:00.756287 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 00:11:00.756489 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 00:11:07.714209 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 7 00:11:07.721206 systemd[1]: Started sshd@0-157.180.40.234:22-147.75.109.163:59462.service - OpenSSH per-connection server daemon (147.75.109.163:59462).
Jul 7 00:11:08.765472 sshd[1939]: Accepted publickey for core from 147.75.109.163 port 59462 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 7 00:11:08.768745 sshd[1939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:11:08.784069 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 7 00:11:08.792151 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 7 00:11:08.797404 systemd-logind[1604]: New session 1 of user core.
Jul 7 00:11:08.831978 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 7 00:11:08.840095 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 7 00:11:08.857242 (systemd)[1945]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 7 00:11:09.014725 systemd[1945]: Queued start job for default target default.target.
Jul 7 00:11:09.015105 systemd[1945]: Created slice app.slice - User Application Slice.
Jul 7 00:11:09.015123 systemd[1945]: Reached target paths.target - Paths.
Jul 7 00:11:09.015135 systemd[1945]: Reached target timers.target - Timers.
Jul 7 00:11:09.024749 systemd[1945]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 7 00:11:09.036284 systemd[1945]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 7 00:11:09.036367 systemd[1945]: Reached target sockets.target - Sockets.
Jul 7 00:11:09.036386 systemd[1945]: Reached target basic.target - Basic System.
Jul 7 00:11:09.036434 systemd[1945]: Reached target default.target - Main User Target.
Jul 7 00:11:09.036489 systemd[1945]: Startup finished in 167ms.
Jul 7 00:11:09.036748 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 7 00:11:09.040421 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 7 00:11:09.763248 systemd[1]: Started sshd@1-157.180.40.234:22-147.75.109.163:59472.service - OpenSSH per-connection server daemon (147.75.109.163:59472).
Jul 7 00:11:10.762108 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Jul 7 00:11:10.772815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:11:10.788315 sshd[1957]: Accepted publickey for core from 147.75.109.163 port 59472 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 7 00:11:10.792918 sshd[1957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:11:10.811668 systemd-logind[1604]: New session 2 of user core.
Jul 7 00:11:10.815894 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 7 00:11:10.980972 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:11:10.996375 (kubelet)[1973]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 00:11:11.071192 kubelet[1973]: E0707 00:11:11.070924 1973 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 00:11:11.075917 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 00:11:11.076155 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 00:11:11.500580 sshd[1957]: pam_unix(sshd:session): session closed for user core
Jul 7 00:11:11.506343 systemd[1]: sshd@1-157.180.40.234:22-147.75.109.163:59472.service: Deactivated successfully.
Jul 7 00:11:11.511268 systemd[1]: session-2.scope: Deactivated successfully.
Jul 7 00:11:11.512488 systemd-logind[1604]: Session 2 logged out. Waiting for processes to exit.
Jul 7 00:11:11.514050 systemd-logind[1604]: Removed session 2.
Jul 7 00:11:11.667056 systemd[1]: Started sshd@2-157.180.40.234:22-147.75.109.163:59482.service - OpenSSH per-connection server daemon (147.75.109.163:59482).
Jul 7 00:11:12.657559 sshd[1985]: Accepted publickey for core from 147.75.109.163 port 59482 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 7 00:11:12.659730 sshd[1985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:11:12.666769 systemd-logind[1604]: New session 3 of user core.
Jul 7 00:11:12.679461 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 7 00:11:13.344132 sshd[1985]: pam_unix(sshd:session): session closed for user core
Jul 7 00:11:13.351505 systemd[1]: sshd@2-157.180.40.234:22-147.75.109.163:59482.service: Deactivated successfully.
Jul 7 00:11:13.352591 systemd-logind[1604]: Session 3 logged out. Waiting for processes to exit.
Jul 7 00:11:13.357604 systemd[1]: session-3.scope: Deactivated successfully.
Jul 7 00:11:13.359017 systemd-logind[1604]: Removed session 3.
Jul 7 00:11:13.539400 systemd[1]: Started sshd@3-157.180.40.234:22-147.75.109.163:59486.service - OpenSSH per-connection server daemon (147.75.109.163:59486).
Jul 7 00:11:14.568335 sshd[1993]: Accepted publickey for core from 147.75.109.163 port 59486 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 7 00:11:14.573380 sshd[1993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:11:14.581526 systemd-logind[1604]: New session 4 of user core.
Jul 7 00:11:14.595255 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 7 00:11:15.282093 sshd[1993]: pam_unix(sshd:session): session closed for user core
Jul 7 00:11:15.289017 systemd[1]: sshd@3-157.180.40.234:22-147.75.109.163:59486.service: Deactivated successfully.
Jul 7 00:11:15.289194 systemd-logind[1604]: Session 4 logged out. Waiting for processes to exit.
Jul 7 00:11:15.295146 systemd[1]: session-4.scope: Deactivated successfully.
Jul 7 00:11:15.296755 systemd-logind[1604]: Removed session 4.
Jul 7 00:11:15.452066 systemd[1]: Started sshd@4-157.180.40.234:22-147.75.109.163:59494.service - OpenSSH per-connection server daemon (147.75.109.163:59494).
Jul 7 00:11:16.475649 sshd[2001]: Accepted publickey for core from 147.75.109.163 port 59494 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 7 00:11:16.478130 sshd[2001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:11:16.487914 systemd-logind[1604]: New session 5 of user core.
Jul 7 00:11:16.498289 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 7 00:11:17.033337 sudo[2005]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 7 00:11:17.033900 sudo[2005]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 00:11:17.053875 sudo[2005]: pam_unix(sudo:session): session closed for user root
Jul 7 00:11:17.218497 sshd[2001]: pam_unix(sshd:session): session closed for user core
Jul 7 00:11:17.224510 systemd[1]: sshd@4-157.180.40.234:22-147.75.109.163:59494.service: Deactivated successfully.
Jul 7 00:11:17.229648 systemd-logind[1604]: Session 5 logged out. Waiting for processes to exit.
Jul 7 00:11:17.230014 systemd[1]: session-5.scope: Deactivated successfully.
Jul 7 00:11:17.232325 systemd-logind[1604]: Removed session 5.
Jul 7 00:11:17.393106 systemd[1]: Started sshd@5-157.180.40.234:22-147.75.109.163:42874.service - OpenSSH per-connection server daemon (147.75.109.163:42874).
Jul 7 00:11:18.427786 sshd[2010]: Accepted publickey for core from 147.75.109.163 port 42874 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 7 00:11:18.430763 sshd[2010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:11:18.440410 systemd-logind[1604]: New session 6 of user core.
Jul 7 00:11:18.447091 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 7 00:11:18.973760 sudo[2015]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 7 00:11:18.975081 sudo[2015]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 00:11:18.981161 sudo[2015]: pam_unix(sudo:session): session closed for user root
Jul 7 00:11:18.990330 sudo[2014]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 7 00:11:18.990820 sudo[2014]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 00:11:19.013791 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 7 00:11:19.017756 auditctl[2018]: No rules
Jul 7 00:11:19.018187 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 00:11:19.018546 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 7 00:11:19.028051 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 7 00:11:19.060615 augenrules[2037]: No rules
Jul 7 00:11:19.061855 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 7 00:11:19.065380 sudo[2014]: pam_unix(sudo:session): session closed for user root
Jul 7 00:11:19.231480 sshd[2010]: pam_unix(sshd:session): session closed for user core
Jul 7 00:11:19.237514 systemd[1]: sshd@5-157.180.40.234:22-147.75.109.163:42874.service: Deactivated successfully.
Jul 7 00:11:19.242410 systemd-logind[1604]: Session 6 logged out. Waiting for processes to exit.
Jul 7 00:11:19.242973 systemd[1]: session-6.scope: Deactivated successfully.
Jul 7 00:11:19.245007 systemd-logind[1604]: Removed session 6.
Jul 7 00:11:19.404082 systemd[1]: Started sshd@6-157.180.40.234:22-147.75.109.163:42890.service - OpenSSH per-connection server daemon (147.75.109.163:42890). Jul 7 00:11:20.438009 sshd[2046]: Accepted publickey for core from 147.75.109.163 port 42890 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:11:20.440395 sshd[2046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:11:20.449403 systemd-logind[1604]: New session 7 of user core. Jul 7 00:11:20.460171 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 00:11:20.982626 sudo[2050]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 00:11:20.983239 sudo[2050]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:11:21.261994 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jul 7 00:11:21.271380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:11:21.487882 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 00:11:21.488727 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:11:21.498050 (dockerd)[2078]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 00:11:21.498068 (kubelet)[2079]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:11:21.580966 kubelet[2079]: E0707 00:11:21.580199 2079 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:11:21.582384 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:11:21.582546 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:11:22.382808 dockerd[2078]: time="2025-07-07T00:11:22.382623129Z" level=info msg="Starting up" Jul 7 00:11:22.649824 dockerd[2078]: time="2025-07-07T00:11:22.649034507Z" level=info msg="Loading containers: start." Jul 7 00:11:22.815752 kernel: Initializing XFRM netlink socket Jul 7 00:11:22.938374 systemd-networkd[1252]: docker0: Link UP Jul 7 00:11:22.963377 dockerd[2078]: time="2025-07-07T00:11:22.963296978Z" level=info msg="Loading containers: done." Jul 7 00:11:23.002053 dockerd[2078]: time="2025-07-07T00:11:23.001926411Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 00:11:23.002457 dockerd[2078]: time="2025-07-07T00:11:23.002217386Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 7 00:11:23.002514 dockerd[2078]: time="2025-07-07T00:11:23.002466513Z" level=info msg="Daemon has completed initialization" Jul 7 00:11:23.057255 dockerd[2078]: time="2025-07-07T00:11:23.057114325Z" level=info msg="API listen on /run/docker.sock" Jul 7 00:11:23.058061 systemd[1]: Started docker.service - Docker Application Container Engine. 
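The overlay2 warning in the docker startup above means dockerd disabled its native overlayfs diff path because the kernel was built with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, which can produce redirected layers the fast diff would misread; image builds still work, only slower. Both facts can be checked on a host like this one (the second command assumes the kernel exposes its build config at /proc/config.gz):

  docker info --format '{{.Driver}}'                       # overlay2, as logged
  zgrep CONFIG_OVERLAY_FS_REDIRECT_DIR= /proc/config.gz    # =y triggers the warning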
Jul 7 00:11:24.682817 containerd[1622]: time="2025-07-07T00:11:24.682744851Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 7 00:11:25.419375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1451702605.mount: Deactivated successfully. Jul 7 00:11:26.779638 containerd[1622]: time="2025-07-07T00:11:26.779552876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:26.781127 containerd[1622]: time="2025-07-07T00:11:26.781080811Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077838" Jul 7 00:11:26.782743 containerd[1622]: time="2025-07-07T00:11:26.782679759Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:26.786336 containerd[1622]: time="2025-07-07T00:11:26.786254053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:26.787725 containerd[1622]: time="2025-07-07T00:11:26.787333096Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 2.104525988s" Jul 7 00:11:26.787725 containerd[1622]: time="2025-07-07T00:11:26.787388279Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 7 00:11:26.791017 containerd[1622]: time="2025-07-07T00:11:26.790980846Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 7 00:11:28.208786 containerd[1622]: time="2025-07-07T00:11:28.208680115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:28.210396 containerd[1622]: time="2025-07-07T00:11:28.210346140Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713316" Jul 7 00:11:28.212052 containerd[1622]: time="2025-07-07T00:11:28.212006193Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:28.215468 containerd[1622]: time="2025-07-07T00:11:28.215389617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:28.216598 containerd[1622]: time="2025-07-07T00:11:28.216453252Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.425435607s" Jul 7 
00:11:28.216598 containerd[1622]: time="2025-07-07T00:11:28.216486585Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 7 00:11:28.217468 containerd[1622]: time="2025-07-07T00:11:28.217318855Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 7 00:11:29.278492 containerd[1622]: time="2025-07-07T00:11:29.278424777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:29.280100 containerd[1622]: time="2025-07-07T00:11:29.280031910Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783693" Jul 7 00:11:29.281865 containerd[1622]: time="2025-07-07T00:11:29.281823771Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:29.286113 containerd[1622]: time="2025-07-07T00:11:29.285021958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:29.286113 containerd[1622]: time="2025-07-07T00:11:29.285996806Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.068644978s" Jul 7 00:11:29.286113 containerd[1622]: time="2025-07-07T00:11:29.286024468Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 7 00:11:29.286722 containerd[1622]: time="2025-07-07T00:11:29.286570452Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 7 00:11:30.536820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1697108422.mount: Deactivated successfully. 
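The PullImage and ImageCreate pairs in this stretch are containerd pulling the control-plane images on the kubelet's behalf; the tmpmount units are containerd's scratch mounts for unpacking layers. The same pull can be reproduced through the CRI socket with crictl (crictl itself is assumed installed; the endpoint is containerd's stock default):

  crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
      pull registry.k8s.io/kube-proxy:v1.31.10
  crictl images | grep kube-proxy    # image id matches the sha256 logged below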
Jul 7 00:11:30.902595 containerd[1622]: time="2025-07-07T00:11:30.902389131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:30.903678 containerd[1622]: time="2025-07-07T00:11:30.903582509Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383971" Jul 7 00:11:30.904912 containerd[1622]: time="2025-07-07T00:11:30.904855005Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:30.907144 containerd[1622]: time="2025-07-07T00:11:30.907092350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:30.908216 containerd[1622]: time="2025-07-07T00:11:30.907779570Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.62099084s" Jul 7 00:11:30.908216 containerd[1622]: time="2025-07-07T00:11:30.907822971Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 7 00:11:30.908538 containerd[1622]: time="2025-07-07T00:11:30.908486455Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 00:11:31.504226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount592685909.mount: Deactivated successfully. Jul 7 00:11:31.761429 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jul 7 00:11:31.775187 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:11:31.938856 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:11:31.960040 (kubelet)[2318]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:11:32.028249 kubelet[2318]: E0707 00:11:32.028004 2318 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:11:32.031525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:11:32.031837 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
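The "restart counter is at 12" line shows systemd's Restart= policy carrying the still-unconfigured kubelet through the whole image-pull phase, one retry roughly every ten seconds (counters 11 and 12 land exactly 10 s apart in this journal). The effective policy can be read from the unit; the values in the comments are typical for a Flatcar kubelet unit rather than confirmed by this log:

  systemctl show kubelet -p Restart -p RestartUSec
  #   Restart=always
  #   RestartUSec=10s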
Jul 7 00:11:32.712215 containerd[1622]: time="2025-07-07T00:11:32.712095552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:32.714169 containerd[1622]: time="2025-07-07T00:11:32.714118786Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335" Jul 7 00:11:32.716114 containerd[1622]: time="2025-07-07T00:11:32.716053344Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:32.722679 containerd[1622]: time="2025-07-07T00:11:32.722596754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:32.725052 containerd[1622]: time="2025-07-07T00:11:32.724421977Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.815895938s" Jul 7 00:11:32.725052 containerd[1622]: time="2025-07-07T00:11:32.724465768Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 7 00:11:32.726121 containerd[1622]: time="2025-07-07T00:11:32.726092419Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 00:11:33.224316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3738628211.mount: Deactivated successfully. 
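registry.k8s.io/pause:3.10, pulled just above, is the sandbox ("infra") image: a tiny process that holds each pod's namespaces open. Which pause tag the runtime actually uses for new sandboxes is set in containerd's config, which is why a different tag (3.8) appears later in this journal when the static pods are created. A quick check, assuming the stock config path (the value in the comment is an assumption):

  grep sandbox_image /etc/containerd/config.toml
  #   sandbox_image = "registry.k8s.io/pause:3.8"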
Jul 7 00:11:33.233765 containerd[1622]: time="2025-07-07T00:11:33.233636245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:33.234936 containerd[1622]: time="2025-07-07T00:11:33.234887743Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Jul 7 00:11:33.236640 containerd[1622]: time="2025-07-07T00:11:33.236594723Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:33.240319 containerd[1622]: time="2025-07-07T00:11:33.240236953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:33.242001 containerd[1622]: time="2025-07-07T00:11:33.241000474Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 514.87355ms" Jul 7 00:11:33.242001 containerd[1622]: time="2025-07-07T00:11:33.241037403Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 00:11:33.242157 containerd[1622]: time="2025-07-07T00:11:33.242049852Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 7 00:11:33.805827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4216663816.mount: Deactivated successfully. Jul 7 00:11:34.247604 systemd[1]: Started sshd@7-157.180.40.234:22-80.94.95.116:50636.service - OpenSSH per-connection server daemon (80.94.95.116:50636). 
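The sshd service instance that opens here for 80.94.95.116 is unrelated to the provisioning session from 147.75.109.163: it closes a few lines below with "Connection closed by authenticating user root ... [preauth]", the signature of an Internet-wide scanner probing root login and failing before authentication. The policy it ran into can be dumped from sshd (key names as printed by sshd -T; the values shown are assumptions consistent with the preauth close):

  sudo sshd -T | grep -E 'permitrootlogin|passwordauthentication'
  #   permitrootlogin prohibit-password     (value assumed)
  #   passwordauthentication no             (value assumed)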
Jul 7 00:11:35.338535 containerd[1622]: time="2025-07-07T00:11:35.338448875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:35.340067 containerd[1622]: time="2025-07-07T00:11:35.340023297Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780083" Jul 7 00:11:35.341253 containerd[1622]: time="2025-07-07T00:11:35.341212347Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:35.344590 containerd[1622]: time="2025-07-07T00:11:35.344553953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:35.345753 containerd[1622]: time="2025-07-07T00:11:35.345580509Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.10350567s" Jul 7 00:11:35.345753 containerd[1622]: time="2025-07-07T00:11:35.345611076Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 7 00:11:35.826201 sshd[2410]: Connection closed by authenticating user root 80.94.95.116 port 50636 [preauth] Jul 7 00:11:35.830638 systemd[1]: sshd@7-157.180.40.234:22-80.94.95.116:50636.service: Deactivated successfully. Jul 7 00:11:37.912995 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:11:37.924031 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:11:37.978474 systemd[1]: Reloading requested from client PID 2459 ('systemctl') (unit session-7.scope)... Jul 7 00:11:37.978500 systemd[1]: Reloading... Jul 7 00:11:38.128748 zram_generator::config[2496]: No configuration found. Jul 7 00:11:38.228093 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:11:38.310529 systemd[1]: Reloading finished in 331 ms. Jul 7 00:11:38.348197 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 00:11:38.348255 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 00:11:38.348696 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:11:38.355110 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:11:38.487908 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:11:38.498206 (kubelet)[2563]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:11:38.546113 kubelet[2563]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
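The Flag deprecation warning closing this stretch (two more of the same shape follow immediately below) marks the moment the kubelet finally starts with real arguments: PID 2563 has its config file now, and the flags being warned about are the ones that belong inside it. A minimal fragment showing where two of them would move (field names from the kubelet.config.k8s.io/v1beta1 KubeletConfiguration API; the endpoint value and target path are illustrative, not read from this host):

  cat <<'EOF' > /tmp/kubelet-config-fragment.yaml
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
  volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
  EOF

The volumePluginDir value matches the Flexvolume path this same kubelet recreates a few lines below.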
Jul 7 00:11:38.546113 kubelet[2563]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 00:11:38.546113 kubelet[2563]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:11:38.546871 kubelet[2563]: I0707 00:11:38.546151 2563 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:11:39.051366 kubelet[2563]: I0707 00:11:39.051294 2563 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 00:11:39.051366 kubelet[2563]: I0707 00:11:39.051335 2563 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:11:39.053488 kubelet[2563]: I0707 00:11:39.053454 2563 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 00:11:39.083603 kubelet[2563]: I0707 00:11:39.083518 2563 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:11:39.086810 kubelet[2563]: E0707 00:11:39.086756 2563 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://157.180.40.234:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 157.180.40.234:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:11:39.094318 kubelet[2563]: E0707 00:11:39.094276 2563 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 00:11:39.094318 kubelet[2563]: I0707 00:11:39.094312 2563 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 00:11:39.110701 kubelet[2563]: I0707 00:11:39.110361 2563 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 00:11:39.112633 kubelet[2563]: I0707 00:11:39.112590 2563 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 00:11:39.112779 kubelet[2563]: I0707 00:11:39.112741 2563 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:11:39.113017 kubelet[2563]: I0707 00:11:39.112775 2563 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-4-d-d476fda7c5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 7 00:11:39.113017 kubelet[2563]: I0707 00:11:39.113013 2563 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:11:39.113213 kubelet[2563]: I0707 00:11:39.113022 2563 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 00:11:39.113213 kubelet[2563]: I0707 00:11:39.113149 2563 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:11:39.116249 kubelet[2563]: I0707 00:11:39.116190 2563 kubelet.go:408] "Attempting to sync node with API server" Jul 7 00:11:39.116249 kubelet[2563]: I0707 00:11:39.116213 2563 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:11:39.119374 kubelet[2563]: I0707 00:11:39.119198 2563 kubelet.go:314] "Adding apiserver pod source" Jul 7 00:11:39.119374 kubelet[2563]: I0707 00:11:39.119220 2563 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:11:39.120327 kubelet[2563]: W0707 00:11:39.119891 2563 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://157.180.40.234:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-d-d476fda7c5&limit=500&resourceVersion=0": dial tcp 157.180.40.234:6443: connect: connection refused Jul 7 00:11:39.120327 kubelet[2563]: E0707 00:11:39.120005 2563 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://157.180.40.234:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-d-d476fda7c5&limit=500&resourceVersion=0\": dial tcp 157.180.40.234:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:11:39.123479 kubelet[2563]: W0707 00:11:39.123020 2563 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://157.180.40.234:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 157.180.40.234:6443: connect: connection refused Jul 7 00:11:39.123479 kubelet[2563]: E0707 00:11:39.123054 2563 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://157.180.40.234:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 157.180.40.234:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:11:39.123638 kubelet[2563]: I0707 00:11:39.123627 2563 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 00:11:39.126890 kubelet[2563]: I0707 00:11:39.126877 2563 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 00:11:39.127782 kubelet[2563]: W0707 00:11:39.127771 2563 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 00:11:39.129284 kubelet[2563]: I0707 00:11:39.129270 2563 server.go:1274] "Started kubelet" Jul 7 00:11:39.130446 kubelet[2563]: I0707 00:11:39.130359 2563 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:11:39.132574 kubelet[2563]: I0707 00:11:39.132338 2563 server.go:449] "Adding debug handlers to kubelet server" Jul 7 00:11:39.132817 kubelet[2563]: I0707 00:11:39.132785 2563 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:11:39.133211 kubelet[2563]: I0707 00:11:39.133200 2563 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:11:39.134882 kubelet[2563]: E0707 00:11:39.133420 2563 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://157.180.40.234:6443/api/v1/namespaces/default/events\": dial tcp 157.180.40.234:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-4-d-d476fda7c5.184fcfa7ac65aa33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-4-d-d476fda7c5,UID:ci-4081-3-4-d-d476fda7c5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-4-d-d476fda7c5,},FirstTimestamp:2025-07-07 00:11:39.129244211 +0000 UTC m=+0.626776481,LastTimestamp:2025-07-07 00:11:39.129244211 +0000 UTC m=+0.626776481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-4-d-d476fda7c5,}" Jul 7 00:11:39.139924 kubelet[2563]: I0707 00:11:39.139744 2563 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:11:39.139976 kubelet[2563]: I0707 00:11:39.139936 2563 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:11:39.144965 kubelet[2563]: I0707 00:11:39.144947 2563 
volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 00:11:39.145240 kubelet[2563]: E0707 00:11:39.145226 2563 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-d-d476fda7c5\" not found" Jul 7 00:11:39.147214 kubelet[2563]: I0707 00:11:39.147201 2563 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 00:11:39.147405 kubelet[2563]: I0707 00:11:39.147317 2563 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:11:39.148317 kubelet[2563]: I0707 00:11:39.148285 2563 factory.go:221] Registration of the systemd container factory successfully Jul 7 00:11:39.148694 kubelet[2563]: I0707 00:11:39.148433 2563 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:11:39.149398 kubelet[2563]: E0707 00:11:39.149372 2563 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.40.234:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-d-d476fda7c5?timeout=10s\": dial tcp 157.180.40.234:6443: connect: connection refused" interval="200ms" Jul 7 00:11:39.149905 kubelet[2563]: E0707 00:11:39.149888 2563 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:11:39.151043 kubelet[2563]: W0707 00:11:39.150994 2563 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://157.180.40.234:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.180.40.234:6443: connect: connection refused Jul 7 00:11:39.151509 kubelet[2563]: E0707 00:11:39.151485 2563 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://157.180.40.234:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 157.180.40.234:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:11:39.152688 kubelet[2563]: I0707 00:11:39.152528 2563 factory.go:221] Registration of the containerd container factory successfully Jul 7 00:11:39.157977 kubelet[2563]: I0707 00:11:39.157881 2563 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 00:11:39.158731 kubelet[2563]: I0707 00:11:39.158706 2563 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 00:11:39.158822 kubelet[2563]: I0707 00:11:39.158805 2563 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 00:11:39.160370 kubelet[2563]: I0707 00:11:39.160055 2563 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 00:11:39.160370 kubelet[2563]: E0707 00:11:39.160103 2563 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:11:39.178869 kubelet[2563]: W0707 00:11:39.178797 2563 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://157.180.40.234:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.180.40.234:6443: connect: connection refused Jul 7 00:11:39.179263 kubelet[2563]: E0707 00:11:39.179089 2563 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://157.180.40.234:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 157.180.40.234:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:11:39.193492 kubelet[2563]: I0707 00:11:39.193471 2563 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 00:11:39.193682 kubelet[2563]: I0707 00:11:39.193591 2563 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 00:11:39.193682 kubelet[2563]: I0707 00:11:39.193609 2563 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:11:39.197206 kubelet[2563]: I0707 00:11:39.197139 2563 policy_none.go:49] "None policy: Start" Jul 7 00:11:39.198054 kubelet[2563]: I0707 00:11:39.198011 2563 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 00:11:39.198313 kubelet[2563]: I0707 00:11:39.198125 2563 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:11:39.203415 kubelet[2563]: I0707 00:11:39.203398 2563 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 00:11:39.205230 kubelet[2563]: I0707 00:11:39.203647 2563 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:11:39.205230 kubelet[2563]: I0707 00:11:39.203682 2563 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:11:39.205230 kubelet[2563]: I0707 00:11:39.204942 2563 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:11:39.207902 kubelet[2563]: E0707 00:11:39.207854 2563 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-4-d-d476fda7c5\" not found" Jul 7 00:11:39.307630 kubelet[2563]: I0707 00:11:39.307424 2563 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:39.309695 kubelet[2563]: E0707 00:11:39.307996 2563 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://157.180.40.234:6443/api/v1/nodes\": dial tcp 157.180.40.234:6443: connect: connection refused" node="ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:39.351471 kubelet[2563]: E0707 00:11:39.351359 2563 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.40.234:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-d-d476fda7c5?timeout=10s\": dial tcp 157.180.40.234:6443: connect: connection refused" interval="400ms" Jul 7 00:11:39.448965 kubelet[2563]: 
I0707 00:11:39.448826 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7823967d7b40ba74972187525f975c7f-ca-certs\") pod \"kube-controller-manager-ci-4081-3-4-d-d476fda7c5\" (UID: \"7823967d7b40ba74972187525f975c7f\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:39.448965 kubelet[2563]: I0707 00:11:39.448917 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7823967d7b40ba74972187525f975c7f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-4-d-d476fda7c5\" (UID: \"7823967d7b40ba74972187525f975c7f\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:39.448965 kubelet[2563]: I0707 00:11:39.448953 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7823967d7b40ba74972187525f975c7f-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-4-d-d476fda7c5\" (UID: \"7823967d7b40ba74972187525f975c7f\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:39.448965 kubelet[2563]: I0707 00:11:39.448988 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64c2aace412436450b889fc7b925cddd-ca-certs\") pod \"kube-apiserver-ci-4081-3-4-d-d476fda7c5\" (UID: \"64c2aace412436450b889fc7b925cddd\") " pod="kube-system/kube-apiserver-ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:39.449493 kubelet[2563]: I0707 00:11:39.449022 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/64c2aace412436450b889fc7b925cddd-k8s-certs\") pod \"kube-apiserver-ci-4081-3-4-d-d476fda7c5\" (UID: \"64c2aace412436450b889fc7b925cddd\") " pod="kube-system/kube-apiserver-ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:39.449493 kubelet[2563]: I0707 00:11:39.449061 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64c2aace412436450b889fc7b925cddd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-4-d-d476fda7c5\" (UID: \"64c2aace412436450b889fc7b925cddd\") " pod="kube-system/kube-apiserver-ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:39.449493 kubelet[2563]: I0707 00:11:39.449094 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7823967d7b40ba74972187525f975c7f-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-4-d-d476fda7c5\" (UID: \"7823967d7b40ba74972187525f975c7f\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:39.449493 kubelet[2563]: I0707 00:11:39.449166 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7823967d7b40ba74972187525f975c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-4-d-d476fda7c5\" (UID: \"7823967d7b40ba74972187525f975c7f\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:39.449493 kubelet[2563]: I0707 00:11:39.449216 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/661c04d2b9328dae0bb6973f07238939-kubeconfig\") pod \"kube-scheduler-ci-4081-3-4-d-d476fda7c5\" (UID: \"661c04d2b9328dae0bb6973f07238939\") " pod="kube-system/kube-scheduler-ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:39.511868 kubelet[2563]: I0707 00:11:39.511793 2563 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:39.512808 kubelet[2563]: E0707 00:11:39.512706 2563 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://157.180.40.234:6443/api/v1/nodes\": dial tcp 157.180.40.234:6443: connect: connection refused" node="ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:39.576332 containerd[1622]: time="2025-07-07T00:11:39.575915834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-4-d-d476fda7c5,Uid:7823967d7b40ba74972187525f975c7f,Namespace:kube-system,Attempt:0,}" Jul 7 00:11:39.586012 containerd[1622]: time="2025-07-07T00:11:39.585297386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-4-d-d476fda7c5,Uid:661c04d2b9328dae0bb6973f07238939,Namespace:kube-system,Attempt:0,}" Jul 7 00:11:39.586214 containerd[1622]: time="2025-07-07T00:11:39.585329096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-4-d-d476fda7c5,Uid:64c2aace412436450b889fc7b925cddd,Namespace:kube-system,Attempt:0,}" Jul 7 00:11:39.752222 kubelet[2563]: E0707 00:11:39.752131 2563 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.40.234:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-d-d476fda7c5?timeout=10s\": dial tcp 157.180.40.234:6443: connect: connection refused" interval="800ms" Jul 7 00:11:39.916028 kubelet[2563]: I0707 00:11:39.915853 2563 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:39.916911 kubelet[2563]: E0707 00:11:39.916779 2563 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://157.180.40.234:6443/api/v1/nodes\": dial tcp 157.180.40.234:6443: connect: connection refused" node="ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:39.941541 kubelet[2563]: W0707 00:11:39.941408 2563 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://157.180.40.234:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 157.180.40.234:6443: connect: connection refused Jul 7 00:11:39.941759 kubelet[2563]: E0707 00:11:39.941550 2563 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://157.180.40.234:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 157.180.40.234:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:11:40.093284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3271657156.mount: Deactivated successfully. 
Jul 7 00:11:40.107371 containerd[1622]: time="2025-07-07T00:11:40.107267675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:11:40.109099 containerd[1622]: time="2025-07-07T00:11:40.109023768Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:11:40.110740 containerd[1622]: time="2025-07-07T00:11:40.110609080Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 00:11:40.112192 containerd[1622]: time="2025-07-07T00:11:40.112102692Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 00:11:40.113091 containerd[1622]: time="2025-07-07T00:11:40.113037244Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:11:40.120705 containerd[1622]: time="2025-07-07T00:11:40.120249798Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:11:40.120705 containerd[1622]: time="2025-07-07T00:11:40.120371446Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Jul 7 00:11:40.126316 containerd[1622]: time="2025-07-07T00:11:40.126271770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:11:40.129301 containerd[1622]: time="2025-07-07T00:11:40.129259693Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 553.190002ms" Jul 7 00:11:40.134861 containerd[1622]: time="2025-07-07T00:11:40.134632858Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 549.204877ms" Jul 7 00:11:40.139536 containerd[1622]: time="2025-07-07T00:11:40.139467504Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 553.268237ms" Jul 7 00:11:40.337885 containerd[1622]: time="2025-07-07T00:11:40.337535409Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:11:40.337885 containerd[1622]: time="2025-07-07T00:11:40.337588057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:11:40.337885 containerd[1622]: time="2025-07-07T00:11:40.337608876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:11:40.337885 containerd[1622]: time="2025-07-07T00:11:40.337841723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:11:40.342407 containerd[1622]: time="2025-07-07T00:11:40.342083366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:11:40.342407 containerd[1622]: time="2025-07-07T00:11:40.342136706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:11:40.342407 containerd[1622]: time="2025-07-07T00:11:40.342150542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:11:40.342407 containerd[1622]: time="2025-07-07T00:11:40.342254607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:11:40.348505 containerd[1622]: time="2025-07-07T00:11:40.347892369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:11:40.348505 containerd[1622]: time="2025-07-07T00:11:40.348021712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:11:40.348505 containerd[1622]: time="2025-07-07T00:11:40.348078127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:11:40.369704 kubelet[2563]: E0707 00:11:40.369270 2563 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://157.180.40.234:6443/api/v1/namespaces/default/events\": dial tcp 157.180.40.234:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-4-d-d476fda7c5.184fcfa7ac65aa33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-4-d-d476fda7c5,UID:ci-4081-3-4-d-d476fda7c5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-4-d-d476fda7c5,},FirstTimestamp:2025-07-07 00:11:39.129244211 +0000 UTC m=+0.626776481,LastTimestamp:2025-07-07 00:11:39.129244211 +0000 UTC m=+0.626776481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-4-d-d476fda7c5,}" Jul 7 00:11:40.370805 containerd[1622]: time="2025-07-07T00:11:40.370443616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:11:40.397483 kubelet[2563]: W0707 00:11:40.397360 2563 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://157.180.40.234:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.180.40.234:6443: connect: connection refused Jul 7 00:11:40.397483 kubelet[2563]: E0707 00:11:40.397451 2563 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://157.180.40.234:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 157.180.40.234:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:11:40.419460 kubelet[2563]: W0707 00:11:40.419378 2563 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://157.180.40.234:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-d-d476fda7c5&limit=500&resourceVersion=0": dial tcp 157.180.40.234:6443: connect: connection refused Jul 7 00:11:40.419460 kubelet[2563]: E0707 00:11:40.419470 2563 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://157.180.40.234:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-d-d476fda7c5&limit=500&resourceVersion=0\": dial tcp 157.180.40.234:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:11:40.474195 containerd[1622]: time="2025-07-07T00:11:40.474144209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-4-d-d476fda7c5,Uid:7823967d7b40ba74972187525f975c7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"05fa00a7ba2aa5b4a68668a5004e6459282bafd60493348d81c35bd3479f5e92\"" Jul 7 00:11:40.481322 containerd[1622]: time="2025-07-07T00:11:40.481144545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-4-d-d476fda7c5,Uid:661c04d2b9328dae0bb6973f07238939,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca8405d7f1184b4a363f7189af8e1a28f46f8a0c799fa35531b387804aa62336\"" Jul 7 00:11:40.481602 containerd[1622]: time="2025-07-07T00:11:40.481394754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-4-d-d476fda7c5,Uid:64c2aace412436450b889fc7b925cddd,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5fb173abed8a534c4cd934588bc0cdd4b2efb96fbe30207e64f06a612a70bed\"" Jul 7 00:11:40.486024 containerd[1622]: time="2025-07-07T00:11:40.485974542Z" level=info msg="CreateContainer within sandbox \"c5fb173abed8a534c4cd934588bc0cdd4b2efb96fbe30207e64f06a612a70bed\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 00:11:40.486083 containerd[1622]: time="2025-07-07T00:11:40.485977928Z" level=info msg="CreateContainer within sandbox \"05fa00a7ba2aa5b4a68668a5004e6459282bafd60493348d81c35bd3479f5e92\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 00:11:40.486680 containerd[1622]: time="2025-07-07T00:11:40.486547356Z" level=info msg="CreateContainer within sandbox \"ca8405d7f1184b4a363f7189af8e1a28f46f8a0c799fa35531b387804aa62336\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 00:11:40.506879 containerd[1622]: time="2025-07-07T00:11:40.506829849Z" level=info msg="CreateContainer within sandbox \"05fa00a7ba2aa5b4a68668a5004e6459282bafd60493348d81c35bd3479f5e92\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7a03205ffbe632678317bbc69e5bdb4918dcdb09ae72c7d30a16dcc7800a28e7\"" Jul 7 00:11:40.508223 containerd[1622]: time="2025-07-07T00:11:40.508102775Z" level=info msg="StartContainer for \"7a03205ffbe632678317bbc69e5bdb4918dcdb09ae72c7d30a16dcc7800a28e7\"" Jul 7 00:11:40.525075 containerd[1622]: time="2025-07-07T00:11:40.524967018Z" level=info msg="CreateContainer within sandbox \"ca8405d7f1184b4a363f7189af8e1a28f46f8a0c799fa35531b387804aa62336\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f62e91e689f08b89e9bee9decdf0793e953f199bbc49ff4fcc3b532504866828\"" Jul 7 00:11:40.526089 containerd[1622]: time="2025-07-07T00:11:40.526013771Z" level=info msg="StartContainer for \"f62e91e689f08b89e9bee9decdf0793e953f199bbc49ff4fcc3b532504866828\"" Jul 7 00:11:40.538375 containerd[1622]: time="2025-07-07T00:11:40.538300310Z" level=info msg="CreateContainer within sandbox \"c5fb173abed8a534c4cd934588bc0cdd4b2efb96fbe30207e64f06a612a70bed\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1a2c69a54326123744ae9634a7b789bb51644032c898323a822c47f95382d2c8\"" Jul 7 00:11:40.540213 containerd[1622]: time="2025-07-07T00:11:40.539761139Z" level=info msg="StartContainer for \"1a2c69a54326123744ae9634a7b789bb51644032c898323a822c47f95382d2c8\"" Jul 7 00:11:40.557825 kubelet[2563]: E0707 00:11:40.557755 2563 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.40.234:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-d-d476fda7c5?timeout=10s\": dial tcp 157.180.40.234:6443: connect: connection refused" interval="1.6s" Jul 7 00:11:40.608789 containerd[1622]: time="2025-07-07T00:11:40.607824067Z" level=info msg="StartContainer for \"7a03205ffbe632678317bbc69e5bdb4918dcdb09ae72c7d30a16dcc7800a28e7\" returns successfully" Jul 7 00:11:40.652446 containerd[1622]: time="2025-07-07T00:11:40.652373413Z" level=info msg="StartContainer for \"1a2c69a54326123744ae9634a7b789bb51644032c898323a822c47f95382d2c8\" returns successfully" Jul 7 00:11:40.657037 kubelet[2563]: W0707 00:11:40.656990 2563 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://157.180.40.234:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.180.40.234:6443: connect: connection refused Jul 7 00:11:40.657141 kubelet[2563]: E0707 00:11:40.657065 2563 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://157.180.40.234:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 157.180.40.234:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:11:40.690982 containerd[1622]: time="2025-07-07T00:11:40.690898553Z" level=info msg="StartContainer for \"f62e91e689f08b89e9bee9decdf0793e953f199bbc49ff4fcc3b532504866828\" returns successfully" Jul 7 00:11:40.723210 kubelet[2563]: I0707 00:11:40.722993 2563 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:40.725760 kubelet[2563]: E0707 00:11:40.725704 2563 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://157.180.40.234:6443/api/v1/nodes\": dial tcp 157.180.40.234:6443: connect: connection refused" node="ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:42.332371 kubelet[2563]: I0707 00:11:42.332315 2563 
kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:42.505299 kubelet[2563]: I0707 00:11:42.503501 2563 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:42.505299 kubelet[2563]: E0707 00:11:42.503581 2563 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-4-d-d476fda7c5\": node \"ci-4081-3-4-d-d476fda7c5\" not found" Jul 7 00:11:42.536978 kubelet[2563]: E0707 00:11:42.536926 2563 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-d-d476fda7c5\" not found" Jul 7 00:11:42.638045 kubelet[2563]: E0707 00:11:42.637798 2563 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-d-d476fda7c5\" not found" Jul 7 00:11:42.739100 kubelet[2563]: E0707 00:11:42.738980 2563 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-d-d476fda7c5\" not found" Jul 7 00:11:42.839970 kubelet[2563]: E0707 00:11:42.839893 2563 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-d-d476fda7c5\" not found" Jul 7 00:11:42.941345 kubelet[2563]: E0707 00:11:42.940823 2563 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-d-d476fda7c5\" not found" Jul 7 00:11:43.042089 kubelet[2563]: E0707 00:11:43.041958 2563 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-d-d476fda7c5\" not found" Jul 7 00:11:43.142937 kubelet[2563]: E0707 00:11:43.142855 2563 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-d-d476fda7c5\" not found" Jul 7 00:11:44.122509 kubelet[2563]: I0707 00:11:44.122418 2563 apiserver.go:52] "Watching apiserver" Jul 7 00:11:44.147986 kubelet[2563]: I0707 00:11:44.147929 2563 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 00:11:44.955244 systemd[1]: Reloading requested from client PID 2833 ('systemctl') (unit session-7.scope)... Jul 7 00:11:44.955268 systemd[1]: Reloading... Jul 7 00:11:45.070872 zram_generator::config[2873]: No configuration found. Jul 7 00:11:45.191565 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:11:45.280850 systemd[1]: Reloading finished in 325 ms. Jul 7 00:11:45.317181 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:11:45.338381 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 00:11:45.338992 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:11:45.346136 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:11:45.523427 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:11:45.536296 (kubelet)[2934]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:11:45.598682 kubelet[2934]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
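This kubelet restart (PID 2934) is a different situation from the crash loop at the top of this section: the node is registered, a client certificate pair already exists on disk (loaded from kubelet-client-current.pem just below), and the systemctl reload from the install session is simply picking up new unit or kubelet configuration. With the static kube-apiserver now answering on 6443, the earlier failure modes can be ruled out from any machine with credentials (endpoint and node name taken from the log; kubectl is assumed to be pointed at an admin kubeconfig):

  curl -k https://157.180.40.234:6443/healthz    # "ok" once the apiserver serves
  kubectl get node ci-4081-3-4-d-d476fda7c5      # typically NotReady until a CNI is installed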
Jul 7 00:11:45.598682 kubelet[2934]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 00:11:45.598682 kubelet[2934]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:11:45.599262 kubelet[2934]: I0707 00:11:45.598763 2934 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:11:45.606759 kubelet[2934]: I0707 00:11:45.606707 2934 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 00:11:45.606759 kubelet[2934]: I0707 00:11:45.606743 2934 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:11:45.606968 kubelet[2934]: I0707 00:11:45.606943 2934 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 00:11:45.608925 kubelet[2934]: I0707 00:11:45.608895 2934 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 7 00:11:45.611104 kubelet[2934]: I0707 00:11:45.611075 2934 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:11:45.617027 kubelet[2934]: E0707 00:11:45.616987 2934 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 00:11:45.617027 kubelet[2934]: I0707 00:11:45.617021 2934 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 00:11:45.623211 kubelet[2934]: I0707 00:11:45.623168 2934 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 00:11:45.623990 kubelet[2934]: I0707 00:11:45.623946 2934 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 00:11:45.624080 kubelet[2934]: I0707 00:11:45.624054 2934 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:11:45.624324 kubelet[2934]: I0707 00:11:45.624084 2934 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-4-d-d476fda7c5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 7 00:11:45.624324 kubelet[2934]: I0707 00:11:45.624320 2934 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:11:45.624457 kubelet[2934]: I0707 00:11:45.624331 2934 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 00:11:45.624457 kubelet[2934]: I0707 00:11:45.624362 2934 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:11:45.624498 kubelet[2934]: I0707 00:11:45.624466 2934 kubelet.go:408] "Attempting to sync node with API server" Jul 7 00:11:45.624498 kubelet[2934]: I0707 00:11:45.624482 2934 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:11:45.624538 kubelet[2934]: I0707 00:11:45.624514 2934 kubelet.go:314] "Adding apiserver pod source" Jul 7 00:11:45.624538 kubelet[2934]: I0707 00:11:45.624527 2934 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:11:45.626678 kubelet[2934]: I0707 00:11:45.626564 2934 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 00:11:45.629124 kubelet[2934]: I0707 00:11:45.629101 2934 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 00:11:45.629564 kubelet[2934]: I0707 00:11:45.629544 2934 server.go:1274] "Started kubelet" Jul 7 00:11:45.636934 kubelet[2934]: I0707 00:11:45.636864 2934 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:11:45.640733 kubelet[2934]: I0707 
00:11:45.640397 2934 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:11:45.640733 kubelet[2934]: I0707 00:11:45.640558 2934 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:11:45.643332 kubelet[2934]: I0707 00:11:45.643261 2934 server.go:449] "Adding debug handlers to kubelet server" Jul 7 00:11:45.645984 kubelet[2934]: I0707 00:11:45.645945 2934 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:11:45.646694 kubelet[2934]: I0707 00:11:45.646549 2934 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:11:45.649406 kubelet[2934]: I0707 00:11:45.648764 2934 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 00:11:45.651091 kubelet[2934]: I0707 00:11:45.650698 2934 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 00:11:45.651091 kubelet[2934]: I0707 00:11:45.650869 2934 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:11:45.652536 kubelet[2934]: I0707 00:11:45.652523 2934 factory.go:221] Registration of the systemd container factory successfully Jul 7 00:11:45.653147 kubelet[2934]: I0707 00:11:45.653129 2934 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:11:45.656085 kubelet[2934]: I0707 00:11:45.655982 2934 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 00:11:45.656761 kubelet[2934]: I0707 00:11:45.656737 2934 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 00:11:45.656761 kubelet[2934]: I0707 00:11:45.656762 2934 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 00:11:45.656824 kubelet[2934]: I0707 00:11:45.656783 2934 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 00:11:45.656846 kubelet[2934]: E0707 00:11:45.656823 2934 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:11:45.657979 kubelet[2934]: I0707 00:11:45.657968 2934 factory.go:221] Registration of the containerd container factory successfully Jul 7 00:11:45.663420 kubelet[2934]: E0707 00:11:45.663378 2934 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:11:45.739950 kubelet[2934]: I0707 00:11:45.739890 2934 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 00:11:45.739950 kubelet[2934]: I0707 00:11:45.739924 2934 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 00:11:45.739950 kubelet[2934]: I0707 00:11:45.739949 2934 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:11:45.740590 kubelet[2934]: I0707 00:11:45.740142 2934 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 00:11:45.740590 kubelet[2934]: I0707 00:11:45.740153 2934 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 00:11:45.740590 kubelet[2934]: I0707 00:11:45.740180 2934 policy_none.go:49] "None policy: Start" Jul 7 00:11:45.741040 kubelet[2934]: I0707 00:11:45.741002 2934 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 00:11:45.741040 kubelet[2934]: I0707 00:11:45.741027 2934 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:11:45.741220 kubelet[2934]: I0707 00:11:45.741186 2934 state_mem.go:75] "Updated machine memory state" Jul 7 00:11:45.746733 kubelet[2934]: I0707 00:11:45.746674 2934 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 00:11:45.747001 kubelet[2934]: I0707 00:11:45.746975 2934 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:11:45.747041 kubelet[2934]: I0707 00:11:45.746994 2934 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:11:45.748453 kubelet[2934]: I0707 00:11:45.747839 2934 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:11:45.771726 kubelet[2934]: E0707 00:11:45.771648 2934 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-4-d-d476fda7c5\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:45.859769 kubelet[2934]: I0707 00:11:45.859203 2934 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:45.870330 kubelet[2934]: I0707 00:11:45.870251 2934 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:45.870517 kubelet[2934]: I0707 00:11:45.870382 2934 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:45.953225 kubelet[2934]: I0707 00:11:45.952486 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/64c2aace412436450b889fc7b925cddd-k8s-certs\") pod \"kube-apiserver-ci-4081-3-4-d-d476fda7c5\" (UID: \"64c2aace412436450b889fc7b925cddd\") " pod="kube-system/kube-apiserver-ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:45.953225 kubelet[2934]: I0707 00:11:45.952595 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64c2aace412436450b889fc7b925cddd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-4-d-d476fda7c5\" (UID: \"64c2aace412436450b889fc7b925cddd\") " pod="kube-system/kube-apiserver-ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:45.953225 kubelet[2934]: I0707 00:11:45.952647 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/7823967d7b40ba74972187525f975c7f-ca-certs\") pod \"kube-controller-manager-ci-4081-3-4-d-d476fda7c5\" (UID: \"7823967d7b40ba74972187525f975c7f\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:45.953225 kubelet[2934]: I0707 00:11:45.952737 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7823967d7b40ba74972187525f975c7f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-4-d-d476fda7c5\" (UID: \"7823967d7b40ba74972187525f975c7f\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:45.953225 kubelet[2934]: I0707 00:11:45.952789 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7823967d7b40ba74972187525f975c7f-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-4-d-d476fda7c5\" (UID: \"7823967d7b40ba74972187525f975c7f\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:45.953641 kubelet[2934]: I0707 00:11:45.952838 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7823967d7b40ba74972187525f975c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-4-d-d476fda7c5\" (UID: \"7823967d7b40ba74972187525f975c7f\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:45.953641 kubelet[2934]: I0707 00:11:45.952882 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64c2aace412436450b889fc7b925cddd-ca-certs\") pod \"kube-apiserver-ci-4081-3-4-d-d476fda7c5\" (UID: \"64c2aace412436450b889fc7b925cddd\") " pod="kube-system/kube-apiserver-ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:45.953641 kubelet[2934]: I0707 00:11:45.952945 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7823967d7b40ba74972187525f975c7f-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-4-d-d476fda7c5\" (UID: \"7823967d7b40ba74972187525f975c7f\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:45.953641 kubelet[2934]: I0707 00:11:45.952995 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/661c04d2b9328dae0bb6973f07238939-kubeconfig\") pod \"kube-scheduler-ci-4081-3-4-d-d476fda7c5\" (UID: \"661c04d2b9328dae0bb6973f07238939\") " pod="kube-system/kube-scheduler-ci-4081-3-4-d-d476fda7c5" Jul 7 00:11:46.625939 kubelet[2934]: I0707 00:11:46.625824 2934 apiserver.go:52] "Watching apiserver" Jul 7 00:11:46.651073 kubelet[2934]: I0707 00:11:46.650983 2934 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 00:11:46.769217 kubelet[2934]: I0707 00:11:46.768765 2934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-4-d-d476fda7c5" podStartSLOduration=1.768741187 podStartE2EDuration="1.768741187s" podCreationTimestamp="2025-07-07 00:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:11:46.759542248 +0000 UTC 
m=+1.215077411" watchObservedRunningTime="2025-07-07 00:11:46.768741187 +0000 UTC m=+1.224276350" Jul 7 00:11:46.780910 kubelet[2934]: I0707 00:11:46.779493 2934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-4-d-d476fda7c5" podStartSLOduration=1.779470716 podStartE2EDuration="1.779470716s" podCreationTimestamp="2025-07-07 00:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:11:46.769770036 +0000 UTC m=+1.225305199" watchObservedRunningTime="2025-07-07 00:11:46.779470716 +0000 UTC m=+1.235005879" Jul 7 00:11:46.781292 kubelet[2934]: I0707 00:11:46.781183 2934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-4-d-d476fda7c5" podStartSLOduration=3.7811742710000003 podStartE2EDuration="3.781174271s" podCreationTimestamp="2025-07-07 00:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:11:46.780263673 +0000 UTC m=+1.235798837" watchObservedRunningTime="2025-07-07 00:11:46.781174271 +0000 UTC m=+1.236709444" Jul 7 00:11:47.882450 systemd[1]: Started sshd@8-157.180.40.234:22-14.103.116.0:48924.service - OpenSSH per-connection server daemon (14.103.116.0:48924). Jul 7 00:11:50.634794 kubelet[2934]: I0707 00:11:50.634713 2934 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 00:11:50.635615 kubelet[2934]: I0707 00:11:50.635203 2934 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 00:11:50.635779 containerd[1622]: time="2025-07-07T00:11:50.635037142Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
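The pod_startup_latency_tracker entries above encode simple arithmetic: podStartSLOduration is observedRunningTime minus podCreationTimestamp, firstStartedPulling and lastFinishedPulling stay at the zero time because these static control-plane pods pull no images, and the m=+ suffix inside the timestamps is Go's monotonic-clock reading relative to kubelet start. A short check of that subtraction, with the kube-scheduler timestamps copied from the log (an illustrative snippet, not kubelet code):

// slo.go - reproduce the podStartSLOduration arithmetic visible above:
// duration = observedRunningTime - podCreationTimestamp.
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	// Layout without fractional seconds; time.Parse accepts a fractional
	// seconds field in the input even when the layout omits it.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, err := time.Parse(layout, "2025-07-07 00:11:45 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	running, err := time.Parse(layout, "2025-07-07 00:11:46.768741187 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	// Prints 1.768741187s, matching podStartSLOduration for the scheduler pod.
	fmt.Println(running.Sub(created))
}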
Jul 7 00:11:51.488372 kubelet[2934]: I0707 00:11:51.488313 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/360ea37f-7843-4674-b415-181b9d86f8b9-kube-proxy\") pod \"kube-proxy-4htsj\" (UID: \"360ea37f-7843-4674-b415-181b9d86f8b9\") " pod="kube-system/kube-proxy-4htsj" Jul 7 00:11:51.488572 kubelet[2934]: I0707 00:11:51.488493 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/360ea37f-7843-4674-b415-181b9d86f8b9-xtables-lock\") pod \"kube-proxy-4htsj\" (UID: \"360ea37f-7843-4674-b415-181b9d86f8b9\") " pod="kube-system/kube-proxy-4htsj" Jul 7 00:11:51.488572 kubelet[2934]: I0707 00:11:51.488531 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/360ea37f-7843-4674-b415-181b9d86f8b9-lib-modules\") pod \"kube-proxy-4htsj\" (UID: \"360ea37f-7843-4674-b415-181b9d86f8b9\") " pod="kube-system/kube-proxy-4htsj" Jul 7 00:11:51.488646 kubelet[2934]: I0707 00:11:51.488588 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crltd\" (UniqueName: \"kubernetes.io/projected/360ea37f-7843-4674-b415-181b9d86f8b9-kube-api-access-crltd\") pod \"kube-proxy-4htsj\" (UID: \"360ea37f-7843-4674-b415-181b9d86f8b9\") " pod="kube-system/kube-proxy-4htsj" Jul 7 00:11:51.790489 kubelet[2934]: I0707 00:11:51.790369 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f67bda21-0947-4dc6-b736-2ca3083fd3cc-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-p7gdf\" (UID: \"f67bda21-0947-4dc6-b736-2ca3083fd3cc\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-p7gdf" Jul 7 00:11:51.790489 kubelet[2934]: I0707 00:11:51.790430 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c52vr\" (UniqueName: \"kubernetes.io/projected/f67bda21-0947-4dc6-b736-2ca3083fd3cc-kube-api-access-c52vr\") pod \"tigera-operator-5bf8dfcb4-p7gdf\" (UID: \"f67bda21-0947-4dc6-b736-2ca3083fd3cc\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-p7gdf" Jul 7 00:11:51.799384 containerd[1622]: time="2025-07-07T00:11:51.799325852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4htsj,Uid:360ea37f-7843-4674-b415-181b9d86f8b9,Namespace:kube-system,Attempt:0,}" Jul 7 00:11:51.836015 containerd[1622]: time="2025-07-07T00:11:51.835876097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:11:51.836429 containerd[1622]: time="2025-07-07T00:11:51.836318796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:11:51.836429 containerd[1622]: time="2025-07-07T00:11:51.836377867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:11:51.837428 containerd[1622]: time="2025-07-07T00:11:51.837202183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:11:51.890588 containerd[1622]: time="2025-07-07T00:11:51.890511782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4htsj,Uid:360ea37f-7843-4674-b415-181b9d86f8b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fb4b037ce9a8c20e5489059377a53a3f5520d8e167d8a374d584d65761bfed1\"" Jul 7 00:11:51.894843 containerd[1622]: time="2025-07-07T00:11:51.894804039Z" level=info msg="CreateContainer within sandbox \"6fb4b037ce9a8c20e5489059377a53a3f5520d8e167d8a374d584d65761bfed1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 00:11:51.917343 containerd[1622]: time="2025-07-07T00:11:51.917256089Z" level=info msg="CreateContainer within sandbox \"6fb4b037ce9a8c20e5489059377a53a3f5520d8e167d8a374d584d65761bfed1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c417b920c5fe5a1281a6083245fce2cf18d928b857c61e8170c1a203122ded58\"" Jul 7 00:11:51.918435 containerd[1622]: time="2025-07-07T00:11:51.918370008Z" level=info msg="StartContainer for \"c417b920c5fe5a1281a6083245fce2cf18d928b857c61e8170c1a203122ded58\"" Jul 7 00:11:51.973132 containerd[1622]: time="2025-07-07T00:11:51.972007842Z" level=info msg="StartContainer for \"c417b920c5fe5a1281a6083245fce2cf18d928b857c61e8170c1a203122ded58\" returns successfully" Jul 7 00:11:52.035646 containerd[1622]: time="2025-07-07T00:11:52.035007952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-p7gdf,Uid:f67bda21-0947-4dc6-b736-2ca3083fd3cc,Namespace:tigera-operator,Attempt:0,}" Jul 7 00:11:52.076332 containerd[1622]: time="2025-07-07T00:11:52.075309469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:11:52.076332 containerd[1622]: time="2025-07-07T00:11:52.075430878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:11:52.076332 containerd[1622]: time="2025-07-07T00:11:52.075489748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:11:52.077041 containerd[1622]: time="2025-07-07T00:11:52.076798532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:11:52.149499 containerd[1622]: time="2025-07-07T00:11:52.149448475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-p7gdf,Uid:f67bda21-0947-4dc6-b736-2ca3083fd3cc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5a8cc73f6ba116eab540254e9b932b3be294525f8de81fe5974185f1e647d061\"" Jul 7 00:11:52.152390 containerd[1622]: time="2025-07-07T00:11:52.152104886Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 00:11:52.775154 kubelet[2934]: I0707 00:11:52.775024 2934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4htsj" podStartSLOduration=1.774985185 podStartE2EDuration="1.774985185s" podCreationTimestamp="2025-07-07 00:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:11:52.760570036 +0000 UTC m=+7.216105230" watchObservedRunningTime="2025-07-07 00:11:52.774985185 +0000 UTC m=+7.230520358" Jul 7 00:11:53.833494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3780168848.mount: Deactivated successfully. Jul 7 00:11:54.255952 containerd[1622]: time="2025-07-07T00:11:54.255840113Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:54.257354 containerd[1622]: time="2025-07-07T00:11:54.257321370Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 7 00:11:54.258576 containerd[1622]: time="2025-07-07T00:11:54.258537540Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:54.260804 containerd[1622]: time="2025-07-07T00:11:54.260787559Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:54.261772 containerd[1622]: time="2025-07-07T00:11:54.261339424Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.109182951s" Jul 7 00:11:54.261772 containerd[1622]: time="2025-07-07T00:11:54.261365974Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 7 00:11:54.263740 containerd[1622]: time="2025-07-07T00:11:54.263223998Z" level=info msg="CreateContainer within sandbox \"5a8cc73f6ba116eab540254e9b932b3be294525f8de81fe5974185f1e647d061\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 00:11:54.282719 containerd[1622]: time="2025-07-07T00:11:54.282644752Z" level=info msg="CreateContainer within sandbox \"5a8cc73f6ba116eab540254e9b932b3be294525f8de81fe5974185f1e647d061\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ad77b76c1cb620faa4d9160045f86c17a3d58dc75bbebf0af95a7485f8f75932\"" Jul 7 00:11:54.283434 containerd[1622]: time="2025-07-07T00:11:54.283317143Z" level=info msg="StartContainer for 
\"ad77b76c1cb620faa4d9160045f86c17a3d58dc75bbebf0af95a7485f8f75932\"" Jul 7 00:11:54.336633 containerd[1622]: time="2025-07-07T00:11:54.336584883Z" level=info msg="StartContainer for \"ad77b76c1cb620faa4d9160045f86c17a3d58dc75bbebf0af95a7485f8f75932\" returns successfully" Jul 7 00:11:54.750066 kubelet[2934]: I0707 00:11:54.749940 2934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-p7gdf" podStartSLOduration=1.6393605949999999 podStartE2EDuration="3.749902835s" podCreationTimestamp="2025-07-07 00:11:51 +0000 UTC" firstStartedPulling="2025-07-07 00:11:52.151520671 +0000 UTC m=+6.607055854" lastFinishedPulling="2025-07-07 00:11:54.26206293 +0000 UTC m=+8.717598094" observedRunningTime="2025-07-07 00:11:54.749381007 +0000 UTC m=+9.204916220" watchObservedRunningTime="2025-07-07 00:11:54.749902835 +0000 UTC m=+9.205438068" Jul 7 00:12:01.047868 sudo[2050]: pam_unix(sudo:session): session closed for user root Jul 7 00:12:01.218569 sshd[2046]: pam_unix(sshd:session): session closed for user core Jul 7 00:12:01.227053 systemd[1]: sshd@6-157.180.40.234:22-147.75.109.163:42890.service: Deactivated successfully. Jul 7 00:12:01.239460 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 00:12:01.244368 systemd-logind[1604]: Session 7 logged out. Waiting for processes to exit. Jul 7 00:12:01.246941 systemd-logind[1604]: Removed session 7. Jul 7 00:12:04.275693 kubelet[2934]: I0707 00:12:04.275567 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/17021d4c-9dfb-4870-bc6e-b1ae5a87d3c1-typha-certs\") pod \"calico-typha-598d6986f8-5kbf4\" (UID: \"17021d4c-9dfb-4870-bc6e-b1ae5a87d3c1\") " pod="calico-system/calico-typha-598d6986f8-5kbf4" Jul 7 00:12:04.276637 kubelet[2934]: I0707 00:12:04.275706 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17021d4c-9dfb-4870-bc6e-b1ae5a87d3c1-tigera-ca-bundle\") pod \"calico-typha-598d6986f8-5kbf4\" (UID: \"17021d4c-9dfb-4870-bc6e-b1ae5a87d3c1\") " pod="calico-system/calico-typha-598d6986f8-5kbf4" Jul 7 00:12:04.276637 kubelet[2934]: I0707 00:12:04.275795 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph9tc\" (UniqueName: \"kubernetes.io/projected/17021d4c-9dfb-4870-bc6e-b1ae5a87d3c1-kube-api-access-ph9tc\") pod \"calico-typha-598d6986f8-5kbf4\" (UID: \"17021d4c-9dfb-4870-bc6e-b1ae5a87d3c1\") " pod="calico-system/calico-typha-598d6986f8-5kbf4" Jul 7 00:12:04.457430 containerd[1622]: time="2025-07-07T00:12:04.457132130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-598d6986f8-5kbf4,Uid:17021d4c-9dfb-4870-bc6e-b1ae5a87d3c1,Namespace:calico-system,Attempt:0,}" Jul 7 00:12:04.511391 containerd[1622]: time="2025-07-07T00:12:04.510778988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:12:04.511547 containerd[1622]: time="2025-07-07T00:12:04.511406585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:12:04.511547 containerd[1622]: time="2025-07-07T00:12:04.511445930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:12:04.511794 containerd[1622]: time="2025-07-07T00:12:04.511683876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:12:04.616332 containerd[1622]: time="2025-07-07T00:12:04.615063041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-598d6986f8-5kbf4,Uid:17021d4c-9dfb-4870-bc6e-b1ae5a87d3c1,Namespace:calico-system,Attempt:0,} returns sandbox id \"6576daf159c81591a8ef98af81870b60bb3b64ac5f18c8ee9bcc8e3efba4e7fc\"" Jul 7 00:12:04.618914 containerd[1622]: time="2025-07-07T00:12:04.618796582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 7 00:12:04.677956 kubelet[2934]: I0707 00:12:04.677892 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b2f84eb8-fd97-41c3-974b-6cf85ee05f20-node-certs\") pod \"calico-node-cddmm\" (UID: \"b2f84eb8-fd97-41c3-974b-6cf85ee05f20\") " pod="calico-system/calico-node-cddmm" Jul 7 00:12:04.677956 kubelet[2934]: I0707 00:12:04.677938 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2f84eb8-fd97-41c3-974b-6cf85ee05f20-tigera-ca-bundle\") pod \"calico-node-cddmm\" (UID: \"b2f84eb8-fd97-41c3-974b-6cf85ee05f20\") " pod="calico-system/calico-node-cddmm" Jul 7 00:12:04.677956 kubelet[2934]: I0707 00:12:04.677959 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b2f84eb8-fd97-41c3-974b-6cf85ee05f20-var-lib-calico\") pod \"calico-node-cddmm\" (UID: \"b2f84eb8-fd97-41c3-974b-6cf85ee05f20\") " pod="calico-system/calico-node-cddmm" Jul 7 00:12:04.678161 kubelet[2934]: I0707 00:12:04.677978 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b2f84eb8-fd97-41c3-974b-6cf85ee05f20-policysync\") pod \"calico-node-cddmm\" (UID: \"b2f84eb8-fd97-41c3-974b-6cf85ee05f20\") " pod="calico-system/calico-node-cddmm" Jul 7 00:12:04.678161 kubelet[2934]: I0707 00:12:04.677996 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b2f84eb8-fd97-41c3-974b-6cf85ee05f20-cni-bin-dir\") pod \"calico-node-cddmm\" (UID: \"b2f84eb8-fd97-41c3-974b-6cf85ee05f20\") " pod="calico-system/calico-node-cddmm" Jul 7 00:12:04.678161 kubelet[2934]: I0707 00:12:04.678014 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b2f84eb8-fd97-41c3-974b-6cf85ee05f20-flexvol-driver-host\") pod \"calico-node-cddmm\" (UID: \"b2f84eb8-fd97-41c3-974b-6cf85ee05f20\") " pod="calico-system/calico-node-cddmm" Jul 7 00:12:04.678161 kubelet[2934]: I0707 00:12:04.678036 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dbg7\" (UniqueName: \"kubernetes.io/projected/b2f84eb8-fd97-41c3-974b-6cf85ee05f20-kube-api-access-8dbg7\") pod \"calico-node-cddmm\" (UID: \"b2f84eb8-fd97-41c3-974b-6cf85ee05f20\") " pod="calico-system/calico-node-cddmm" Jul 7 00:12:04.678161 kubelet[2934]: I0707 00:12:04.678055 2934 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b2f84eb8-fd97-41c3-974b-6cf85ee05f20-cni-log-dir\") pod \"calico-node-cddmm\" (UID: \"b2f84eb8-fd97-41c3-974b-6cf85ee05f20\") " pod="calico-system/calico-node-cddmm" Jul 7 00:12:04.678309 kubelet[2934]: I0707 00:12:04.678071 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b2f84eb8-fd97-41c3-974b-6cf85ee05f20-cni-net-dir\") pod \"calico-node-cddmm\" (UID: \"b2f84eb8-fd97-41c3-974b-6cf85ee05f20\") " pod="calico-system/calico-node-cddmm" Jul 7 00:12:04.678309 kubelet[2934]: I0707 00:12:04.678092 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b2f84eb8-fd97-41c3-974b-6cf85ee05f20-var-run-calico\") pod \"calico-node-cddmm\" (UID: \"b2f84eb8-fd97-41c3-974b-6cf85ee05f20\") " pod="calico-system/calico-node-cddmm" Jul 7 00:12:04.678309 kubelet[2934]: I0707 00:12:04.678112 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2f84eb8-fd97-41c3-974b-6cf85ee05f20-xtables-lock\") pod \"calico-node-cddmm\" (UID: \"b2f84eb8-fd97-41c3-974b-6cf85ee05f20\") " pod="calico-system/calico-node-cddmm" Jul 7 00:12:04.678309 kubelet[2934]: I0707 00:12:04.678134 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2f84eb8-fd97-41c3-974b-6cf85ee05f20-lib-modules\") pod \"calico-node-cddmm\" (UID: \"b2f84eb8-fd97-41c3-974b-6cf85ee05f20\") " pod="calico-system/calico-node-cddmm" Jul 7 00:12:04.790681 kubelet[2934]: E0707 00:12:04.789328 2934 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2vh4" podUID="87d7b550-b844-4390-a08c-837789bc924f" Jul 7 00:12:04.796361 kubelet[2934]: E0707 00:12:04.796339 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:04.796648 kubelet[2934]: W0707 00:12:04.796550 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:04.797928 kubelet[2934]: E0707 00:12:04.797914 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:04.806268 kubelet[2934]: E0707 00:12:04.806234 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:04.806268 kubelet[2934]: W0707 00:12:04.806273 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:04.806434 kubelet[2934]: E0707 00:12:04.806306 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:12:04.864577 containerd[1622]: time="2025-07-07T00:12:04.864526880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cddmm,Uid:b2f84eb8-fd97-41c3-974b-6cf85ee05f20,Namespace:calico-system,Attempt:0,}" Jul 7 00:12:04.889139 kubelet[2934]: E0707 00:12:04.885262 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:04.889139 kubelet[2934]: W0707 00:12:04.885937 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:04.889139 kubelet[2934]: E0707 00:12:04.885978 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:04.889139 kubelet[2934]: I0707 00:12:04.886066 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/87d7b550-b844-4390-a08c-837789bc924f-registration-dir\") pod \"csi-node-driver-w2vh4\" (UID: \"87d7b550-b844-4390-a08c-837789bc924f\") " pod="calico-system/csi-node-driver-w2vh4" Jul 7 00:12:04.890244 kubelet[2934]: E0707 00:12:04.890011 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:04.890244 kubelet[2934]: W0707 00:12:04.890049 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:04.890244 kubelet[2934]: E0707 00:12:04.890083 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:04.890244 kubelet[2934]: I0707 00:12:04.890119 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/87d7b550-b844-4390-a08c-837789bc924f-kubelet-dir\") pod \"csi-node-driver-w2vh4\" (UID: \"87d7b550-b844-4390-a08c-837789bc924f\") " pod="calico-system/csi-node-driver-w2vh4" Jul 7 00:12:04.890902 kubelet[2934]: E0707 00:12:04.890812 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:04.890902 kubelet[2934]: W0707 00:12:04.890831 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:04.891188 kubelet[2934]: E0707 00:12:04.891059 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:12:04.891188 kubelet[2934]: I0707 00:12:04.891098 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/87d7b550-b844-4390-a08c-837789bc924f-socket-dir\") pod \"csi-node-driver-w2vh4\" (UID: \"87d7b550-b844-4390-a08c-837789bc924f\") " pod="calico-system/csi-node-driver-w2vh4" Jul 7 00:12:04.891562 kubelet[2934]: E0707 00:12:04.891444 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:04.891562 kubelet[2934]: W0707 00:12:04.891469 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:04.891805 kubelet[2934]: E0707 00:12:04.891702 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:04.892137 kubelet[2934]: E0707 00:12:04.892017 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:04.892137 kubelet[2934]: W0707 00:12:04.892032 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:04.892137 kubelet[2934]: E0707 00:12:04.892051 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:04.892620 kubelet[2934]: E0707 00:12:04.892437 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:04.892620 kubelet[2934]: W0707 00:12:04.892452 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:04.892620 kubelet[2934]: E0707 00:12:04.892485 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:04.892620 kubelet[2934]: I0707 00:12:04.892513 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/87d7b550-b844-4390-a08c-837789bc924f-varrun\") pod \"csi-node-driver-w2vh4\" (UID: \"87d7b550-b844-4390-a08c-837789bc924f\") " pod="calico-system/csi-node-driver-w2vh4" Jul 7 00:12:04.893198 kubelet[2934]: E0707 00:12:04.893032 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:04.893198 kubelet[2934]: W0707 00:12:04.893050 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:04.893198 kubelet[2934]: E0707 00:12:04.893084 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:12:04.893693 kubelet[2934]: E0707 00:12:04.893490 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:04.893693 kubelet[2934]: W0707 00:12:04.893506 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:04.893693 kubelet[2934]: E0707 00:12:04.893522 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:04.894241 kubelet[2934]: E0707 00:12:04.894226 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:04.894478 kubelet[2934]: W0707 00:12:04.894320 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:04.894478 kubelet[2934]: E0707 00:12:04.894346 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:04.894478 kubelet[2934]: I0707 00:12:04.894375 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77dgq\" (UniqueName: \"kubernetes.io/projected/87d7b550-b844-4390-a08c-837789bc924f-kube-api-access-77dgq\") pod \"csi-node-driver-w2vh4\" (UID: \"87d7b550-b844-4390-a08c-837789bc924f\") " pod="calico-system/csi-node-driver-w2vh4" Jul 7 00:12:04.896033 kubelet[2934]: E0707 00:12:04.895886 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:04.896033 kubelet[2934]: W0707 00:12:04.895905 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:04.896033 kubelet[2934]: E0707 00:12:04.895926 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:04.896697 kubelet[2934]: E0707 00:12:04.896520 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:04.896697 kubelet[2934]: W0707 00:12:04.896535 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:04.896697 kubelet[2934]: E0707 00:12:04.896569 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:12:04.897319 kubelet[2934]: E0707 00:12:04.897111 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:04.897319 kubelet[2934]: W0707 00:12:04.897129 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:04.897319 kubelet[2934]: E0707 00:12:04.897167 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:04.897869 kubelet[2934]: E0707 00:12:04.897708 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:04.897869 kubelet[2934]: W0707 00:12:04.897725 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:04.897869 kubelet[2934]: E0707 00:12:04.897799 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:04.898455 kubelet[2934]: E0707 00:12:04.898308 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:04.898455 kubelet[2934]: W0707 00:12:04.898337 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:04.898455 kubelet[2934]: E0707 00:12:04.898353 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:04.898945 kubelet[2934]: E0707 00:12:04.898854 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:04.898945 kubelet[2934]: W0707 00:12:04.898886 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:04.898945 kubelet[2934]: E0707 00:12:04.898902 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:04.929867 containerd[1622]: time="2025-07-07T00:12:04.929433189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:12:04.929867 containerd[1622]: time="2025-07-07T00:12:04.929525733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:12:04.929867 containerd[1622]: time="2025-07-07T00:12:04.929555158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:12:04.929867 containerd[1622]: time="2025-07-07T00:12:04.929774179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:12:04.999063 kubelet[2934]: E0707 00:12:04.998912 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:05.000019 kubelet[2934]: W0707 00:12:04.999601 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:05.000019 kubelet[2934]: E0707 00:12:04.999634 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:05.001548 kubelet[2934]: E0707 00:12:05.001513 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:05.001548 kubelet[2934]: W0707 00:12:05.001528 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:05.002197 kubelet[2934]: E0707 00:12:05.001766 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:05.003215 kubelet[2934]: E0707 00:12:05.003132 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:05.003215 kubelet[2934]: W0707 00:12:05.003146 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:05.003215 kubelet[2934]: E0707 00:12:05.003168 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:05.005440 kubelet[2934]: E0707 00:12:05.004510 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:05.005440 kubelet[2934]: W0707 00:12:05.004520 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:05.005440 kubelet[2934]: E0707 00:12:05.004530 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:12:05.006422 kubelet[2934]: E0707 00:12:05.005995 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:05.006422 kubelet[2934]: W0707 00:12:05.006004 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:05.008280 kubelet[2934]: E0707 00:12:05.008007 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:05.008280 kubelet[2934]: W0707 00:12:05.008018 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:05.008511 kubelet[2934]: E0707 00:12:05.008503 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:05.008621 kubelet[2934]: W0707 00:12:05.008567 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:05.009417 kubelet[2934]: E0707 00:12:05.008933 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:05.009417 kubelet[2934]: E0707 00:12:05.008956 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:05.009417 kubelet[2934]: E0707 00:12:05.009009 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:05.010950 kubelet[2934]: E0707 00:12:05.010722 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:05.010950 kubelet[2934]: W0707 00:12:05.010744 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:05.010950 kubelet[2934]: E0707 00:12:05.010781 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:05.014125 kubelet[2934]: E0707 00:12:05.013941 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:05.014125 kubelet[2934]: W0707 00:12:05.013954 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:05.014844 kubelet[2934]: E0707 00:12:05.014611 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:12:05.017427 kubelet[2934]: E0707 00:12:05.017416 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:05.017427 kubelet[2934]: W0707 00:12:05.017459 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:05.017427 kubelet[2934]: E0707 00:12:05.017488 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:05.017944 kubelet[2934]: E0707 00:12:05.017850 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:05.017944 kubelet[2934]: W0707 00:12:05.017859 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:05.017944 kubelet[2934]: E0707 00:12:05.017886 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:05.019067 kubelet[2934]: E0707 00:12:05.019047 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:05.019067 kubelet[2934]: W0707 00:12:05.019056 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:05.020746 kubelet[2934]: E0707 00:12:05.020634 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:05.022555 kubelet[2934]: E0707 00:12:05.020997 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:05.022555 kubelet[2934]: W0707 00:12:05.022411 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:05.022555 kubelet[2934]: E0707 00:12:05.022474 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:05.024083 kubelet[2934]: E0707 00:12:05.024073 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:05.025601 kubelet[2934]: W0707 00:12:05.025417 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:05.025601 kubelet[2934]: E0707 00:12:05.025521 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:12:05.027207 kubelet[2934]: E0707 00:12:05.025883 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:05.027207 kubelet[2934]: W0707 00:12:05.025896 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:05.027207 kubelet[2934]: E0707 00:12:05.026679 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:05.027207 kubelet[2934]: E0707 00:12:05.026963 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:05.027207 kubelet[2934]: W0707 00:12:05.026973 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:05.027207 kubelet[2934]: E0707 00:12:05.027068 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:05.027207 kubelet[2934]: E0707 00:12:05.027163 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:05.027207 kubelet[2934]: W0707 00:12:05.027172 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:05.027444 kubelet[2934]: E0707 00:12:05.027394 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:05.028174 kubelet[2934]: E0707 00:12:05.028159 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:05.028257 kubelet[2934]: W0707 00:12:05.028176 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:05.030539 kubelet[2934]: E0707 00:12:05.028809 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:05.030539 kubelet[2934]: E0707 00:12:05.029531 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:05.030539 kubelet[2934]: W0707 00:12:05.029542 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:05.030798 kubelet[2934]: E0707 00:12:05.030782 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:12:05.030947 kubelet[2934]: E0707 00:12:05.030823 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:05.030947 kubelet[2934]: W0707 00:12:05.030890 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:05.031040 kubelet[2934]: E0707 00:12:05.030951 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:05.035085 kubelet[2934]: E0707 00:12:05.031694 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:05.035085 kubelet[2934]: W0707 00:12:05.031707 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:05.035085 kubelet[2934]: E0707 00:12:05.032422 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:05.035085 kubelet[2934]: E0707 00:12:05.032835 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:05.035085 kubelet[2934]: W0707 00:12:05.032848 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:05.035085 kubelet[2934]: E0707 00:12:05.033901 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:05.036727 kubelet[2934]: E0707 00:12:05.036694 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:05.036727 kubelet[2934]: W0707 00:12:05.036712 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:05.037400 kubelet[2934]: E0707 00:12:05.036867 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:05.038758 kubelet[2934]: E0707 00:12:05.038730 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:05.039014 kubelet[2934]: W0707 00:12:05.038938 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:05.039650 kubelet[2934]: E0707 00:12:05.039549 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:12:05.041852 kubelet[2934]: E0707 00:12:05.041223 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:05.041852 kubelet[2934]: W0707 00:12:05.041316 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:05.041852 kubelet[2934]: E0707 00:12:05.041349 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:05.051786 kubelet[2934]: E0707 00:12:05.051752 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:05.052427 kubelet[2934]: W0707 00:12:05.051779 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:05.052470 kubelet[2934]: E0707 00:12:05.052436 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:05.067751 containerd[1622]: time="2025-07-07T00:12:05.067572282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cddmm,Uid:b2f84eb8-fd97-41c3-974b-6cf85ee05f20,Namespace:calico-system,Attempt:0,} returns sandbox id \"2c1a1aa04efaa64d167f610dd02d3b78a35cfa135fc911abb4f25d1e59dcd7ae\"" Jul 7 00:12:06.658210 kubelet[2934]: E0707 00:12:06.658120 2934 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2vh4" podUID="87d7b550-b844-4390-a08c-837789bc924f" Jul 7 00:12:07.915482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount399504344.mount: Deactivated successfully. 
Jul 7 00:12:08.657687 kubelet[2934]: E0707 00:12:08.657602 2934 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2vh4" podUID="87d7b550-b844-4390-a08c-837789bc924f" Jul 7 00:12:09.163508 containerd[1622]: time="2025-07-07T00:12:09.163440784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:09.165065 containerd[1622]: time="2025-07-07T00:12:09.165016740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 7 00:12:09.166265 containerd[1622]: time="2025-07-07T00:12:09.166229012Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:09.169271 containerd[1622]: time="2025-07-07T00:12:09.169203099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:09.169872 containerd[1622]: time="2025-07-07T00:12:09.169836677Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 4.551005339s" Jul 7 00:12:09.169918 containerd[1622]: time="2025-07-07T00:12:09.169877123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 7 00:12:09.175687 containerd[1622]: time="2025-07-07T00:12:09.171786853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 00:12:09.186431 containerd[1622]: time="2025-07-07T00:12:09.186379345Z" level=info msg="CreateContainer within sandbox \"6576daf159c81591a8ef98af81870b60bb3b64ac5f18c8ee9bcc8e3efba4e7fc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 00:12:09.206129 containerd[1622]: time="2025-07-07T00:12:09.206078961Z" level=info msg="CreateContainer within sandbox \"6576daf159c81591a8ef98af81870b60bb3b64ac5f18c8ee9bcc8e3efba4e7fc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3c75e6566119cf572237e212da6c44659048a1b971e0d3cd5ad351013f19ae70\"" Jul 7 00:12:09.207263 containerd[1622]: time="2025-07-07T00:12:09.206713501Z" level=info msg="StartContainer for \"3c75e6566119cf572237e212da6c44659048a1b971e0d3cd5ad351013f19ae70\"" Jul 7 00:12:09.283018 containerd[1622]: time="2025-07-07T00:12:09.282632769Z" level=info msg="StartContainer for \"3c75e6566119cf572237e212da6c44659048a1b971e0d3cd5ad351013f19ae70\" returns successfully" Jul 7 00:12:09.833123 kubelet[2934]: E0707 00:12:09.832910 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:09.833123 kubelet[2934]: W0707 00:12:09.832940 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: 
executable file not found in $PATH, output: "" Jul 7 00:12:09.833123 kubelet[2934]: E0707 00:12:09.832968 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the FlexVolume probe-failure sequence repeats through Jul 7 00:12:09.855 ...]
Jul 7 00:12:10.017142 systemd[1]: Started sshd@9-157.180.40.234:22-122.176.122.24:60804.service - OpenSSH per-connection server daemon (122.176.122.24:60804). Jul 7 00:12:10.657994 kubelet[2934]: E0707 00:12:10.657455 2934 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2vh4" podUID="87d7b550-b844-4390-a08c-837789bc924f" Jul 7 00:12:10.819809 kubelet[2934]: I0707 00:12:10.819646 2934 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:12:10.856969 kubelet[2934]: E0707 00:12:10.856912 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.856969 kubelet[2934]: W0707 00:12:10.856946 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.856969 kubelet[2934]: E0707 00:12:10.856970 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 7 00:12:10.858103 kubelet[2934]: E0707 00:12:10.857242 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.858103 kubelet[2934]: W0707 00:12:10.857251 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.858103 kubelet[2934]: E0707 00:12:10.857259 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:10.858103 kubelet[2934]: E0707 00:12:10.857471 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.858103 kubelet[2934]: W0707 00:12:10.857479 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.858103 kubelet[2934]: E0707 00:12:10.857487 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:10.858103 kubelet[2934]: E0707 00:12:10.857644 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.858103 kubelet[2934]: W0707 00:12:10.857651 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.858103 kubelet[2934]: E0707 00:12:10.857694 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:10.858103 kubelet[2934]: E0707 00:12:10.857887 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.859194 kubelet[2934]: W0707 00:12:10.857894 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.859194 kubelet[2934]: E0707 00:12:10.857902 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:10.859194 kubelet[2934]: E0707 00:12:10.858031 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.859194 kubelet[2934]: W0707 00:12:10.858038 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.859194 kubelet[2934]: E0707 00:12:10.858044 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:12:10.859194 kubelet[2934]: E0707 00:12:10.858195 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.859194 kubelet[2934]: W0707 00:12:10.858203 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.859194 kubelet[2934]: E0707 00:12:10.858213 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:10.859194 kubelet[2934]: E0707 00:12:10.858374 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.859194 kubelet[2934]: W0707 00:12:10.858380 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.860091 kubelet[2934]: E0707 00:12:10.858387 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:10.860091 kubelet[2934]: E0707 00:12:10.858540 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.860091 kubelet[2934]: W0707 00:12:10.858546 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.860091 kubelet[2934]: E0707 00:12:10.858553 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:10.860091 kubelet[2934]: E0707 00:12:10.858673 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.860091 kubelet[2934]: W0707 00:12:10.858679 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.860091 kubelet[2934]: E0707 00:12:10.858686 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:10.860091 kubelet[2934]: E0707 00:12:10.858812 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.860091 kubelet[2934]: W0707 00:12:10.858817 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.860091 kubelet[2934]: E0707 00:12:10.858840 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:12:10.860565 kubelet[2934]: E0707 00:12:10.858958 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.860565 kubelet[2934]: W0707 00:12:10.858966 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.860565 kubelet[2934]: E0707 00:12:10.858972 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:10.860565 kubelet[2934]: E0707 00:12:10.859104 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.860565 kubelet[2934]: W0707 00:12:10.859111 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.860565 kubelet[2934]: E0707 00:12:10.859117 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:10.860565 kubelet[2934]: E0707 00:12:10.859306 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.860565 kubelet[2934]: W0707 00:12:10.859314 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.860565 kubelet[2934]: E0707 00:12:10.859321 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:10.860565 kubelet[2934]: E0707 00:12:10.859470 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.861026 kubelet[2934]: W0707 00:12:10.859476 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.861026 kubelet[2934]: E0707 00:12:10.859483 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:10.957825 kubelet[2934]: E0707 00:12:10.957597 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.957825 kubelet[2934]: W0707 00:12:10.957636 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.957825 kubelet[2934]: E0707 00:12:10.957734 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:12:10.959050 kubelet[2934]: E0707 00:12:10.958462 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.959050 kubelet[2934]: W0707 00:12:10.958486 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.959050 kubelet[2934]: E0707 00:12:10.958542 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:10.959050 kubelet[2934]: E0707 00:12:10.959025 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.959050 kubelet[2934]: W0707 00:12:10.959044 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.959312 kubelet[2934]: E0707 00:12:10.959070 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:10.960432 kubelet[2934]: E0707 00:12:10.959605 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.960432 kubelet[2934]: W0707 00:12:10.959622 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.960432 kubelet[2934]: E0707 00:12:10.959837 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:10.960432 kubelet[2934]: E0707 00:12:10.960053 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.960432 kubelet[2934]: W0707 00:12:10.960091 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.960432 kubelet[2934]: E0707 00:12:10.960286 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:10.961219 kubelet[2934]: E0707 00:12:10.960514 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.961219 kubelet[2934]: W0707 00:12:10.960530 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.961219 kubelet[2934]: E0707 00:12:10.960634 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:12:10.961219 kubelet[2934]: E0707 00:12:10.960933 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.961219 kubelet[2934]: W0707 00:12:10.960947 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.961219 kubelet[2934]: E0707 00:12:10.961004 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:10.961468 kubelet[2934]: E0707 00:12:10.961373 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.961468 kubelet[2934]: W0707 00:12:10.961390 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.961468 kubelet[2934]: E0707 00:12:10.961438 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:10.962698 kubelet[2934]: E0707 00:12:10.961909 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.962698 kubelet[2934]: W0707 00:12:10.961932 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.962698 kubelet[2934]: E0707 00:12:10.962056 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:10.962698 kubelet[2934]: E0707 00:12:10.962623 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.962698 kubelet[2934]: W0707 00:12:10.962637 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.962993 kubelet[2934]: E0707 00:12:10.962937 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.962993 kubelet[2934]: W0707 00:12:10.962951 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.963366 kubelet[2934]: E0707 00:12:10.963212 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:10.963366 kubelet[2934]: E0707 00:12:10.963258 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:12:10.963366 kubelet[2934]: E0707 00:12:10.963337 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.963366 kubelet[2934]: W0707 00:12:10.963349 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.963366 kubelet[2934]: E0707 00:12:10.963379 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:10.965267 kubelet[2934]: E0707 00:12:10.963683 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.965267 kubelet[2934]: W0707 00:12:10.963719 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.965267 kubelet[2934]: E0707 00:12:10.963764 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:10.965267 kubelet[2934]: E0707 00:12:10.964105 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.965267 kubelet[2934]: W0707 00:12:10.964124 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.965267 kubelet[2934]: E0707 00:12:10.964145 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:10.965267 kubelet[2934]: E0707 00:12:10.965083 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.965267 kubelet[2934]: W0707 00:12:10.965112 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.965772 kubelet[2934]: E0707 00:12:10.965392 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:10.966851 kubelet[2934]: E0707 00:12:10.966346 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.966851 kubelet[2934]: W0707 00:12:10.966370 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.966851 kubelet[2934]: E0707 00:12:10.966395 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:12:10.967220 kubelet[2934]: E0707 00:12:10.967157 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.967220 kubelet[2934]: W0707 00:12:10.967184 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.967461 kubelet[2934]: E0707 00:12:10.967400 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:10.967461 kubelet[2934]: E0707 00:12:10.967436 2934 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:12:10.967461 kubelet[2934]: W0707 00:12:10.967452 2934 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:12:10.967612 kubelet[2934]: E0707 00:12:10.967470 2934 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:12:11.130522 containerd[1622]: time="2025-07-07T00:12:11.130444783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:11.131751 containerd[1622]: time="2025-07-07T00:12:11.131709084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 7 00:12:11.134680 containerd[1622]: time="2025-07-07T00:12:11.133703403Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:11.136504 containerd[1622]: time="2025-07-07T00:12:11.136451165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:11.138006 containerd[1622]: time="2025-07-07T00:12:11.137964753Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.966140389s" Jul 7 00:12:11.138066 containerd[1622]: time="2025-07-07T00:12:11.138022141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 7 00:12:11.141612 containerd[1622]: time="2025-07-07T00:12:11.141579561Z" level=info msg="CreateContainer within sandbox \"2c1a1aa04efaa64d167f610dd02d3b78a35cfa135fc911abb4f25d1e59dcd7ae\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 00:12:11.159914 containerd[1622]: time="2025-07-07T00:12:11.159859274Z" level=info msg="CreateContainer within sandbox 
\"2c1a1aa04efaa64d167f610dd02d3b78a35cfa135fc911abb4f25d1e59dcd7ae\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"175e852858f459a41d09f0ea552e80bd5a226160d141586c5f65a68b9516a95e\"" Jul 7 00:12:11.160899 containerd[1622]: time="2025-07-07T00:12:11.160800881Z" level=info msg="StartContainer for \"175e852858f459a41d09f0ea552e80bd5a226160d141586c5f65a68b9516a95e\"" Jul 7 00:12:11.232951 containerd[1622]: time="2025-07-07T00:12:11.231850437Z" level=info msg="StartContainer for \"175e852858f459a41d09f0ea552e80bd5a226160d141586c5f65a68b9516a95e\" returns successfully" Jul 7 00:12:11.263311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-175e852858f459a41d09f0ea552e80bd5a226160d141586c5f65a68b9516a95e-rootfs.mount: Deactivated successfully. Jul 7 00:12:11.375512 containerd[1622]: time="2025-07-07T00:12:11.350451394Z" level=info msg="shim disconnected" id=175e852858f459a41d09f0ea552e80bd5a226160d141586c5f65a68b9516a95e namespace=k8s.io Jul 7 00:12:11.375512 containerd[1622]: time="2025-07-07T00:12:11.375159241Z" level=warning msg="cleaning up after shim disconnected" id=175e852858f459a41d09f0ea552e80bd5a226160d141586c5f65a68b9516a95e namespace=k8s.io Jul 7 00:12:11.375512 containerd[1622]: time="2025-07-07T00:12:11.375181643Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:12:11.418608 sshd[3552]: Received disconnect from 122.176.122.24 port 60804:11: Bye Bye [preauth] Jul 7 00:12:11.419358 sshd[3552]: Disconnected from authenticating user root 122.176.122.24 port 60804 [preauth] Jul 7 00:12:11.421943 systemd[1]: sshd@9-157.180.40.234:22-122.176.122.24:60804.service: Deactivated successfully. Jul 7 00:12:11.828715 containerd[1622]: time="2025-07-07T00:12:11.827162752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 00:12:11.861148 kubelet[2934]: I0707 00:12:11.859019 2934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-598d6986f8-5kbf4" podStartSLOduration=3.304127216 podStartE2EDuration="7.858950313s" podCreationTimestamp="2025-07-07 00:12:04 +0000 UTC" firstStartedPulling="2025-07-07 00:12:04.61612331 +0000 UTC m=+19.071658474" lastFinishedPulling="2025-07-07 00:12:09.170946408 +0000 UTC m=+23.626481571" observedRunningTime="2025-07-07 00:12:09.828860623 +0000 UTC m=+24.284395786" watchObservedRunningTime="2025-07-07 00:12:11.858950313 +0000 UTC m=+26.314485507" Jul 7 00:12:12.658139 kubelet[2934]: E0707 00:12:12.658044 2934 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2vh4" podUID="87d7b550-b844-4390-a08c-837789bc924f" Jul 7 00:12:14.659104 kubelet[2934]: E0707 00:12:14.658027 2934 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2vh4" podUID="87d7b550-b844-4390-a08c-837789bc924f" Jul 7 00:12:16.657799 kubelet[2934]: E0707 00:12:16.657693 2934 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2vh4" 
podUID="87d7b550-b844-4390-a08c-837789bc924f" Jul 7 00:12:16.812958 containerd[1622]: time="2025-07-07T00:12:16.812897672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:16.814618 containerd[1622]: time="2025-07-07T00:12:16.814568284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 7 00:12:16.816997 containerd[1622]: time="2025-07-07T00:12:16.816937066Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:16.820868 containerd[1622]: time="2025-07-07T00:12:16.820807824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:16.823257 containerd[1622]: time="2025-07-07T00:12:16.822517409Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 4.99529227s" Jul 7 00:12:16.823257 containerd[1622]: time="2025-07-07T00:12:16.822610343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 7 00:12:16.827279 containerd[1622]: time="2025-07-07T00:12:16.827206160Z" level=info msg="CreateContainer within sandbox \"2c1a1aa04efaa64d167f610dd02d3b78a35cfa135fc911abb4f25d1e59dcd7ae\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 00:12:16.865365 containerd[1622]: time="2025-07-07T00:12:16.865288232Z" level=info msg="CreateContainer within sandbox \"2c1a1aa04efaa64d167f610dd02d3b78a35cfa135fc911abb4f25d1e59dcd7ae\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"474bf2010536051aad2052af6575ec95b0b57bc65f1cec9f495af123af7f2354\"" Jul 7 00:12:16.866467 containerd[1622]: time="2025-07-07T00:12:16.866351517Z" level=info msg="StartContainer for \"474bf2010536051aad2052af6575ec95b0b57bc65f1cec9f495af123af7f2354\"" Jul 7 00:12:16.949735 systemd[1]: run-containerd-runc-k8s.io-474bf2010536051aad2052af6575ec95b0b57bc65f1cec9f495af123af7f2354-runc.VcFHOM.mount: Deactivated successfully. Jul 7 00:12:16.983387 containerd[1622]: time="2025-07-07T00:12:16.983186980Z" level=info msg="StartContainer for \"474bf2010536051aad2052af6575ec95b0b57bc65f1cec9f495af123af7f2354\" returns successfully" Jul 7 00:12:17.682013 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-474bf2010536051aad2052af6575ec95b0b57bc65f1cec9f495af123af7f2354-rootfs.mount: Deactivated successfully. 
Jul 7 00:12:17.685683 containerd[1622]: time="2025-07-07T00:12:17.685549682Z" level=info msg="shim disconnected" id=474bf2010536051aad2052af6575ec95b0b57bc65f1cec9f495af123af7f2354 namespace=k8s.io Jul 7 00:12:17.685683 containerd[1622]: time="2025-07-07T00:12:17.685642326Z" level=warning msg="cleaning up after shim disconnected" id=474bf2010536051aad2052af6575ec95b0b57bc65f1cec9f495af123af7f2354 namespace=k8s.io Jul 7 00:12:17.686946 containerd[1622]: time="2025-07-07T00:12:17.685651784Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:12:17.727651 kubelet[2934]: I0707 00:12:17.727585 2934 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 7 00:12:17.848569 containerd[1622]: time="2025-07-07T00:12:17.848098267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 00:12:17.917425 kubelet[2934]: I0707 00:12:17.916961 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6e879d5-3fea-4271-b519-f5824551a918-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-98rfd\" (UID: \"d6e879d5-3fea-4271-b519-f5824551a918\") " pod="calico-system/goldmane-58fd7646b9-98rfd" Jul 7 00:12:17.917425 kubelet[2934]: I0707 00:12:17.917034 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4-whisker-ca-bundle\") pod \"whisker-546c4689d-8cdxp\" (UID: \"df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4\") " pod="calico-system/whisker-546c4689d-8cdxp" Jul 7 00:12:17.917425 kubelet[2934]: I0707 00:12:17.917062 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9zts\" (UniqueName: \"kubernetes.io/projected/1b44121e-b0a3-4592-b84f-01e54dbb20d5-kube-api-access-d9zts\") pod \"coredns-7c65d6cfc9-tz5tm\" (UID: \"1b44121e-b0a3-4592-b84f-01e54dbb20d5\") " pod="kube-system/coredns-7c65d6cfc9-tz5tm" Jul 7 00:12:17.917425 kubelet[2934]: I0707 00:12:17.917088 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6e879d5-3fea-4271-b519-f5824551a918-config\") pod \"goldmane-58fd7646b9-98rfd\" (UID: \"d6e879d5-3fea-4271-b519-f5824551a918\") " pod="calico-system/goldmane-58fd7646b9-98rfd" Jul 7 00:12:17.917425 kubelet[2934]: I0707 00:12:17.917116 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4-whisker-backend-key-pair\") pod \"whisker-546c4689d-8cdxp\" (UID: \"df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4\") " pod="calico-system/whisker-546c4689d-8cdxp" Jul 7 00:12:17.917717 kubelet[2934]: I0707 00:12:17.917144 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3c9458d-3506-4ae4-801e-b2be3c20a3f6-config-volume\") pod \"coredns-7c65d6cfc9-b8gzw\" (UID: \"e3c9458d-3506-4ae4-801e-b2be3c20a3f6\") " pod="kube-system/coredns-7c65d6cfc9-b8gzw" Jul 7 00:12:17.917717 kubelet[2934]: I0707 00:12:17.917168 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9w7g\" (UniqueName: \"kubernetes.io/projected/e3c9458d-3506-4ae4-801e-b2be3c20a3f6-kube-api-access-f9w7g\") pod 
\"coredns-7c65d6cfc9-b8gzw\" (UID: \"e3c9458d-3506-4ae4-801e-b2be3c20a3f6\") " pod="kube-system/coredns-7c65d6cfc9-b8gzw" Jul 7 00:12:17.917717 kubelet[2934]: I0707 00:12:17.917192 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrcdd\" (UniqueName: \"kubernetes.io/projected/d6e879d5-3fea-4271-b519-f5824551a918-kube-api-access-vrcdd\") pod \"goldmane-58fd7646b9-98rfd\" (UID: \"d6e879d5-3fea-4271-b519-f5824551a918\") " pod="calico-system/goldmane-58fd7646b9-98rfd" Jul 7 00:12:17.917717 kubelet[2934]: I0707 00:12:17.917218 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/90e1f2ac-ab1d-4006-a494-a932bbfb41c8-calico-apiserver-certs\") pod \"calico-apiserver-768db8c477-gvl7v\" (UID: \"90e1f2ac-ab1d-4006-a494-a932bbfb41c8\") " pod="calico-apiserver/calico-apiserver-768db8c477-gvl7v" Jul 7 00:12:17.917717 kubelet[2934]: I0707 00:12:17.917243 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2qq7\" (UniqueName: \"kubernetes.io/projected/90e1f2ac-ab1d-4006-a494-a932bbfb41c8-kube-api-access-c2qq7\") pod \"calico-apiserver-768db8c477-gvl7v\" (UID: \"90e1f2ac-ab1d-4006-a494-a932bbfb41c8\") " pod="calico-apiserver/calico-apiserver-768db8c477-gvl7v" Jul 7 00:12:17.918764 kubelet[2934]: I0707 00:12:17.917272 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n554\" (UniqueName: \"kubernetes.io/projected/91dc6538-4119-44ef-989f-134b70575b3d-kube-api-access-5n554\") pod \"calico-kube-controllers-65849599bc-hq8xg\" (UID: \"91dc6538-4119-44ef-989f-134b70575b3d\") " pod="calico-system/calico-kube-controllers-65849599bc-hq8xg" Jul 7 00:12:17.918764 kubelet[2934]: I0707 00:12:17.917305 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr9rq\" (UniqueName: \"kubernetes.io/projected/df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4-kube-api-access-qr9rq\") pod \"whisker-546c4689d-8cdxp\" (UID: \"df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4\") " pod="calico-system/whisker-546c4689d-8cdxp" Jul 7 00:12:17.918764 kubelet[2934]: I0707 00:12:17.917332 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e655fc02-7710-4160-9f19-efce05fe6db5-calico-apiserver-certs\") pod \"calico-apiserver-768db8c477-p9bbs\" (UID: \"e655fc02-7710-4160-9f19-efce05fe6db5\") " pod="calico-apiserver/calico-apiserver-768db8c477-p9bbs" Jul 7 00:12:17.918764 kubelet[2934]: I0707 00:12:17.917360 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29hk8\" (UniqueName: \"kubernetes.io/projected/e655fc02-7710-4160-9f19-efce05fe6db5-kube-api-access-29hk8\") pod \"calico-apiserver-768db8c477-p9bbs\" (UID: \"e655fc02-7710-4160-9f19-efce05fe6db5\") " pod="calico-apiserver/calico-apiserver-768db8c477-p9bbs" Jul 7 00:12:17.918764 kubelet[2934]: I0707 00:12:17.917384 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b44121e-b0a3-4592-b84f-01e54dbb20d5-config-volume\") pod \"coredns-7c65d6cfc9-tz5tm\" (UID: \"1b44121e-b0a3-4592-b84f-01e54dbb20d5\") " pod="kube-system/coredns-7c65d6cfc9-tz5tm" Jul 7 00:12:17.918880 
kubelet[2934]: I0707 00:12:17.917409 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91dc6538-4119-44ef-989f-134b70575b3d-tigera-ca-bundle\") pod \"calico-kube-controllers-65849599bc-hq8xg\" (UID: \"91dc6538-4119-44ef-989f-134b70575b3d\") " pod="calico-system/calico-kube-controllers-65849599bc-hq8xg" Jul 7 00:12:17.918880 kubelet[2934]: I0707 00:12:17.917433 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d6e879d5-3fea-4271-b519-f5824551a918-goldmane-key-pair\") pod \"goldmane-58fd7646b9-98rfd\" (UID: \"d6e879d5-3fea-4271-b519-f5824551a918\") " pod="calico-system/goldmane-58fd7646b9-98rfd" Jul 7 00:12:18.100181 containerd[1622]: time="2025-07-07T00:12:18.099167956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-b8gzw,Uid:e3c9458d-3506-4ae4-801e-b2be3c20a3f6,Namespace:kube-system,Attempt:0,}" Jul 7 00:12:18.103366 containerd[1622]: time="2025-07-07T00:12:18.103244700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768db8c477-p9bbs,Uid:e655fc02-7710-4160-9f19-efce05fe6db5,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:12:18.104104 containerd[1622]: time="2025-07-07T00:12:18.104080436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-546c4689d-8cdxp,Uid:df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4,Namespace:calico-system,Attempt:0,}" Jul 7 00:12:18.383038 containerd[1622]: time="2025-07-07T00:12:18.382909413Z" level=error msg="Failed to destroy network for sandbox \"9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.391809 containerd[1622]: time="2025-07-07T00:12:18.391766009Z" level=error msg="encountered an error cleaning up failed sandbox \"9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.397630 containerd[1622]: time="2025-07-07T00:12:18.396496007Z" level=error msg="Failed to destroy network for sandbox \"4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.398516 containerd[1622]: time="2025-07-07T00:12:18.398174273Z" level=error msg="encountered an error cleaning up failed sandbox \"4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.404479 containerd[1622]: time="2025-07-07T00:12:18.404454248Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-546c4689d-8cdxp,Uid:df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.424764 containerd[1622]: time="2025-07-07T00:12:18.424708696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768db8c477-gvl7v,Uid:90e1f2ac-ab1d-4006-a494-a932bbfb41c8,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:12:18.427420 kubelet[2934]: E0707 00:12:18.427366 2934 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.427696 kubelet[2934]: E0707 00:12:18.427598 2934 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-546c4689d-8cdxp" Jul 7 00:12:18.427696 kubelet[2934]: E0707 00:12:18.427641 2934 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-546c4689d-8cdxp" Jul 7 00:12:18.429025 kubelet[2934]: E0707 00:12:18.428191 2934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-546c4689d-8cdxp_calico-system(df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-546c4689d-8cdxp_calico-system(df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-546c4689d-8cdxp" podUID="df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4" Jul 7 00:12:18.429123 containerd[1622]: time="2025-07-07T00:12:18.427956194Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-b8gzw,Uid:e3c9458d-3506-4ae4-801e-b2be3c20a3f6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.429123 containerd[1622]: time="2025-07-07T00:12:18.428453437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tz5tm,Uid:1b44121e-b0a3-4592-b84f-01e54dbb20d5,Namespace:kube-system,Attempt:0,}" Jul 7 00:12:18.429123 containerd[1622]: 
time="2025-07-07T00:12:18.428759681Z" level=error msg="Failed to destroy network for sandbox \"7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.429471 containerd[1622]: time="2025-07-07T00:12:18.429450627Z" level=error msg="encountered an error cleaning up failed sandbox \"7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.429593 containerd[1622]: time="2025-07-07T00:12:18.429575971Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768db8c477-p9bbs,Uid:e655fc02-7710-4160-9f19-efce05fe6db5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.429826 kubelet[2934]: E0707 00:12:18.429771 2934 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.429872 kubelet[2934]: E0707 00:12:18.429850 2934 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-b8gzw" Jul 7 00:12:18.429898 kubelet[2934]: E0707 00:12:18.429876 2934 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-b8gzw" Jul 7 00:12:18.429924 kubelet[2934]: E0707 00:12:18.429907 2934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-b8gzw_kube-system(e3c9458d-3506-4ae4-801e-b2be3c20a3f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-b8gzw_kube-system(e3c9458d-3506-4ae4-801e-b2be3c20a3f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-b8gzw" 
podUID="e3c9458d-3506-4ae4-801e-b2be3c20a3f6" Jul 7 00:12:18.430298 kubelet[2934]: E0707 00:12:18.429971 2934 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.430298 kubelet[2934]: E0707 00:12:18.429997 2934 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-768db8c477-p9bbs" Jul 7 00:12:18.430298 kubelet[2934]: E0707 00:12:18.430010 2934 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-768db8c477-p9bbs" Jul 7 00:12:18.430433 kubelet[2934]: E0707 00:12:18.430034 2934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-768db8c477-p9bbs_calico-apiserver(e655fc02-7710-4160-9f19-efce05fe6db5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-768db8c477-p9bbs_calico-apiserver(e655fc02-7710-4160-9f19-efce05fe6db5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-768db8c477-p9bbs" podUID="e655fc02-7710-4160-9f19-efce05fe6db5" Jul 7 00:12:18.430568 containerd[1622]: time="2025-07-07T00:12:18.430550529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-98rfd,Uid:d6e879d5-3fea-4271-b519-f5824551a918,Namespace:calico-system,Attempt:0,}" Jul 7 00:12:18.431860 containerd[1622]: time="2025-07-07T00:12:18.431812635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65849599bc-hq8xg,Uid:91dc6538-4119-44ef-989f-134b70575b3d,Namespace:calico-system,Attempt:0,}" Jul 7 00:12:18.605329 containerd[1622]: time="2025-07-07T00:12:18.604809860Z" level=error msg="Failed to destroy network for sandbox \"333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.605329 containerd[1622]: time="2025-07-07T00:12:18.605230349Z" level=error msg="encountered an error cleaning up failed sandbox \"333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.605329 containerd[1622]: time="2025-07-07T00:12:18.605301081Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768db8c477-gvl7v,Uid:90e1f2ac-ab1d-4006-a494-a932bbfb41c8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.605678 kubelet[2934]: E0707 00:12:18.605583 2934 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.605884 kubelet[2934]: E0707 00:12:18.605650 2934 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-768db8c477-gvl7v" Jul 7 00:12:18.605884 kubelet[2934]: E0707 00:12:18.605877 2934 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-768db8c477-gvl7v" Jul 7 00:12:18.605975 kubelet[2934]: E0707 00:12:18.605927 2934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-768db8c477-gvl7v_calico-apiserver(90e1f2ac-ab1d-4006-a494-a932bbfb41c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-768db8c477-gvl7v_calico-apiserver(90e1f2ac-ab1d-4006-a494-a932bbfb41c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-768db8c477-gvl7v" podUID="90e1f2ac-ab1d-4006-a494-a932bbfb41c8" Jul 7 00:12:18.629962 containerd[1622]: time="2025-07-07T00:12:18.629876689Z" level=error msg="Failed to destroy network for sandbox \"04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.630362 containerd[1622]: time="2025-07-07T00:12:18.630324388Z" level=error msg="encountered an error cleaning up failed sandbox 
\"04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.630415 containerd[1622]: time="2025-07-07T00:12:18.630391785Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-98rfd,Uid:d6e879d5-3fea-4271-b519-f5824551a918,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.630819 kubelet[2934]: E0707 00:12:18.630731 2934 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.631769 kubelet[2934]: E0707 00:12:18.630846 2934 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-98rfd" Jul 7 00:12:18.631769 kubelet[2934]: E0707 00:12:18.630875 2934 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-98rfd" Jul 7 00:12:18.631769 kubelet[2934]: E0707 00:12:18.630928 2934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-98rfd_calico-system(d6e879d5-3fea-4271-b519-f5824551a918)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-98rfd_calico-system(d6e879d5-3fea-4271-b519-f5824551a918)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-98rfd" podUID="d6e879d5-3fea-4271-b519-f5824551a918" Jul 7 00:12:18.650606 containerd[1622]: time="2025-07-07T00:12:18.649445370Z" level=error msg="Failed to destroy network for sandbox \"3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.650606 containerd[1622]: time="2025-07-07T00:12:18.649699446Z" 
level=error msg="Failed to destroy network for sandbox \"2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.650606 containerd[1622]: time="2025-07-07T00:12:18.650006343Z" level=error msg="encountered an error cleaning up failed sandbox \"2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.650606 containerd[1622]: time="2025-07-07T00:12:18.650067537Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tz5tm,Uid:1b44121e-b0a3-4592-b84f-01e54dbb20d5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.650995 kubelet[2934]: E0707 00:12:18.650343 2934 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.650995 kubelet[2934]: E0707 00:12:18.650409 2934 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tz5tm" Jul 7 00:12:18.650995 kubelet[2934]: E0707 00:12:18.650429 2934 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tz5tm" Jul 7 00:12:18.651084 containerd[1622]: time="2025-07-07T00:12:18.650882354Z" level=error msg="encountered an error cleaning up failed sandbox \"3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.651084 containerd[1622]: time="2025-07-07T00:12:18.650913292Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65849599bc-hq8xg,Uid:91dc6538-4119-44ef-989f-134b70575b3d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.651170 kubelet[2934]: E0707 00:12:18.650474 2934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-tz5tm_kube-system(1b44121e-b0a3-4592-b84f-01e54dbb20d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-tz5tm_kube-system(1b44121e-b0a3-4592-b84f-01e54dbb20d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-tz5tm" podUID="1b44121e-b0a3-4592-b84f-01e54dbb20d5" Jul 7 00:12:18.653269 kubelet[2934]: E0707 00:12:18.653231 2934 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.653329 kubelet[2934]: E0707 00:12:18.653272 2934 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65849599bc-hq8xg" Jul 7 00:12:18.653329 kubelet[2934]: E0707 00:12:18.653289 2934 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65849599bc-hq8xg" Jul 7 00:12:18.654181 kubelet[2934]: E0707 00:12:18.653324 2934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65849599bc-hq8xg_calico-system(91dc6538-4119-44ef-989f-134b70575b3d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65849599bc-hq8xg_calico-system(91dc6538-4119-44ef-989f-134b70575b3d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65849599bc-hq8xg" podUID="91dc6538-4119-44ef-989f-134b70575b3d" Jul 7 00:12:18.659685 containerd[1622]: time="2025-07-07T00:12:18.659630868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2vh4,Uid:87d7b550-b844-4390-a08c-837789bc924f,Namespace:calico-system,Attempt:0,}" Jul 7 00:12:18.737211 containerd[1622]: 
time="2025-07-07T00:12:18.737118554Z" level=error msg="Failed to destroy network for sandbox \"2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.737780 containerd[1622]: time="2025-07-07T00:12:18.737527442Z" level=error msg="encountered an error cleaning up failed sandbox \"2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.737780 containerd[1622]: time="2025-07-07T00:12:18.737624894Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2vh4,Uid:87d7b550-b844-4390-a08c-837789bc924f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.738149 kubelet[2934]: E0707 00:12:18.737939 2934 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:18.738149 kubelet[2934]: E0707 00:12:18.738004 2934 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2vh4" Jul 7 00:12:18.738149 kubelet[2934]: E0707 00:12:18.738033 2934 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2vh4" Jul 7 00:12:18.738631 kubelet[2934]: E0707 00:12:18.738074 2934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w2vh4_calico-system(87d7b550-b844-4390-a08c-837789bc924f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w2vh4_calico-system(87d7b550-b844-4390-a08c-837789bc924f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w2vh4" podUID="87d7b550-b844-4390-a08c-837789bc924f" Jul 7 
00:12:18.854447 kubelet[2934]: I0707 00:12:18.854385 2934 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" Jul 7 00:12:18.861910 kubelet[2934]: I0707 00:12:18.861864 2934 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" Jul 7 00:12:18.899004 kubelet[2934]: I0707 00:12:18.898314 2934 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" Jul 7 00:12:18.904480 kubelet[2934]: I0707 00:12:18.903014 2934 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" Jul 7 00:12:18.905591 kubelet[2934]: I0707 00:12:18.905127 2934 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" Jul 7 00:12:18.907270 kubelet[2934]: I0707 00:12:18.907257 2934 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" Jul 7 00:12:18.909355 kubelet[2934]: I0707 00:12:18.909344 2934 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" Jul 7 00:12:18.911233 kubelet[2934]: I0707 00:12:18.911222 2934 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" Jul 7 00:12:18.924542 containerd[1622]: time="2025-07-07T00:12:18.923466192Z" level=info msg="StopPodSandbox for \"333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901\"" Jul 7 00:12:18.927906 containerd[1622]: time="2025-07-07T00:12:18.927854199Z" level=info msg="Ensure that sandbox 333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901 in task-service has been cleanup successfully" Jul 7 00:12:18.930420 containerd[1622]: time="2025-07-07T00:12:18.930375937Z" level=info msg="StopPodSandbox for \"4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a\"" Jul 7 00:12:18.930617 containerd[1622]: time="2025-07-07T00:12:18.930603403Z" level=info msg="StopPodSandbox for \"7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771\"" Jul 7 00:12:18.930959 containerd[1622]: time="2025-07-07T00:12:18.930942900Z" level=info msg="Ensure that sandbox 7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771 in task-service has been cleanup successfully" Jul 7 00:12:18.935913 containerd[1622]: time="2025-07-07T00:12:18.935889495Z" level=info msg="StopPodSandbox for \"9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df\"" Jul 7 00:12:18.936127 containerd[1622]: time="2025-07-07T00:12:18.936113976Z" level=info msg="Ensure that sandbox 9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df in task-service has been cleanup successfully" Jul 7 00:12:18.936598 containerd[1622]: time="2025-07-07T00:12:18.936340531Z" level=info msg="StopPodSandbox for \"2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649\"" Jul 7 00:12:18.936598 containerd[1622]: time="2025-07-07T00:12:18.936459193Z" level=info msg="Ensure that sandbox 2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649 in task-service has been cleanup successfully" Jul 7 00:12:18.937228 
containerd[1622]: time="2025-07-07T00:12:18.936927461Z" level=info msg="StopPodSandbox for \"2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518\"" Jul 7 00:12:18.937228 containerd[1622]: time="2025-07-07T00:12:18.937069066Z" level=info msg="Ensure that sandbox 2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518 in task-service has been cleanup successfully" Jul 7 00:12:18.937531 containerd[1622]: time="2025-07-07T00:12:18.937478785Z" level=info msg="Ensure that sandbox 4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a in task-service has been cleanup successfully" Jul 7 00:12:18.939053 containerd[1622]: time="2025-07-07T00:12:18.939035333Z" level=info msg="StopPodSandbox for \"04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf\"" Jul 7 00:12:18.939691 containerd[1622]: time="2025-07-07T00:12:18.939450832Z" level=info msg="Ensure that sandbox 04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf in task-service has been cleanup successfully" Jul 7 00:12:18.943092 containerd[1622]: time="2025-07-07T00:12:18.939954696Z" level=info msg="StopPodSandbox for \"3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0\"" Jul 7 00:12:18.943092 containerd[1622]: time="2025-07-07T00:12:18.942896953Z" level=info msg="Ensure that sandbox 3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0 in task-service has been cleanup successfully" Jul 7 00:12:19.017600 containerd[1622]: time="2025-07-07T00:12:19.017408961Z" level=error msg="StopPodSandbox for \"04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf\" failed" error="failed to destroy network for sandbox \"04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:19.018065 containerd[1622]: time="2025-07-07T00:12:19.018013345Z" level=error msg="StopPodSandbox for \"7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771\" failed" error="failed to destroy network for sandbox \"7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:19.018112 kubelet[2934]: E0707 00:12:19.018056 2934 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" Jul 7 00:12:19.018468 kubelet[2934]: E0707 00:12:19.018369 2934 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" Jul 7 00:12:19.027858 kubelet[2934]: E0707 00:12:19.018404 2934 kuberuntime_manager.go:1479] 
"Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771"} Jul 7 00:12:19.027858 kubelet[2934]: E0707 00:12:19.027844 2934 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e655fc02-7710-4160-9f19-efce05fe6db5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:12:19.028863 kubelet[2934]: E0707 00:12:19.027883 2934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e655fc02-7710-4160-9f19-efce05fe6db5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-768db8c477-p9bbs" podUID="e655fc02-7710-4160-9f19-efce05fe6db5" Jul 7 00:12:19.028863 kubelet[2934]: E0707 00:12:19.018148 2934 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf"} Jul 7 00:12:19.028863 kubelet[2934]: E0707 00:12:19.027952 2934 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d6e879d5-3fea-4271-b519-f5824551a918\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:12:19.028863 kubelet[2934]: E0707 00:12:19.027976 2934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d6e879d5-3fea-4271-b519-f5824551a918\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-98rfd" podUID="d6e879d5-3fea-4271-b519-f5824551a918" Jul 7 00:12:19.055721 containerd[1622]: time="2025-07-07T00:12:19.053847052Z" level=error msg="StopPodSandbox for \"2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649\" failed" error="failed to destroy network for sandbox \"2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:19.055875 kubelet[2934]: E0707 00:12:19.054225 2934 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" Jul 7 00:12:19.055875 kubelet[2934]: E0707 00:12:19.054305 2934 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649"} Jul 7 00:12:19.055875 kubelet[2934]: E0707 00:12:19.054359 2934 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1b44121e-b0a3-4592-b84f-01e54dbb20d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:12:19.055875 kubelet[2934]: E0707 00:12:19.054389 2934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1b44121e-b0a3-4592-b84f-01e54dbb20d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-tz5tm" podUID="1b44121e-b0a3-4592-b84f-01e54dbb20d5" Jul 7 00:12:19.079835 containerd[1622]: time="2025-07-07T00:12:19.079737204Z" level=error msg="StopPodSandbox for \"4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a\" failed" error="failed to destroy network for sandbox \"4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:19.080240 kubelet[2934]: E0707 00:12:19.080177 2934 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" Jul 7 00:12:19.080338 kubelet[2934]: E0707 00:12:19.080269 2934 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a"} Jul 7 00:12:19.080400 kubelet[2934]: E0707 00:12:19.080367 2934 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:12:19.080471 kubelet[2934]: E0707 00:12:19.080423 2934 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-546c4689d-8cdxp" podUID="df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4" Jul 7 00:12:19.082437 containerd[1622]: time="2025-07-07T00:12:19.082260026Z" level=error msg="StopPodSandbox for \"333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901\" failed" error="failed to destroy network for sandbox \"333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:19.082492 kubelet[2934]: E0707 00:12:19.082379 2934 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" Jul 7 00:12:19.082492 kubelet[2934]: E0707 00:12:19.082402 2934 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901"} Jul 7 00:12:19.082492 kubelet[2934]: E0707 00:12:19.082423 2934 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"90e1f2ac-ab1d-4006-a494-a932bbfb41c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:12:19.082492 kubelet[2934]: E0707 00:12:19.082446 2934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"90e1f2ac-ab1d-4006-a494-a932bbfb41c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-768db8c477-gvl7v" podUID="90e1f2ac-ab1d-4006-a494-a932bbfb41c8" Jul 7 00:12:19.086152 containerd[1622]: time="2025-07-07T00:12:19.086067794Z" level=error msg="StopPodSandbox for \"9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df\" failed" error="failed to destroy network for sandbox \"9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:19.086401 
kubelet[2934]: E0707 00:12:19.086336 2934 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" Jul 7 00:12:19.086401 kubelet[2934]: E0707 00:12:19.086374 2934 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df"} Jul 7 00:12:19.086401 kubelet[2934]: E0707 00:12:19.086399 2934 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e3c9458d-3506-4ae4-801e-b2be3c20a3f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:12:19.086566 kubelet[2934]: E0707 00:12:19.086422 2934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e3c9458d-3506-4ae4-801e-b2be3c20a3f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-b8gzw" podUID="e3c9458d-3506-4ae4-801e-b2be3c20a3f6" Jul 7 00:12:19.089696 containerd[1622]: time="2025-07-07T00:12:19.089622038Z" level=error msg="StopPodSandbox for \"2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518\" failed" error="failed to destroy network for sandbox \"2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:19.090029 kubelet[2934]: E0707 00:12:19.089945 2934 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" Jul 7 00:12:19.090029 kubelet[2934]: E0707 00:12:19.089991 2934 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518"} Jul 7 00:12:19.090029 kubelet[2934]: E0707 00:12:19.090016 2934 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"87d7b550-b844-4390-a08c-837789bc924f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:12:19.090151 kubelet[2934]: E0707 00:12:19.090037 2934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"87d7b550-b844-4390-a08c-837789bc924f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w2vh4" podUID="87d7b550-b844-4390-a08c-837789bc924f" Jul 7 00:12:19.092501 containerd[1622]: time="2025-07-07T00:12:19.092446094Z" level=error msg="StopPodSandbox for \"3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0\" failed" error="failed to destroy network for sandbox \"3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:12:19.092931 kubelet[2934]: E0707 00:12:19.092838 2934 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" Jul 7 00:12:19.092931 kubelet[2934]: E0707 00:12:19.092923 2934 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0"} Jul 7 00:12:19.093187 kubelet[2934]: E0707 00:12:19.092968 2934 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91dc6538-4119-44ef-989f-134b70575b3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:12:19.093187 kubelet[2934]: E0707 00:12:19.092996 2934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91dc6538-4119-44ef-989f-134b70575b3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65849599bc-hq8xg" podUID="91dc6538-4119-44ef-989f-134b70575b3d" Jul 7 00:12:26.203065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3173261446.mount: Deactivated successfully. 
Jul 7 00:12:26.315346 containerd[1622]: time="2025-07-07T00:12:26.269558145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 7 00:12:26.316189 containerd[1622]: time="2025-07-07T00:12:26.316095602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:26.328670 containerd[1622]: time="2025-07-07T00:12:26.327930562Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 8.474639961s" Jul 7 00:12:26.328670 containerd[1622]: time="2025-07-07T00:12:26.327990905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 7 00:12:26.334644 containerd[1622]: time="2025-07-07T00:12:26.334557387Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:26.335732 containerd[1622]: time="2025-07-07T00:12:26.335699318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:26.369051 containerd[1622]: time="2025-07-07T00:12:26.368979857Z" level=info msg="CreateContainer within sandbox \"2c1a1aa04efaa64d167f610dd02d3b78a35cfa135fc911abb4f25d1e59dcd7ae\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 00:12:26.465548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1246986599.mount: Deactivated successfully. Jul 7 00:12:26.470588 containerd[1622]: time="2025-07-07T00:12:26.470493631Z" level=info msg="CreateContainer within sandbox \"2c1a1aa04efaa64d167f610dd02d3b78a35cfa135fc911abb4f25d1e59dcd7ae\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0dee5ffe822ca6cb85832200a4f4e2c8815a62cd2373cdf0fc4ebb4d63d5ca91\"" Jul 7 00:12:26.471530 containerd[1622]: time="2025-07-07T00:12:26.471441678Z" level=info msg="StartContainer for \"0dee5ffe822ca6cb85832200a4f4e2c8815a62cd2373cdf0fc4ebb4d63d5ca91\"" Jul 7 00:12:26.654899 containerd[1622]: time="2025-07-07T00:12:26.654799305Z" level=info msg="StartContainer for \"0dee5ffe822ca6cb85832200a4f4e2c8815a62cd2373cdf0fc4ebb4d63d5ca91\" returns successfully" Jul 7 00:12:26.761926 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 00:12:26.764044 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. Jul 7 00:12:26.863919 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 00:12:26.859848 systemd-resolved[1515]: Under memory pressure, flushing caches. Jul 7 00:12:26.859968 systemd-resolved[1515]: Flushed all caches.
Jul 7 00:12:27.033952 kubelet[2934]: I0707 00:12:27.011539 2934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cddmm" podStartSLOduration=1.721838312 podStartE2EDuration="22.987412111s" podCreationTimestamp="2025-07-07 00:12:04 +0000 UTC" firstStartedPulling="2025-07-07 00:12:05.069053228 +0000 UTC m=+19.524588391" lastFinishedPulling="2025-07-07 00:12:26.334627027 +0000 UTC m=+40.790162190" observedRunningTime="2025-07-07 00:12:26.984835519 +0000 UTC m=+41.440370702" watchObservedRunningTime="2025-07-07 00:12:26.987412111 +0000 UTC m=+41.442947293" Jul 7 00:12:27.104596 containerd[1622]: time="2025-07-07T00:12:27.104344954Z" level=info msg="StopPodSandbox for \"4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a\"" Jul 7 00:12:27.457584 containerd[1622]: 2025-07-07 00:12:27.196 [INFO][4139] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" Jul 7 00:12:27.457584 containerd[1622]: 2025-07-07 00:12:27.199 [INFO][4139] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" iface="eth0" netns="/var/run/netns/cni-6da2c849-19cb-ea52-89c7-38792f3af5d0" Jul 7 00:12:27.457584 containerd[1622]: 2025-07-07 00:12:27.201 [INFO][4139] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" iface="eth0" netns="/var/run/netns/cni-6da2c849-19cb-ea52-89c7-38792f3af5d0" Jul 7 00:12:27.457584 containerd[1622]: 2025-07-07 00:12:27.203 [INFO][4139] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" iface="eth0" netns="/var/run/netns/cni-6da2c849-19cb-ea52-89c7-38792f3af5d0" Jul 7 00:12:27.457584 containerd[1622]: 2025-07-07 00:12:27.203 [INFO][4139] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" Jul 7 00:12:27.457584 containerd[1622]: 2025-07-07 00:12:27.203 [INFO][4139] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" Jul 7 00:12:27.457584 containerd[1622]: 2025-07-07 00:12:27.427 [INFO][4146] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" HandleID="k8s-pod-network.4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" Workload="ci--4081--3--4--d--d476fda7c5-k8s-whisker--546c4689d--8cdxp-eth0" Jul 7 00:12:27.457584 containerd[1622]: 2025-07-07 00:12:27.430 [INFO][4146] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:27.457584 containerd[1622]: 2025-07-07 00:12:27.431 [INFO][4146] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:12:27.457584 containerd[1622]: 2025-07-07 00:12:27.446 [WARNING][4146] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" HandleID="k8s-pod-network.4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" Workload="ci--4081--3--4--d--d476fda7c5-k8s-whisker--546c4689d--8cdxp-eth0" Jul 7 00:12:27.457584 containerd[1622]: 2025-07-07 00:12:27.447 [INFO][4146] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" HandleID="k8s-pod-network.4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" Workload="ci--4081--3--4--d--d476fda7c5-k8s-whisker--546c4689d--8cdxp-eth0" Jul 7 00:12:27.457584 containerd[1622]: 2025-07-07 00:12:27.449 [INFO][4146] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:12:27.457584 containerd[1622]: 2025-07-07 00:12:27.452 [INFO][4139] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" Jul 7 00:12:27.464690 containerd[1622]: time="2025-07-07T00:12:27.460810962Z" level=info msg="TearDown network for sandbox \"4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a\" successfully" Jul 7 00:12:27.464690 containerd[1622]: time="2025-07-07T00:12:27.460869262Z" level=info msg="StopPodSandbox for \"4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a\" returns successfully" Jul 7 00:12:27.469350 systemd[1]: run-netns-cni\x2d6da2c849\x2d19cb\x2dea52\x2d89c7\x2d38792f3af5d0.mount: Deactivated successfully. Jul 7 00:12:27.638343 kubelet[2934]: I0707 00:12:27.638192 2934 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qr9rq\" (UniqueName: \"kubernetes.io/projected/df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4-kube-api-access-qr9rq\") pod \"df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4\" (UID: \"df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4\") " Jul 7 00:12:27.649138 kubelet[2934]: I0707 00:12:27.648560 2934 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4-whisker-ca-bundle\") pod \"df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4\" (UID: \"df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4\") " Jul 7 00:12:27.649138 kubelet[2934]: I0707 00:12:27.648648 2934 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4-whisker-backend-key-pair\") pod \"df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4\" (UID: \"df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4\") " Jul 7 00:12:27.672328 kubelet[2934]: I0707 00:12:27.670151 2934 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4" (UID: "df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 00:12:27.680872 kubelet[2934]: I0707 00:12:27.680795 2934 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4-kube-api-access-qr9rq" (OuterVolumeSpecName: "kube-api-access-qr9rq") pod "df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4" (UID: "df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4"). InnerVolumeSpecName "kube-api-access-qr9rq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 00:12:27.682938 kubelet[2934]: I0707 00:12:27.682851 2934 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4" (UID: "df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 7 00:12:27.683306 systemd[1]: var-lib-kubelet-pods-df9c7771\x2d2c13\x2d414f\x2dad4d\x2ddfbf8c3c7ca4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqr9rq.mount: Deactivated successfully. Jul 7 00:12:27.683622 systemd[1]: var-lib-kubelet-pods-df9c7771\x2d2c13\x2d414f\x2dad4d\x2ddfbf8c3c7ca4-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 7 00:12:27.749598 kubelet[2934]: I0707 00:12:27.749339 2934 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4-whisker-ca-bundle\") on node \"ci-4081-3-4-d-d476fda7c5\" DevicePath \"\"" Jul 7 00:12:27.749598 kubelet[2934]: I0707 00:12:27.749425 2934 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4-whisker-backend-key-pair\") on node \"ci-4081-3-4-d-d476fda7c5\" DevicePath \"\"" Jul 7 00:12:27.749598 kubelet[2934]: I0707 00:12:27.749447 2934 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qr9rq\" (UniqueName: \"kubernetes.io/projected/df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4-kube-api-access-qr9rq\") on node \"ci-4081-3-4-d-d476fda7c5\" DevicePath \"\"" Jul 7 00:12:27.976487 kubelet[2934]: I0707 00:12:27.975391 2934 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:12:28.153933 kubelet[2934]: I0707 00:12:28.153846 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkvtv\" (UniqueName: \"kubernetes.io/projected/b377850a-5d23-4fba-8751-1f7d0f183b06-kube-api-access-gkvtv\") pod \"whisker-7f5f6d488b-6gv5w\" (UID: \"b377850a-5d23-4fba-8751-1f7d0f183b06\") " pod="calico-system/whisker-7f5f6d488b-6gv5w" Jul 7 00:12:28.153933 kubelet[2934]: I0707 00:12:28.153917 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b377850a-5d23-4fba-8751-1f7d0f183b06-whisker-ca-bundle\") pod \"whisker-7f5f6d488b-6gv5w\" (UID: \"b377850a-5d23-4fba-8751-1f7d0f183b06\") " pod="calico-system/whisker-7f5f6d488b-6gv5w" Jul 7 00:12:28.153933 kubelet[2934]: I0707 00:12:28.153939 2934 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b377850a-5d23-4fba-8751-1f7d0f183b06-whisker-backend-key-pair\") pod \"whisker-7f5f6d488b-6gv5w\" (UID: \"b377850a-5d23-4fba-8751-1f7d0f183b06\") " pod="calico-system/whisker-7f5f6d488b-6gv5w" Jul 7 00:12:28.402956 containerd[1622]: time="2025-07-07T00:12:28.398277378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f5f6d488b-6gv5w,Uid:b377850a-5d23-4fba-8751-1f7d0f183b06,Namespace:calico-system,Attempt:0,}" Jul 7 00:12:28.778731 systemd-networkd[1252]: cali3b6795b284e: Link UP Jul 7 00:12:28.778993 systemd-networkd[1252]: 
cali3b6795b284e: Gained carrier Jul 7 00:12:28.798283 containerd[1622]: 2025-07-07 00:12:28.606 [INFO][4254] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:12:28.798283 containerd[1622]: 2025-07-07 00:12:28.629 [INFO][4254] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--d--d476fda7c5-k8s-whisker--7f5f6d488b--6gv5w-eth0 whisker-7f5f6d488b- calico-system b377850a-5d23-4fba-8751-1f7d0f183b06 879 0 2025-07-07 00:12:28 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7f5f6d488b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-4-d-d476fda7c5 whisker-7f5f6d488b-6gv5w eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3b6795b284e [] [] }} ContainerID="934215ba328e85f3aa0d9ed37ff5e0aed4da83ec496ce3f2b1b3ab243f1bd48e" Namespace="calico-system" Pod="whisker-7f5f6d488b-6gv5w" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-whisker--7f5f6d488b--6gv5w-" Jul 7 00:12:28.798283 containerd[1622]: 2025-07-07 00:12:28.629 [INFO][4254] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="934215ba328e85f3aa0d9ed37ff5e0aed4da83ec496ce3f2b1b3ab243f1bd48e" Namespace="calico-system" Pod="whisker-7f5f6d488b-6gv5w" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-whisker--7f5f6d488b--6gv5w-eth0" Jul 7 00:12:28.798283 containerd[1622]: 2025-07-07 00:12:28.702 [INFO][4270] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="934215ba328e85f3aa0d9ed37ff5e0aed4da83ec496ce3f2b1b3ab243f1bd48e" HandleID="k8s-pod-network.934215ba328e85f3aa0d9ed37ff5e0aed4da83ec496ce3f2b1b3ab243f1bd48e" Workload="ci--4081--3--4--d--d476fda7c5-k8s-whisker--7f5f6d488b--6gv5w-eth0" Jul 7 00:12:28.798283 containerd[1622]: 2025-07-07 00:12:28.703 [INFO][4270] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="934215ba328e85f3aa0d9ed37ff5e0aed4da83ec496ce3f2b1b3ab243f1bd48e" HandleID="k8s-pod-network.934215ba328e85f3aa0d9ed37ff5e0aed4da83ec496ce3f2b1b3ab243f1bd48e" Workload="ci--4081--3--4--d--d476fda7c5-k8s-whisker--7f5f6d488b--6gv5w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d59d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-d-d476fda7c5", "pod":"whisker-7f5f6d488b-6gv5w", "timestamp":"2025-07-07 00:12:28.702895192 +0000 UTC"}, Hostname:"ci-4081-3-4-d-d476fda7c5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:12:28.798283 containerd[1622]: 2025-07-07 00:12:28.703 [INFO][4270] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:28.798283 containerd[1622]: 2025-07-07 00:12:28.703 [INFO][4270] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:12:28.798283 containerd[1622]: 2025-07-07 00:12:28.703 [INFO][4270] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-d-d476fda7c5' Jul 7 00:12:28.798283 containerd[1622]: 2025-07-07 00:12:28.714 [INFO][4270] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.934215ba328e85f3aa0d9ed37ff5e0aed4da83ec496ce3f2b1b3ab243f1bd48e" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:28.798283 containerd[1622]: 2025-07-07 00:12:28.724 [INFO][4270] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:28.798283 containerd[1622]: 2025-07-07 00:12:28.732 [INFO][4270] ipam/ipam.go 511: Trying affinity for 192.168.18.64/26 host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:28.798283 containerd[1622]: 2025-07-07 00:12:28.735 [INFO][4270] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.64/26 host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:28.798283 containerd[1622]: 2025-07-07 00:12:28.738 [INFO][4270] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:28.798283 containerd[1622]: 2025-07-07 00:12:28.738 [INFO][4270] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.934215ba328e85f3aa0d9ed37ff5e0aed4da83ec496ce3f2b1b3ab243f1bd48e" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:28.798283 containerd[1622]: 2025-07-07 00:12:28.740 [INFO][4270] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.934215ba328e85f3aa0d9ed37ff5e0aed4da83ec496ce3f2b1b3ab243f1bd48e Jul 7 00:12:28.798283 containerd[1622]: 2025-07-07 00:12:28.746 [INFO][4270] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.934215ba328e85f3aa0d9ed37ff5e0aed4da83ec496ce3f2b1b3ab243f1bd48e" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:28.798283 containerd[1622]: 2025-07-07 00:12:28.756 [INFO][4270] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.18.65/26] block=192.168.18.64/26 handle="k8s-pod-network.934215ba328e85f3aa0d9ed37ff5e0aed4da83ec496ce3f2b1b3ab243f1bd48e" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:28.798283 containerd[1622]: 2025-07-07 00:12:28.756 [INFO][4270] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.65/26] handle="k8s-pod-network.934215ba328e85f3aa0d9ed37ff5e0aed4da83ec496ce3f2b1b3ab243f1bd48e" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:28.798283 containerd[1622]: 2025-07-07 00:12:28.756 [INFO][4270] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:12:28.798283 containerd[1622]: 2025-07-07 00:12:28.756 [INFO][4270] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.65/26] IPv6=[] ContainerID="934215ba328e85f3aa0d9ed37ff5e0aed4da83ec496ce3f2b1b3ab243f1bd48e" HandleID="k8s-pod-network.934215ba328e85f3aa0d9ed37ff5e0aed4da83ec496ce3f2b1b3ab243f1bd48e" Workload="ci--4081--3--4--d--d476fda7c5-k8s-whisker--7f5f6d488b--6gv5w-eth0" Jul 7 00:12:28.801038 containerd[1622]: 2025-07-07 00:12:28.759 [INFO][4254] cni-plugin/k8s.go 418: Populated endpoint ContainerID="934215ba328e85f3aa0d9ed37ff5e0aed4da83ec496ce3f2b1b3ab243f1bd48e" Namespace="calico-system" Pod="whisker-7f5f6d488b-6gv5w" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-whisker--7f5f6d488b--6gv5w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-whisker--7f5f6d488b--6gv5w-eth0", GenerateName:"whisker-7f5f6d488b-", Namespace:"calico-system", SelfLink:"", UID:"b377850a-5d23-4fba-8751-1f7d0f183b06", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 12, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f5f6d488b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"", Pod:"whisker-7f5f6d488b-6gv5w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.18.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3b6795b284e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:28.801038 containerd[1622]: 2025-07-07 00:12:28.759 [INFO][4254] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.65/32] ContainerID="934215ba328e85f3aa0d9ed37ff5e0aed4da83ec496ce3f2b1b3ab243f1bd48e" Namespace="calico-system" Pod="whisker-7f5f6d488b-6gv5w" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-whisker--7f5f6d488b--6gv5w-eth0" Jul 7 00:12:28.801038 containerd[1622]: 2025-07-07 00:12:28.760 [INFO][4254] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b6795b284e ContainerID="934215ba328e85f3aa0d9ed37ff5e0aed4da83ec496ce3f2b1b3ab243f1bd48e" Namespace="calico-system" Pod="whisker-7f5f6d488b-6gv5w" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-whisker--7f5f6d488b--6gv5w-eth0" Jul 7 00:12:28.801038 containerd[1622]: 2025-07-07 00:12:28.776 [INFO][4254] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="934215ba328e85f3aa0d9ed37ff5e0aed4da83ec496ce3f2b1b3ab243f1bd48e" Namespace="calico-system" Pod="whisker-7f5f6d488b-6gv5w" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-whisker--7f5f6d488b--6gv5w-eth0" Jul 7 00:12:28.801038 containerd[1622]: 2025-07-07 00:12:28.777 [INFO][4254] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="934215ba328e85f3aa0d9ed37ff5e0aed4da83ec496ce3f2b1b3ab243f1bd48e" Namespace="calico-system" 
Pod="whisker-7f5f6d488b-6gv5w" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-whisker--7f5f6d488b--6gv5w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-whisker--7f5f6d488b--6gv5w-eth0", GenerateName:"whisker-7f5f6d488b-", Namespace:"calico-system", SelfLink:"", UID:"b377850a-5d23-4fba-8751-1f7d0f183b06", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 12, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f5f6d488b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"934215ba328e85f3aa0d9ed37ff5e0aed4da83ec496ce3f2b1b3ab243f1bd48e", Pod:"whisker-7f5f6d488b-6gv5w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.18.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3b6795b284e", MAC:"f6:a4:f4:05:e2:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:28.801038 containerd[1622]: 2025-07-07 00:12:28.793 [INFO][4254] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="934215ba328e85f3aa0d9ed37ff5e0aed4da83ec496ce3f2b1b3ab243f1bd48e" Namespace="calico-system" Pod="whisker-7f5f6d488b-6gv5w" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-whisker--7f5f6d488b--6gv5w-eth0" Jul 7 00:12:28.859139 containerd[1622]: time="2025-07-07T00:12:28.859031621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:12:28.859139 containerd[1622]: time="2025-07-07T00:12:28.859105580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:12:28.859344 containerd[1622]: time="2025-07-07T00:12:28.859118194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:12:28.859344 containerd[1622]: time="2025-07-07T00:12:28.859204685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:12:28.911610 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 00:12:28.910177 systemd-resolved[1515]: Under memory pressure, flushing caches. Jul 7 00:12:28.910189 systemd-resolved[1515]: Flushed all caches. 
Jul 7 00:12:28.936249 containerd[1622]: time="2025-07-07T00:12:28.936010824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f5f6d488b-6gv5w,Uid:b377850a-5d23-4fba-8751-1f7d0f183b06,Namespace:calico-system,Attempt:0,} returns sandbox id \"934215ba328e85f3aa0d9ed37ff5e0aed4da83ec496ce3f2b1b3ab243f1bd48e\"" Jul 7 00:12:28.941355 containerd[1622]: time="2025-07-07T00:12:28.940684756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 00:12:29.661468 kubelet[2934]: I0707 00:12:29.661388 2934 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4" path="/var/lib/kubelet/pods/df9c7771-2c13-414f-ad4d-dfbf8c3c7ca4/volumes" Jul 7 00:12:29.803888 systemd-networkd[1252]: cali3b6795b284e: Gained IPv6LL Jul 7 00:12:30.659357 containerd[1622]: time="2025-07-07T00:12:30.658602427Z" level=info msg="StopPodSandbox for \"2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649\"" Jul 7 00:12:30.660910 containerd[1622]: time="2025-07-07T00:12:30.659719681Z" level=info msg="StopPodSandbox for \"9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df\"" Jul 7 00:12:30.660910 containerd[1622]: time="2025-07-07T00:12:30.659835228Z" level=info msg="StopPodSandbox for \"04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf\"" Jul 7 00:12:30.929311 containerd[1622]: 2025-07-07 00:12:30.798 [INFO][4376] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" Jul 7 00:12:30.929311 containerd[1622]: 2025-07-07 00:12:30.801 [INFO][4376] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" iface="eth0" netns="/var/run/netns/cni-715010b7-aa34-903d-f7c3-58a3b31f9ca8" Jul 7 00:12:30.929311 containerd[1622]: 2025-07-07 00:12:30.801 [INFO][4376] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" iface="eth0" netns="/var/run/netns/cni-715010b7-aa34-903d-f7c3-58a3b31f9ca8" Jul 7 00:12:30.929311 containerd[1622]: 2025-07-07 00:12:30.802 [INFO][4376] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" iface="eth0" netns="/var/run/netns/cni-715010b7-aa34-903d-f7c3-58a3b31f9ca8" Jul 7 00:12:30.929311 containerd[1622]: 2025-07-07 00:12:30.802 [INFO][4376] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" Jul 7 00:12:30.929311 containerd[1622]: 2025-07-07 00:12:30.802 [INFO][4376] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" Jul 7 00:12:30.929311 containerd[1622]: 2025-07-07 00:12:30.880 [INFO][4407] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" HandleID="k8s-pod-network.04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" Workload="ci--4081--3--4--d--d476fda7c5-k8s-goldmane--58fd7646b9--98rfd-eth0" Jul 7 00:12:30.929311 containerd[1622]: 2025-07-07 00:12:30.881 [INFO][4407] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:30.929311 containerd[1622]: 2025-07-07 00:12:30.881 [INFO][4407] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:12:30.929311 containerd[1622]: 2025-07-07 00:12:30.893 [WARNING][4407] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" HandleID="k8s-pod-network.04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" Workload="ci--4081--3--4--d--d476fda7c5-k8s-goldmane--58fd7646b9--98rfd-eth0" Jul 7 00:12:30.929311 containerd[1622]: 2025-07-07 00:12:30.893 [INFO][4407] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" HandleID="k8s-pod-network.04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" Workload="ci--4081--3--4--d--d476fda7c5-k8s-goldmane--58fd7646b9--98rfd-eth0" Jul 7 00:12:30.929311 containerd[1622]: 2025-07-07 00:12:30.894 [INFO][4407] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:12:30.929311 containerd[1622]: 2025-07-07 00:12:30.903 [INFO][4376] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" Jul 7 00:12:30.929311 containerd[1622]: time="2025-07-07T00:12:30.927854440Z" level=info msg="TearDown network for sandbox \"04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf\" successfully" Jul 7 00:12:30.929311 containerd[1622]: time="2025-07-07T00:12:30.927890198Z" level=info msg="StopPodSandbox for \"04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf\" returns successfully" Jul 7 00:12:30.935246 containerd[1622]: 2025-07-07 00:12:30.770 [INFO][4384] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" Jul 7 00:12:30.935246 containerd[1622]: 2025-07-07 00:12:30.770 [INFO][4384] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" iface="eth0" netns="/var/run/netns/cni-5f78c077-9f59-0db0-3a18-db6880a38c75" Jul 7 00:12:30.935246 containerd[1622]: 2025-07-07 00:12:30.771 [INFO][4384] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" iface="eth0" netns="/var/run/netns/cni-5f78c077-9f59-0db0-3a18-db6880a38c75" Jul 7 00:12:30.935246 containerd[1622]: 2025-07-07 00:12:30.771 [INFO][4384] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" iface="eth0" netns="/var/run/netns/cni-5f78c077-9f59-0db0-3a18-db6880a38c75" Jul 7 00:12:30.935246 containerd[1622]: 2025-07-07 00:12:30.771 [INFO][4384] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" Jul 7 00:12:30.935246 containerd[1622]: 2025-07-07 00:12:30.771 [INFO][4384] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" Jul 7 00:12:30.935246 containerd[1622]: 2025-07-07 00:12:30.880 [INFO][4399] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" HandleID="k8s-pod-network.9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" Workload="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--b8gzw-eth0" Jul 7 00:12:30.935246 containerd[1622]: 2025-07-07 00:12:30.881 [INFO][4399] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:30.935246 containerd[1622]: 2025-07-07 00:12:30.895 [INFO][4399] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:12:30.935246 containerd[1622]: 2025-07-07 00:12:30.919 [WARNING][4399] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" HandleID="k8s-pod-network.9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" Workload="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--b8gzw-eth0" Jul 7 00:12:30.935246 containerd[1622]: 2025-07-07 00:12:30.919 [INFO][4399] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" HandleID="k8s-pod-network.9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" Workload="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--b8gzw-eth0" Jul 7 00:12:30.935246 containerd[1622]: 2025-07-07 00:12:30.923 [INFO][4399] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:12:30.935246 containerd[1622]: 2025-07-07 00:12:30.926 [INFO][4384] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" Jul 7 00:12:30.932447 systemd[1]: run-netns-cni\x2d715010b7\x2daa34\x2d903d\x2df7c3\x2d58a3b31f9ca8.mount: Deactivated successfully. Jul 7 00:12:30.935913 containerd[1622]: time="2025-07-07T00:12:30.935810868Z" level=info msg="TearDown network for sandbox \"9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df\" successfully" Jul 7 00:12:30.935913 containerd[1622]: time="2025-07-07T00:12:30.935857816Z" level=info msg="StopPodSandbox for \"9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df\" returns successfully" Jul 7 00:12:30.938870 systemd[1]: run-netns-cni\x2d5f78c077\x2d9f59\x2d0db0\x2d3a18\x2ddb6880a38c75.mount: Deactivated successfully. 
Jul 7 00:12:30.942530 containerd[1622]: time="2025-07-07T00:12:30.940050928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-b8gzw,Uid:e3c9458d-3506-4ae4-801e-b2be3c20a3f6,Namespace:kube-system,Attempt:1,}" Jul 7 00:12:30.944257 containerd[1622]: time="2025-07-07T00:12:30.942878009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-98rfd,Uid:d6e879d5-3fea-4271-b519-f5824551a918,Namespace:calico-system,Attempt:1,}" Jul 7 00:12:30.981403 containerd[1622]: 2025-07-07 00:12:30.792 [INFO][4380] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" Jul 7 00:12:30.981403 containerd[1622]: 2025-07-07 00:12:30.794 [INFO][4380] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" iface="eth0" netns="/var/run/netns/cni-f91035b2-9915-dc72-1ac2-713c90a9664c" Jul 7 00:12:30.981403 containerd[1622]: 2025-07-07 00:12:30.794 [INFO][4380] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" iface="eth0" netns="/var/run/netns/cni-f91035b2-9915-dc72-1ac2-713c90a9664c" Jul 7 00:12:30.981403 containerd[1622]: 2025-07-07 00:12:30.795 [INFO][4380] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" iface="eth0" netns="/var/run/netns/cni-f91035b2-9915-dc72-1ac2-713c90a9664c" Jul 7 00:12:30.981403 containerd[1622]: 2025-07-07 00:12:30.795 [INFO][4380] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" Jul 7 00:12:30.981403 containerd[1622]: 2025-07-07 00:12:30.795 [INFO][4380] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" Jul 7 00:12:30.981403 containerd[1622]: 2025-07-07 00:12:30.903 [INFO][4405] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" HandleID="k8s-pod-network.2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" Workload="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--tz5tm-eth0" Jul 7 00:12:30.981403 containerd[1622]: 2025-07-07 00:12:30.903 [INFO][4405] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:30.981403 containerd[1622]: 2025-07-07 00:12:30.923 [INFO][4405] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:12:30.981403 containerd[1622]: 2025-07-07 00:12:30.948 [WARNING][4405] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" HandleID="k8s-pod-network.2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" Workload="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--tz5tm-eth0" Jul 7 00:12:30.981403 containerd[1622]: 2025-07-07 00:12:30.948 [INFO][4405] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" HandleID="k8s-pod-network.2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" Workload="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--tz5tm-eth0" Jul 7 00:12:30.981403 containerd[1622]: 2025-07-07 00:12:30.953 [INFO][4405] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:12:30.981403 containerd[1622]: 2025-07-07 00:12:30.964 [INFO][4380] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" Jul 7 00:12:30.982227 containerd[1622]: time="2025-07-07T00:12:30.982189245Z" level=info msg="TearDown network for sandbox \"2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649\" successfully" Jul 7 00:12:30.982307 containerd[1622]: time="2025-07-07T00:12:30.982279404Z" level=info msg="StopPodSandbox for \"2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649\" returns successfully" Jul 7 00:12:30.986582 containerd[1622]: time="2025-07-07T00:12:30.986562074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tz5tm,Uid:1b44121e-b0a3-4592-b84f-01e54dbb20d5,Namespace:kube-system,Attempt:1,}" Jul 7 00:12:30.988807 systemd[1]: run-netns-cni\x2df91035b2\x2d9915\x2ddc72\x2d1ac2\x2d713c90a9664c.mount: Deactivated successfully. Jul 7 00:12:31.255789 systemd-networkd[1252]: cali36da9362fc1: Link UP Jul 7 00:12:31.255960 systemd-networkd[1252]: cali36da9362fc1: Gained carrier Jul 7 00:12:31.309904 containerd[1622]: 2025-07-07 00:12:31.112 [INFO][4444] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:12:31.309904 containerd[1622]: 2025-07-07 00:12:31.141 [INFO][4444] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--d--d476fda7c5-k8s-goldmane--58fd7646b9--98rfd-eth0 goldmane-58fd7646b9- calico-system d6e879d5-3fea-4271-b519-f5824551a918 894 0 2025-07-07 00:12:04 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-4-d-d476fda7c5 goldmane-58fd7646b9-98rfd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali36da9362fc1 [] [] }} ContainerID="73cb1e2be3fe96547982a57898f4c421e06953cbbf297c0724ac25bb0466a6b5" Namespace="calico-system" Pod="goldmane-58fd7646b9-98rfd" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-goldmane--58fd7646b9--98rfd-" Jul 7 00:12:31.309904 containerd[1622]: 2025-07-07 00:12:31.142 [INFO][4444] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="73cb1e2be3fe96547982a57898f4c421e06953cbbf297c0724ac25bb0466a6b5" Namespace="calico-system" Pod="goldmane-58fd7646b9-98rfd" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-goldmane--58fd7646b9--98rfd-eth0" Jul 7 00:12:31.309904 containerd[1622]: 2025-07-07 00:12:31.192 [INFO][4479] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="73cb1e2be3fe96547982a57898f4c421e06953cbbf297c0724ac25bb0466a6b5" HandleID="k8s-pod-network.73cb1e2be3fe96547982a57898f4c421e06953cbbf297c0724ac25bb0466a6b5" Workload="ci--4081--3--4--d--d476fda7c5-k8s-goldmane--58fd7646b9--98rfd-eth0" Jul 7 00:12:31.309904 containerd[1622]: 2025-07-07 00:12:31.193 [INFO][4479] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="73cb1e2be3fe96547982a57898f4c421e06953cbbf297c0724ac25bb0466a6b5" HandleID="k8s-pod-network.73cb1e2be3fe96547982a57898f4c421e06953cbbf297c0724ac25bb0466a6b5" Workload="ci--4081--3--4--d--d476fda7c5-k8s-goldmane--58fd7646b9--98rfd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332140), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-d-d476fda7c5", "pod":"goldmane-58fd7646b9-98rfd", "timestamp":"2025-07-07 00:12:31.192930695 +0000 UTC"}, Hostname:"ci-4081-3-4-d-d476fda7c5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:12:31.309904 containerd[1622]: 2025-07-07 00:12:31.193 [INFO][4479] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:31.309904 containerd[1622]: 2025-07-07 00:12:31.193 [INFO][4479] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:12:31.309904 containerd[1622]: 2025-07-07 00:12:31.193 [INFO][4479] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-d-d476fda7c5' Jul 7 00:12:31.309904 containerd[1622]: 2025-07-07 00:12:31.204 [INFO][4479] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.73cb1e2be3fe96547982a57898f4c421e06953cbbf297c0724ac25bb0466a6b5" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:31.309904 containerd[1622]: 2025-07-07 00:12:31.212 [INFO][4479] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:31.309904 containerd[1622]: 2025-07-07 00:12:31.219 [INFO][4479] ipam/ipam.go 511: Trying affinity for 192.168.18.64/26 host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:31.309904 containerd[1622]: 2025-07-07 00:12:31.221 [INFO][4479] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.64/26 host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:31.309904 containerd[1622]: 2025-07-07 00:12:31.224 [INFO][4479] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:31.309904 containerd[1622]: 2025-07-07 00:12:31.224 [INFO][4479] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.73cb1e2be3fe96547982a57898f4c421e06953cbbf297c0724ac25bb0466a6b5" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:31.309904 containerd[1622]: 2025-07-07 00:12:31.226 [INFO][4479] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.73cb1e2be3fe96547982a57898f4c421e06953cbbf297c0724ac25bb0466a6b5 Jul 7 00:12:31.309904 containerd[1622]: 2025-07-07 00:12:31.233 [INFO][4479] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.73cb1e2be3fe96547982a57898f4c421e06953cbbf297c0724ac25bb0466a6b5" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:31.309904 containerd[1622]: 2025-07-07 00:12:31.239 [INFO][4479] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.18.66/26] block=192.168.18.64/26 handle="k8s-pod-network.73cb1e2be3fe96547982a57898f4c421e06953cbbf297c0724ac25bb0466a6b5" 
host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:31.309904 containerd[1622]: 2025-07-07 00:12:31.239 [INFO][4479] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.66/26] handle="k8s-pod-network.73cb1e2be3fe96547982a57898f4c421e06953cbbf297c0724ac25bb0466a6b5" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:31.309904 containerd[1622]: 2025-07-07 00:12:31.239 [INFO][4479] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:12:31.309904 containerd[1622]: 2025-07-07 00:12:31.239 [INFO][4479] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.66/26] IPv6=[] ContainerID="73cb1e2be3fe96547982a57898f4c421e06953cbbf297c0724ac25bb0466a6b5" HandleID="k8s-pod-network.73cb1e2be3fe96547982a57898f4c421e06953cbbf297c0724ac25bb0466a6b5" Workload="ci--4081--3--4--d--d476fda7c5-k8s-goldmane--58fd7646b9--98rfd-eth0" Jul 7 00:12:31.310584 containerd[1622]: 2025-07-07 00:12:31.246 [INFO][4444] cni-plugin/k8s.go 418: Populated endpoint ContainerID="73cb1e2be3fe96547982a57898f4c421e06953cbbf297c0724ac25bb0466a6b5" Namespace="calico-system" Pod="goldmane-58fd7646b9-98rfd" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-goldmane--58fd7646b9--98rfd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-goldmane--58fd7646b9--98rfd-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"d6e879d5-3fea-4271-b519-f5824551a918", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 12, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"", Pod:"goldmane-58fd7646b9-98rfd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.18.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali36da9362fc1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:31.310584 containerd[1622]: 2025-07-07 00:12:31.247 [INFO][4444] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.66/32] ContainerID="73cb1e2be3fe96547982a57898f4c421e06953cbbf297c0724ac25bb0466a6b5" Namespace="calico-system" Pod="goldmane-58fd7646b9-98rfd" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-goldmane--58fd7646b9--98rfd-eth0" Jul 7 00:12:31.310584 containerd[1622]: 2025-07-07 00:12:31.247 [INFO][4444] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36da9362fc1 ContainerID="73cb1e2be3fe96547982a57898f4c421e06953cbbf297c0724ac25bb0466a6b5" Namespace="calico-system" Pod="goldmane-58fd7646b9-98rfd" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-goldmane--58fd7646b9--98rfd-eth0" Jul 7 00:12:31.310584 containerd[1622]: 2025-07-07 00:12:31.255 [INFO][4444] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="73cb1e2be3fe96547982a57898f4c421e06953cbbf297c0724ac25bb0466a6b5" Namespace="calico-system" Pod="goldmane-58fd7646b9-98rfd" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-goldmane--58fd7646b9--98rfd-eth0" Jul 7 00:12:31.310584 containerd[1622]: 2025-07-07 00:12:31.261 [INFO][4444] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="73cb1e2be3fe96547982a57898f4c421e06953cbbf297c0724ac25bb0466a6b5" Namespace="calico-system" Pod="goldmane-58fd7646b9-98rfd" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-goldmane--58fd7646b9--98rfd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-goldmane--58fd7646b9--98rfd-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"d6e879d5-3fea-4271-b519-f5824551a918", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 12, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"73cb1e2be3fe96547982a57898f4c421e06953cbbf297c0724ac25bb0466a6b5", Pod:"goldmane-58fd7646b9-98rfd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.18.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali36da9362fc1", MAC:"b2:99:f0:51:67:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:31.310584 containerd[1622]: 2025-07-07 00:12:31.295 [INFO][4444] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="73cb1e2be3fe96547982a57898f4c421e06953cbbf297c0724ac25bb0466a6b5" Namespace="calico-system" Pod="goldmane-58fd7646b9-98rfd" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-goldmane--58fd7646b9--98rfd-eth0" Jul 7 00:12:31.465515 containerd[1622]: time="2025-07-07T00:12:31.463197972Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:12:31.465515 containerd[1622]: time="2025-07-07T00:12:31.463268234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:12:31.465515 containerd[1622]: time="2025-07-07T00:12:31.463282541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:12:31.465515 containerd[1622]: time="2025-07-07T00:12:31.463384452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:12:31.502485 systemd-networkd[1252]: cali5f925927565: Link UP Jul 7 00:12:31.503728 systemd-networkd[1252]: cali5f925927565: Gained carrier Jul 7 00:12:31.506954 containerd[1622]: time="2025-07-07T00:12:31.506738408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:31.511240 containerd[1622]: time="2025-07-07T00:12:31.510624745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 7 00:12:31.518745 containerd[1622]: time="2025-07-07T00:12:31.517958225Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:31.529010 containerd[1622]: time="2025-07-07T00:12:31.528422634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:31.531176 containerd[1622]: time="2025-07-07T00:12:31.531108461Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 2.590379552s" Jul 7 00:12:31.531869 containerd[1622]: time="2025-07-07T00:12:31.531685893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 7 00:12:31.539240 containerd[1622]: time="2025-07-07T00:12:31.539153565Z" level=info msg="CreateContainer within sandbox \"934215ba328e85f3aa0d9ed37ff5e0aed4da83ec496ce3f2b1b3ab243f1bd48e\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 00:12:31.548537 containerd[1622]: 2025-07-07 00:12:31.142 [INFO][4454] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:12:31.548537 containerd[1622]: 2025-07-07 00:12:31.170 [INFO][4454] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--tz5tm-eth0 coredns-7c65d6cfc9- kube-system 1b44121e-b0a3-4592-b84f-01e54dbb20d5 895 0 2025-07-07 00:11:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-4-d-d476fda7c5 coredns-7c65d6cfc9-tz5tm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5f925927565 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f2f72e98ea5db722da834471d0f5ceba9418651a1eeb1e7d8be1e9cc3d897c9c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tz5tm" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--tz5tm-" Jul 7 00:12:31.548537 containerd[1622]: 2025-07-07 00:12:31.170 [INFO][4454] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f2f72e98ea5db722da834471d0f5ceba9418651a1eeb1e7d8be1e9cc3d897c9c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tz5tm" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--tz5tm-eth0" Jul 
7 00:12:31.548537 containerd[1622]: 2025-07-07 00:12:31.211 [INFO][4490] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f2f72e98ea5db722da834471d0f5ceba9418651a1eeb1e7d8be1e9cc3d897c9c" HandleID="k8s-pod-network.f2f72e98ea5db722da834471d0f5ceba9418651a1eeb1e7d8be1e9cc3d897c9c" Workload="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--tz5tm-eth0" Jul 7 00:12:31.548537 containerd[1622]: 2025-07-07 00:12:31.211 [INFO][4490] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f2f72e98ea5db722da834471d0f5ceba9418651a1eeb1e7d8be1e9cc3d897c9c" HandleID="k8s-pod-network.f2f72e98ea5db722da834471d0f5ceba9418651a1eeb1e7d8be1e9cc3d897c9c" Workload="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--tz5tm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f6d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-4-d-d476fda7c5", "pod":"coredns-7c65d6cfc9-tz5tm", "timestamp":"2025-07-07 00:12:31.211143492 +0000 UTC"}, Hostname:"ci-4081-3-4-d-d476fda7c5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:12:31.548537 containerd[1622]: 2025-07-07 00:12:31.212 [INFO][4490] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:31.548537 containerd[1622]: 2025-07-07 00:12:31.240 [INFO][4490] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:12:31.548537 containerd[1622]: 2025-07-07 00:12:31.241 [INFO][4490] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-d-d476fda7c5' Jul 7 00:12:31.548537 containerd[1622]: 2025-07-07 00:12:31.329 [INFO][4490] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f2f72e98ea5db722da834471d0f5ceba9418651a1eeb1e7d8be1e9cc3d897c9c" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:31.548537 containerd[1622]: 2025-07-07 00:12:31.374 [INFO][4490] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:31.548537 containerd[1622]: 2025-07-07 00:12:31.412 [INFO][4490] ipam/ipam.go 511: Trying affinity for 192.168.18.64/26 host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:31.548537 containerd[1622]: 2025-07-07 00:12:31.422 [INFO][4490] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.64/26 host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:31.548537 containerd[1622]: 2025-07-07 00:12:31.442 [INFO][4490] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:31.548537 containerd[1622]: 2025-07-07 00:12:31.443 [INFO][4490] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.f2f72e98ea5db722da834471d0f5ceba9418651a1eeb1e7d8be1e9cc3d897c9c" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:31.548537 containerd[1622]: 2025-07-07 00:12:31.447 [INFO][4490] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f2f72e98ea5db722da834471d0f5ceba9418651a1eeb1e7d8be1e9cc3d897c9c Jul 7 00:12:31.548537 containerd[1622]: 2025-07-07 00:12:31.459 [INFO][4490] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.f2f72e98ea5db722da834471d0f5ceba9418651a1eeb1e7d8be1e9cc3d897c9c" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:31.548537 containerd[1622]: 2025-07-07 00:12:31.470 [INFO][4490] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.18.67/26] 
block=192.168.18.64/26 handle="k8s-pod-network.f2f72e98ea5db722da834471d0f5ceba9418651a1eeb1e7d8be1e9cc3d897c9c" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:31.548537 containerd[1622]: 2025-07-07 00:12:31.470 [INFO][4490] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.67/26] handle="k8s-pod-network.f2f72e98ea5db722da834471d0f5ceba9418651a1eeb1e7d8be1e9cc3d897c9c" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:31.548537 containerd[1622]: 2025-07-07 00:12:31.470 [INFO][4490] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:12:31.548537 containerd[1622]: 2025-07-07 00:12:31.470 [INFO][4490] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.67/26] IPv6=[] ContainerID="f2f72e98ea5db722da834471d0f5ceba9418651a1eeb1e7d8be1e9cc3d897c9c" HandleID="k8s-pod-network.f2f72e98ea5db722da834471d0f5ceba9418651a1eeb1e7d8be1e9cc3d897c9c" Workload="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--tz5tm-eth0" Jul 7 00:12:31.550930 containerd[1622]: 2025-07-07 00:12:31.495 [INFO][4454] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f2f72e98ea5db722da834471d0f5ceba9418651a1eeb1e7d8be1e9cc3d897c9c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tz5tm" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--tz5tm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--tz5tm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1b44121e-b0a3-4592-b84f-01e54dbb20d5", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 11, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"", Pod:"coredns-7c65d6cfc9-tz5tm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f925927565", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:31.550930 containerd[1622]: 2025-07-07 00:12:31.496 [INFO][4454] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.67/32] ContainerID="f2f72e98ea5db722da834471d0f5ceba9418651a1eeb1e7d8be1e9cc3d897c9c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tz5tm" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--tz5tm-eth0" Jul 7 00:12:31.550930 containerd[1622]: 2025-07-07 00:12:31.496 [INFO][4454] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f925927565 ContainerID="f2f72e98ea5db722da834471d0f5ceba9418651a1eeb1e7d8be1e9cc3d897c9c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tz5tm" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--tz5tm-eth0" Jul 7 00:12:31.550930 containerd[1622]: 2025-07-07 00:12:31.505 [INFO][4454] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f2f72e98ea5db722da834471d0f5ceba9418651a1eeb1e7d8be1e9cc3d897c9c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tz5tm" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--tz5tm-eth0" Jul 7 00:12:31.550930 containerd[1622]: 2025-07-07 00:12:31.509 [INFO][4454] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f2f72e98ea5db722da834471d0f5ceba9418651a1eeb1e7d8be1e9cc3d897c9c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tz5tm" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--tz5tm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--tz5tm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1b44121e-b0a3-4592-b84f-01e54dbb20d5", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 11, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"f2f72e98ea5db722da834471d0f5ceba9418651a1eeb1e7d8be1e9cc3d897c9c", Pod:"coredns-7c65d6cfc9-tz5tm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f925927565", MAC:"a2:aa:98:c0:8b:eb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:31.550930 containerd[1622]: 2025-07-07 00:12:31.538 [INFO][4454] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f2f72e98ea5db722da834471d0f5ceba9418651a1eeb1e7d8be1e9cc3d897c9c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tz5tm" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--tz5tm-eth0" Jul 7 00:12:31.564819 containerd[1622]: time="2025-07-07T00:12:31.564743605Z" level=info msg="CreateContainer within sandbox \"934215ba328e85f3aa0d9ed37ff5e0aed4da83ec496ce3f2b1b3ab243f1bd48e\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id 
\"d00f93a9051e4a92700339873e4a2cf9e6024826049c803418589b84d3195653\"" Jul 7 00:12:31.566678 containerd[1622]: time="2025-07-07T00:12:31.566637676Z" level=info msg="StartContainer for \"d00f93a9051e4a92700339873e4a2cf9e6024826049c803418589b84d3195653\"" Jul 7 00:12:31.583651 systemd-networkd[1252]: cali10aef6d942f: Link UP Jul 7 00:12:31.583865 systemd-networkd[1252]: cali10aef6d942f: Gained carrier Jul 7 00:12:31.600483 containerd[1622]: time="2025-07-07T00:12:31.600102370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:12:31.601848 containerd[1622]: time="2025-07-07T00:12:31.600794599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:12:31.602118 containerd[1622]: time="2025-07-07T00:12:31.601989298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:12:31.603221 containerd[1622]: time="2025-07-07T00:12:31.603019551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:12:31.612218 containerd[1622]: 2025-07-07 00:12:31.112 [INFO][4435] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:12:31.612218 containerd[1622]: 2025-07-07 00:12:31.139 [INFO][4435] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--b8gzw-eth0 coredns-7c65d6cfc9- kube-system e3c9458d-3506-4ae4-801e-b2be3c20a3f6 893 0 2025-07-07 00:11:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-4-d-d476fda7c5 coredns-7c65d6cfc9-b8gzw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali10aef6d942f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="10d4460142ee1269584176b707b1efd5c3ab80ec8dd575e642fc42f5bdb7384f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b8gzw" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--b8gzw-" Jul 7 00:12:31.612218 containerd[1622]: 2025-07-07 00:12:31.139 [INFO][4435] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="10d4460142ee1269584176b707b1efd5c3ab80ec8dd575e642fc42f5bdb7384f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b8gzw" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--b8gzw-eth0" Jul 7 00:12:31.612218 containerd[1622]: 2025-07-07 00:12:31.215 [INFO][4484] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10d4460142ee1269584176b707b1efd5c3ab80ec8dd575e642fc42f5bdb7384f" HandleID="k8s-pod-network.10d4460142ee1269584176b707b1efd5c3ab80ec8dd575e642fc42f5bdb7384f" Workload="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--b8gzw-eth0" Jul 7 00:12:31.612218 containerd[1622]: 2025-07-07 00:12:31.215 [INFO][4484] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="10d4460142ee1269584176b707b1efd5c3ab80ec8dd575e642fc42f5bdb7384f" HandleID="k8s-pod-network.10d4460142ee1269584176b707b1efd5c3ab80ec8dd575e642fc42f5bdb7384f" Workload="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--b8gzw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5b10), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-4-d-d476fda7c5", "pod":"coredns-7c65d6cfc9-b8gzw", "timestamp":"2025-07-07 00:12:31.215313712 +0000 UTC"}, Hostname:"ci-4081-3-4-d-d476fda7c5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:12:31.612218 containerd[1622]: 2025-07-07 00:12:31.215 [INFO][4484] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:31.612218 containerd[1622]: 2025-07-07 00:12:31.473 [INFO][4484] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:12:31.612218 containerd[1622]: 2025-07-07 00:12:31.473 [INFO][4484] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-d-d476fda7c5' Jul 7 00:12:31.612218 containerd[1622]: 2025-07-07 00:12:31.506 [INFO][4484] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.10d4460142ee1269584176b707b1efd5c3ab80ec8dd575e642fc42f5bdb7384f" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:31.612218 containerd[1622]: 2025-07-07 00:12:31.515 [INFO][4484] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:31.612218 containerd[1622]: 2025-07-07 00:12:31.533 [INFO][4484] ipam/ipam.go 511: Trying affinity for 192.168.18.64/26 host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:31.612218 containerd[1622]: 2025-07-07 00:12:31.538 [INFO][4484] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.64/26 host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:31.612218 containerd[1622]: 2025-07-07 00:12:31.544 [INFO][4484] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:31.612218 containerd[1622]: 2025-07-07 00:12:31.544 [INFO][4484] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.10d4460142ee1269584176b707b1efd5c3ab80ec8dd575e642fc42f5bdb7384f" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:31.612218 containerd[1622]: 2025-07-07 00:12:31.547 [INFO][4484] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.10d4460142ee1269584176b707b1efd5c3ab80ec8dd575e642fc42f5bdb7384f Jul 7 00:12:31.612218 containerd[1622]: 2025-07-07 00:12:31.560 [INFO][4484] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.10d4460142ee1269584176b707b1efd5c3ab80ec8dd575e642fc42f5bdb7384f" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:31.612218 containerd[1622]: 2025-07-07 00:12:31.573 [INFO][4484] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.18.68/26] block=192.168.18.64/26 handle="k8s-pod-network.10d4460142ee1269584176b707b1efd5c3ab80ec8dd575e642fc42f5bdb7384f" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:31.612218 containerd[1622]: 2025-07-07 00:12:31.574 [INFO][4484] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.68/26] handle="k8s-pod-network.10d4460142ee1269584176b707b1efd5c3ab80ec8dd575e642fc42f5bdb7384f" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:31.612218 containerd[1622]: 2025-07-07 00:12:31.574 [INFO][4484] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
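[Editor's note] The assignArgs dumps above are libcalico-go's ipam.AutoAssignArgs struct printed verbatim, so the IPAM flow in these entries can be reproduced directly against the Calico client. A minimal sketch of the same request in Go, using the HandleID and Attrs logged for coredns-7c65d6cfc9-b8gzw; import paths assume the post-monorepo libcalico-go layout and the exact AutoAssign return type varies across Calico releases, so treat both as assumptions:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/projectcalico/calico/libcalico-go/lib/apiconfig"
	"github.com/projectcalico/calico/libcalico-go/lib/clientv3"
	"github.com/projectcalico/calico/libcalico-go/lib/ipam"
)

func main() {
	// Kubernetes datastore, as on this node; in-cluster defaults assumed.
	c, err := clientv3.New(apiconfig.CalicoAPIConfig{
		Spec: apiconfig.CalicoAPIConfigSpec{DatastoreType: apiconfig.Kubernetes},
	})
	if err != nil {
		log.Fatal(err)
	}

	// Handle and attributes copied from the assignArgs dump above; the handle
	// ties the allocation to the sandbox so teardown can release it later.
	handle := "k8s-pod-network.10d4460142ee1269584176b707b1efd5c3ab80ec8dd575e642fc42f5bdb7384f"
	v4, _, err := c.IPAM().AutoAssign(context.Background(), ipam.AutoAssignArgs{
		Num4:     1,
		Num6:     0,
		HandleID: &handle,
		Attrs: map[string]string{
			"namespace": "kube-system",
			"node":      "ci-4081-3-4-d-d476fda7c5",
			"pod":       "coredns-7c65d6cfc9-b8gzw",
		},
		Hostname:    "ci-4081-3-4-d-d476fda7c5",
		IntendedUse: "Workload",
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("assigned:", v4.IPs) // the log shows [192.168.18.68/26]
}
```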
Jul 7 00:12:31.612218 containerd[1622]: 2025-07-07 00:12:31.574 [INFO][4484] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.68/26] IPv6=[] ContainerID="10d4460142ee1269584176b707b1efd5c3ab80ec8dd575e642fc42f5bdb7384f" HandleID="k8s-pod-network.10d4460142ee1269584176b707b1efd5c3ab80ec8dd575e642fc42f5bdb7384f" Workload="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--b8gzw-eth0" Jul 7 00:12:31.613423 containerd[1622]: 2025-07-07 00:12:31.579 [INFO][4435] cni-plugin/k8s.go 418: Populated endpoint ContainerID="10d4460142ee1269584176b707b1efd5c3ab80ec8dd575e642fc42f5bdb7384f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b8gzw" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--b8gzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--b8gzw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"e3c9458d-3506-4ae4-801e-b2be3c20a3f6", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 11, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"", Pod:"coredns-7c65d6cfc9-b8gzw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10aef6d942f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:31.613423 containerd[1622]: 2025-07-07 00:12:31.579 [INFO][4435] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.68/32] ContainerID="10d4460142ee1269584176b707b1efd5c3ab80ec8dd575e642fc42f5bdb7384f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b8gzw" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--b8gzw-eth0" Jul 7 00:12:31.613423 containerd[1622]: 2025-07-07 00:12:31.580 [INFO][4435] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali10aef6d942f ContainerID="10d4460142ee1269584176b707b1efd5c3ab80ec8dd575e642fc42f5bdb7384f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b8gzw" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--b8gzw-eth0" Jul 7 00:12:31.613423 containerd[1622]: 2025-07-07 00:12:31.583 [INFO][4435] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10d4460142ee1269584176b707b1efd5c3ab80ec8dd575e642fc42f5bdb7384f" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-b8gzw" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--b8gzw-eth0" Jul 7 00:12:31.613423 containerd[1622]: 2025-07-07 00:12:31.583 [INFO][4435] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="10d4460142ee1269584176b707b1efd5c3ab80ec8dd575e642fc42f5bdb7384f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b8gzw" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--b8gzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--b8gzw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"e3c9458d-3506-4ae4-801e-b2be3c20a3f6", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 11, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"10d4460142ee1269584176b707b1efd5c3ab80ec8dd575e642fc42f5bdb7384f", Pod:"coredns-7c65d6cfc9-b8gzw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10aef6d942f", MAC:"6a:df:c0:83:26:af", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:31.613423 containerd[1622]: 2025-07-07 00:12:31.607 [INFO][4435] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="10d4460142ee1269584176b707b1efd5c3ab80ec8dd575e642fc42f5bdb7384f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b8gzw" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--b8gzw-eth0" Jul 7 00:12:31.639853 containerd[1622]: time="2025-07-07T00:12:31.639800763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-98rfd,Uid:d6e879d5-3fea-4271-b519-f5824551a918,Namespace:calico-system,Attempt:1,} returns sandbox id \"73cb1e2be3fe96547982a57898f4c421e06953cbbf297c0724ac25bb0466a6b5\"" Jul 7 00:12:31.643107 containerd[1622]: time="2025-07-07T00:12:31.642997246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 00:12:31.661895 containerd[1622]: time="2025-07-07T00:12:31.661859392Z" level=info msg="StopPodSandbox for \"3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0\"" Jul 7 00:12:31.675961 containerd[1622]: time="2025-07-07T00:12:31.673431118Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:12:31.675961 containerd[1622]: time="2025-07-07T00:12:31.673527559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:12:31.675961 containerd[1622]: time="2025-07-07T00:12:31.673548108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:12:31.680709 containerd[1622]: time="2025-07-07T00:12:31.673651001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:12:31.738044 containerd[1622]: time="2025-07-07T00:12:31.737713635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tz5tm,Uid:1b44121e-b0a3-4592-b84f-01e54dbb20d5,Namespace:kube-system,Attempt:1,} returns sandbox id \"f2f72e98ea5db722da834471d0f5ceba9418651a1eeb1e7d8be1e9cc3d897c9c\"" Jul 7 00:12:31.748106 containerd[1622]: time="2025-07-07T00:12:31.748061597Z" level=info msg="CreateContainer within sandbox \"f2f72e98ea5db722da834471d0f5ceba9418651a1eeb1e7d8be1e9cc3d897c9c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:12:31.798857 containerd[1622]: time="2025-07-07T00:12:31.798353021Z" level=info msg="StartContainer for \"d00f93a9051e4a92700339873e4a2cf9e6024826049c803418589b84d3195653\" returns successfully" Jul 7 00:12:31.805414 containerd[1622]: time="2025-07-07T00:12:31.804522427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-b8gzw,Uid:e3c9458d-3506-4ae4-801e-b2be3c20a3f6,Namespace:kube-system,Attempt:1,} returns sandbox id \"10d4460142ee1269584176b707b1efd5c3ab80ec8dd575e642fc42f5bdb7384f\"" Jul 7 00:12:31.813197 containerd[1622]: time="2025-07-07T00:12:31.812991517Z" level=info msg="CreateContainer within sandbox \"10d4460142ee1269584176b707b1efd5c3ab80ec8dd575e642fc42f5bdb7384f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:12:31.820173 containerd[1622]: 2025-07-07 00:12:31.722 [INFO][4646] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" Jul 7 00:12:31.820173 containerd[1622]: 2025-07-07 00:12:31.723 [INFO][4646] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" iface="eth0" netns="/var/run/netns/cni-d9995dc9-a5b4-36d7-1e3d-fec122bb22d6" Jul 7 00:12:31.820173 containerd[1622]: 2025-07-07 00:12:31.724 [INFO][4646] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" iface="eth0" netns="/var/run/netns/cni-d9995dc9-a5b4-36d7-1e3d-fec122bb22d6" Jul 7 00:12:31.820173 containerd[1622]: 2025-07-07 00:12:31.725 [INFO][4646] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" iface="eth0" netns="/var/run/netns/cni-d9995dc9-a5b4-36d7-1e3d-fec122bb22d6" Jul 7 00:12:31.820173 containerd[1622]: 2025-07-07 00:12:31.725 [INFO][4646] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" Jul 7 00:12:31.820173 containerd[1622]: 2025-07-07 00:12:31.725 [INFO][4646] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" Jul 7 00:12:31.820173 containerd[1622]: 2025-07-07 00:12:31.780 [INFO][4672] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" HandleID="k8s-pod-network.3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--kube--controllers--65849599bc--hq8xg-eth0" Jul 7 00:12:31.820173 containerd[1622]: 2025-07-07 00:12:31.781 [INFO][4672] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:31.820173 containerd[1622]: 2025-07-07 00:12:31.781 [INFO][4672] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:12:31.820173 containerd[1622]: 2025-07-07 00:12:31.807 [WARNING][4672] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" HandleID="k8s-pod-network.3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--kube--controllers--65849599bc--hq8xg-eth0" Jul 7 00:12:31.820173 containerd[1622]: 2025-07-07 00:12:31.807 [INFO][4672] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" HandleID="k8s-pod-network.3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--kube--controllers--65849599bc--hq8xg-eth0" Jul 7 00:12:31.820173 containerd[1622]: 2025-07-07 00:12:31.812 [INFO][4672] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:12:31.820173 containerd[1622]: 2025-07-07 00:12:31.817 [INFO][4646] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" Jul 7 00:12:31.821200 containerd[1622]: time="2025-07-07T00:12:31.820944258Z" level=info msg="TearDown network for sandbox \"3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0\" successfully" Jul 7 00:12:31.821200 containerd[1622]: time="2025-07-07T00:12:31.820981247Z" level=info msg="StopPodSandbox for \"3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0\" returns successfully" Jul 7 00:12:31.821782 containerd[1622]: time="2025-07-07T00:12:31.821725502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65849599bc-hq8xg,Uid:91dc6538-4119-44ef-989f-134b70575b3d,Namespace:calico-system,Attempt:1,}" Jul 7 00:12:31.825826 containerd[1622]: time="2025-07-07T00:12:31.825709653Z" level=info msg="CreateContainer within sandbox \"f2f72e98ea5db722da834471d0f5ceba9418651a1eeb1e7d8be1e9cc3d897c9c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1b3bdd3bf77dffaeb23fc3227de6d695dd14f3462f651866b4b3f3641772d904\"" Jul 7 00:12:31.826534 containerd[1622]: time="2025-07-07T00:12:31.826492992Z" level=info msg="StartContainer for \"1b3bdd3bf77dffaeb23fc3227de6d695dd14f3462f651866b4b3f3641772d904\"" Jul 7 00:12:31.846942 containerd[1622]: time="2025-07-07T00:12:31.846495395Z" level=info msg="CreateContainer within sandbox \"10d4460142ee1269584176b707b1efd5c3ab80ec8dd575e642fc42f5bdb7384f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d79bfc8ce5498186e1cba37f1f48e4efa54c654bd78da6836d6b1891559ea993\"" Jul 7 00:12:31.851103 containerd[1622]: time="2025-07-07T00:12:31.851057017Z" level=info msg="StartContainer for \"d79bfc8ce5498186e1cba37f1f48e4efa54c654bd78da6836d6b1891559ea993\"" Jul 7 00:12:31.921437 containerd[1622]: time="2025-07-07T00:12:31.921267939Z" level=info msg="StartContainer for \"1b3bdd3bf77dffaeb23fc3227de6d695dd14f3462f651866b4b3f3641772d904\" returns successfully" Jul 7 00:12:31.952046 systemd[1]: run-netns-cni\x2dd9995dc9\x2da5b4\x2d36d7\x2d1e3d\x2dfec122bb22d6.mount: Deactivated successfully. 
Jul 7 00:12:32.028967 containerd[1622]: time="2025-07-07T00:12:32.028717452Z" level=info msg="StartContainer for \"d79bfc8ce5498186e1cba37f1f48e4efa54c654bd78da6836d6b1891559ea993\" returns successfully" Jul 7 00:12:32.068637 systemd-networkd[1252]: calibb91de1cda4: Link UP Jul 7 00:12:32.073904 systemd-networkd[1252]: calibb91de1cda4: Gained carrier Jul 7 00:12:32.124182 containerd[1622]: 2025-07-07 00:12:31.919 [INFO][4732] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:12:32.124182 containerd[1622]: 2025-07-07 00:12:31.947 [INFO][4732] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--d--d476fda7c5-k8s-calico--kube--controllers--65849599bc--hq8xg-eth0 calico-kube-controllers-65849599bc- calico-system 91dc6538-4119-44ef-989f-134b70575b3d 911 0 2025-07-07 00:12:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:65849599bc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-4-d-d476fda7c5 calico-kube-controllers-65849599bc-hq8xg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibb91de1cda4 [] [] }} ContainerID="6e35299e322aa90077c14ce0f6918cc2c4517801e939ab0b3ae4eae784459238" Namespace="calico-system" Pod="calico-kube-controllers-65849599bc-hq8xg" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-calico--kube--controllers--65849599bc--hq8xg-" Jul 7 00:12:32.124182 containerd[1622]: 2025-07-07 00:12:31.948 [INFO][4732] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6e35299e322aa90077c14ce0f6918cc2c4517801e939ab0b3ae4eae784459238" Namespace="calico-system" Pod="calico-kube-controllers-65849599bc-hq8xg" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-calico--kube--controllers--65849599bc--hq8xg-eth0" Jul 7 00:12:32.124182 containerd[1622]: 2025-07-07 00:12:32.002 [INFO][4771] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6e35299e322aa90077c14ce0f6918cc2c4517801e939ab0b3ae4eae784459238" HandleID="k8s-pod-network.6e35299e322aa90077c14ce0f6918cc2c4517801e939ab0b3ae4eae784459238" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--kube--controllers--65849599bc--hq8xg-eth0" Jul 7 00:12:32.124182 containerd[1622]: 2025-07-07 00:12:32.002 [INFO][4771] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6e35299e322aa90077c14ce0f6918cc2c4517801e939ab0b3ae4eae784459238" HandleID="k8s-pod-network.6e35299e322aa90077c14ce0f6918cc2c4517801e939ab0b3ae4eae784459238" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--kube--controllers--65849599bc--hq8xg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad5e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-d-d476fda7c5", "pod":"calico-kube-controllers-65849599bc-hq8xg", "timestamp":"2025-07-07 00:12:32.002672239 +0000 UTC"}, Hostname:"ci-4081-3-4-d-d476fda7c5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:12:32.124182 containerd[1622]: 2025-07-07 00:12:32.003 [INFO][4771] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
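[Editor's note] Every host-side interface above (cali36da9362fc1, cali5f925927565, cali10aef6d942f, and now calibb91de1cda4) is exactly 15 characters: the fixed "cali" prefix plus 11 hex characters hashed from the workload's identity, which keeps the name stable per pod, unique per host, and inside the kernel's IFNAMSIZ limit. A sketch of that scheme; the precise hash input is an assumption taken from Calico's CNI plugin and may differ between releases:

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethNameForWorkload sketches the naming scheme behind names like
// cali5f925927565: "cali" + the first 11 hex chars of a SHA-1 over the
// workload identity (assumed here to be "namespace.pod"), giving a
// 15-character name that fits IFNAMSIZ.
func vethNameForWorkload(namespace, pod string) string {
	sum := sha1.Sum([]byte(fmt.Sprintf("%s.%s", namespace, pod)))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(vethNameForWorkload("kube-system", "coredns-7c65d6cfc9-tz5tm"))
}
```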
Jul 7 00:12:32.124182 containerd[1622]: 2025-07-07 00:12:32.003 [INFO][4771] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:12:32.124182 containerd[1622]: 2025-07-07 00:12:32.003 [INFO][4771] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-d-d476fda7c5' Jul 7 00:12:32.124182 containerd[1622]: 2025-07-07 00:12:32.011 [INFO][4771] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6e35299e322aa90077c14ce0f6918cc2c4517801e939ab0b3ae4eae784459238" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:32.124182 containerd[1622]: 2025-07-07 00:12:32.017 [INFO][4771] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:32.124182 containerd[1622]: 2025-07-07 00:12:32.022 [INFO][4771] ipam/ipam.go 511: Trying affinity for 192.168.18.64/26 host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:32.124182 containerd[1622]: 2025-07-07 00:12:32.024 [INFO][4771] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.64/26 host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:32.124182 containerd[1622]: 2025-07-07 00:12:32.028 [INFO][4771] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:32.124182 containerd[1622]: 2025-07-07 00:12:32.028 [INFO][4771] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.6e35299e322aa90077c14ce0f6918cc2c4517801e939ab0b3ae4eae784459238" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:32.124182 containerd[1622]: 2025-07-07 00:12:32.033 [INFO][4771] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6e35299e322aa90077c14ce0f6918cc2c4517801e939ab0b3ae4eae784459238 Jul 7 00:12:32.124182 containerd[1622]: 2025-07-07 00:12:32.038 [INFO][4771] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.6e35299e322aa90077c14ce0f6918cc2c4517801e939ab0b3ae4eae784459238" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:32.124182 containerd[1622]: 2025-07-07 00:12:32.047 [INFO][4771] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.18.69/26] block=192.168.18.64/26 handle="k8s-pod-network.6e35299e322aa90077c14ce0f6918cc2c4517801e939ab0b3ae4eae784459238" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:32.124182 containerd[1622]: 2025-07-07 00:12:32.047 [INFO][4771] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.69/26] handle="k8s-pod-network.6e35299e322aa90077c14ce0f6918cc2c4517801e939ab0b3ae4eae784459238" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:32.124182 containerd[1622]: 2025-07-07 00:12:32.048 [INFO][4771] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
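[Editor's note] Every assignment in this segment lands in the same block because the node holds the affinity for 192.168.18.64/26 ("Trying affinity for 192.168.18.64/26" above). A /26 carves 2^(32-26) = 64 addresses out of the pod CIDR, so the .64-.127 range covers all the IPs handed out so far (.66, .67, .68, and now .69). A quick check in Go:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// The node-affine block from the log entries above.
	_, block, _ := net.ParseCIDR("192.168.18.64/26")
	ones, bits := block.Mask.Size()
	fmt.Printf("addresses per /%d block: %d\n", ones, 1<<(bits-ones)) // 64

	// All the IPs Calico assigned above fall inside this one block.
	for _, ip := range []string{"192.168.18.66", "192.168.18.67", "192.168.18.68", "192.168.18.69"} {
		fmt.Println(ip, "in block:", block.Contains(net.ParseIP(ip)))
	}
}
```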
Jul 7 00:12:32.124182 containerd[1622]: 2025-07-07 00:12:32.048 [INFO][4771] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.69/26] IPv6=[] ContainerID="6e35299e322aa90077c14ce0f6918cc2c4517801e939ab0b3ae4eae784459238" HandleID="k8s-pod-network.6e35299e322aa90077c14ce0f6918cc2c4517801e939ab0b3ae4eae784459238" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--kube--controllers--65849599bc--hq8xg-eth0" Jul 7 00:12:32.129060 containerd[1622]: 2025-07-07 00:12:32.050 [INFO][4732] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6e35299e322aa90077c14ce0f6918cc2c4517801e939ab0b3ae4eae784459238" Namespace="calico-system" Pod="calico-kube-controllers-65849599bc-hq8xg" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-calico--kube--controllers--65849599bc--hq8xg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-calico--kube--controllers--65849599bc--hq8xg-eth0", GenerateName:"calico-kube-controllers-65849599bc-", Namespace:"calico-system", SelfLink:"", UID:"91dc6538-4119-44ef-989f-134b70575b3d", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 12, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65849599bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"", Pod:"calico-kube-controllers-65849599bc-hq8xg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibb91de1cda4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:32.129060 containerd[1622]: 2025-07-07 00:12:32.051 [INFO][4732] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.69/32] ContainerID="6e35299e322aa90077c14ce0f6918cc2c4517801e939ab0b3ae4eae784459238" Namespace="calico-system" Pod="calico-kube-controllers-65849599bc-hq8xg" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-calico--kube--controllers--65849599bc--hq8xg-eth0" Jul 7 00:12:32.129060 containerd[1622]: 2025-07-07 00:12:32.051 [INFO][4732] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb91de1cda4 ContainerID="6e35299e322aa90077c14ce0f6918cc2c4517801e939ab0b3ae4eae784459238" Namespace="calico-system" Pod="calico-kube-controllers-65849599bc-hq8xg" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-calico--kube--controllers--65849599bc--hq8xg-eth0" Jul 7 00:12:32.129060 containerd[1622]: 2025-07-07 00:12:32.072 [INFO][4732] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6e35299e322aa90077c14ce0f6918cc2c4517801e939ab0b3ae4eae784459238" Namespace="calico-system" Pod="calico-kube-controllers-65849599bc-hq8xg" 
WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-calico--kube--controllers--65849599bc--hq8xg-eth0" Jul 7 00:12:32.129060 containerd[1622]: 2025-07-07 00:12:32.072 [INFO][4732] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6e35299e322aa90077c14ce0f6918cc2c4517801e939ab0b3ae4eae784459238" Namespace="calico-system" Pod="calico-kube-controllers-65849599bc-hq8xg" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-calico--kube--controllers--65849599bc--hq8xg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-calico--kube--controllers--65849599bc--hq8xg-eth0", GenerateName:"calico-kube-controllers-65849599bc-", Namespace:"calico-system", SelfLink:"", UID:"91dc6538-4119-44ef-989f-134b70575b3d", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 12, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65849599bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"6e35299e322aa90077c14ce0f6918cc2c4517801e939ab0b3ae4eae784459238", Pod:"calico-kube-controllers-65849599bc-hq8xg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibb91de1cda4", MAC:"62:e6:bf:e1:32:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:32.129060 containerd[1622]: 2025-07-07 00:12:32.103 [INFO][4732] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6e35299e322aa90077c14ce0f6918cc2c4517801e939ab0b3ae4eae784459238" Namespace="calico-system" Pod="calico-kube-controllers-65849599bc-hq8xg" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-calico--kube--controllers--65849599bc--hq8xg-eth0" Jul 7 00:12:32.179302 kubelet[2934]: I0707 00:12:32.177232 2934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-tz5tm" podStartSLOduration=41.177207983 podStartE2EDuration="41.177207983s" podCreationTimestamp="2025-07-07 00:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:12:32.141816937 +0000 UTC m=+46.597352100" watchObservedRunningTime="2025-07-07 00:12:32.177207983 +0000 UTC m=+46.632743146" Jul 7 00:12:32.189062 containerd[1622]: time="2025-07-07T00:12:32.186816078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:12:32.189062 containerd[1622]: time="2025-07-07T00:12:32.186889365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:12:32.189062 containerd[1622]: time="2025-07-07T00:12:32.186900316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:12:32.189062 containerd[1622]: time="2025-07-07T00:12:32.187007517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:12:32.365583 containerd[1622]: time="2025-07-07T00:12:32.365359667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65849599bc-hq8xg,Uid:91dc6538-4119-44ef-989f-134b70575b3d,Namespace:calico-system,Attempt:1,} returns sandbox id \"6e35299e322aa90077c14ce0f6918cc2c4517801e939ab0b3ae4eae784459238\"" Jul 7 00:12:32.428070 systemd-networkd[1252]: cali36da9362fc1: Gained IPv6LL Jul 7 00:12:32.661271 containerd[1622]: time="2025-07-07T00:12:32.661018251Z" level=info msg="StopPodSandbox for \"7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771\"" Jul 7 00:12:32.760081 kubelet[2934]: I0707 00:12:32.759968 2934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-b8gzw" podStartSLOduration=41.75993237 podStartE2EDuration="41.75993237s" podCreationTimestamp="2025-07-07 00:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:12:32.182582129 +0000 UTC m=+46.638117293" watchObservedRunningTime="2025-07-07 00:12:32.75993237 +0000 UTC m=+47.215467534" Jul 7 00:12:32.800906 containerd[1622]: 2025-07-07 00:12:32.761 [INFO][4878] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" Jul 7 00:12:32.800906 containerd[1622]: 2025-07-07 00:12:32.762 [INFO][4878] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" iface="eth0" netns="/var/run/netns/cni-a0336b92-c9e8-5448-b1d5-843c9bc99ade" Jul 7 00:12:32.800906 containerd[1622]: 2025-07-07 00:12:32.763 [INFO][4878] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" iface="eth0" netns="/var/run/netns/cni-a0336b92-c9e8-5448-b1d5-843c9bc99ade" Jul 7 00:12:32.800906 containerd[1622]: 2025-07-07 00:12:32.763 [INFO][4878] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" iface="eth0" netns="/var/run/netns/cni-a0336b92-c9e8-5448-b1d5-843c9bc99ade" Jul 7 00:12:32.800906 containerd[1622]: 2025-07-07 00:12:32.763 [INFO][4878] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" Jul 7 00:12:32.800906 containerd[1622]: 2025-07-07 00:12:32.763 [INFO][4878] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" Jul 7 00:12:32.800906 containerd[1622]: 2025-07-07 00:12:32.787 [INFO][4886] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" HandleID="k8s-pod-network.7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--p9bbs-eth0" Jul 7 00:12:32.800906 containerd[1622]: 2025-07-07 00:12:32.787 [INFO][4886] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:32.800906 containerd[1622]: 2025-07-07 00:12:32.788 [INFO][4886] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:12:32.800906 containerd[1622]: 2025-07-07 00:12:32.793 [WARNING][4886] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" HandleID="k8s-pod-network.7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--p9bbs-eth0" Jul 7 00:12:32.800906 containerd[1622]: 2025-07-07 00:12:32.793 [INFO][4886] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" HandleID="k8s-pod-network.7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--p9bbs-eth0" Jul 7 00:12:32.800906 containerd[1622]: 2025-07-07 00:12:32.795 [INFO][4886] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:12:32.800906 containerd[1622]: 2025-07-07 00:12:32.797 [INFO][4878] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" Jul 7 00:12:32.802144 containerd[1622]: time="2025-07-07T00:12:32.801929092Z" level=info msg="TearDown network for sandbox \"7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771\" successfully" Jul 7 00:12:32.802144 containerd[1622]: time="2025-07-07T00:12:32.801961853Z" level=info msg="StopPodSandbox for \"7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771\" returns successfully" Jul 7 00:12:32.805118 containerd[1622]: time="2025-07-07T00:12:32.805062557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768db8c477-p9bbs,Uid:e655fc02-7710-4160-9f19-efce05fe6db5,Namespace:calico-apiserver,Attempt:1,}" Jul 7 00:12:32.805342 systemd[1]: run-netns-cni\x2da0336b92\x2dc9e8\x2d5448\x2db1d5\x2d843c9bc99ade.mount: Deactivated successfully. 
Jul 7 00:12:32.939837 systemd-networkd[1252]: cali10aef6d942f: Gained IPv6LL Jul 7 00:12:33.076965 systemd-networkd[1252]: cali2e87166f44a: Link UP Jul 7 00:12:33.077181 systemd-networkd[1252]: cali2e87166f44a: Gained carrier Jul 7 00:12:33.108126 containerd[1622]: 2025-07-07 00:12:32.855 [INFO][4892] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:12:33.108126 containerd[1622]: 2025-07-07 00:12:32.867 [INFO][4892] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--p9bbs-eth0 calico-apiserver-768db8c477- calico-apiserver e655fc02-7710-4160-9f19-efce05fe6db5 934 0 2025-07-07 00:12:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:768db8c477 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-4-d-d476fda7c5 calico-apiserver-768db8c477-p9bbs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2e87166f44a [] [] }} ContainerID="808a1b22b7d9203032c039ee3c6549a424b0a057134bbdc9ec920ee146e6e38b" Namespace="calico-apiserver" Pod="calico-apiserver-768db8c477-p9bbs" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--p9bbs-" Jul 7 00:12:33.108126 containerd[1622]: 2025-07-07 00:12:32.867 [INFO][4892] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="808a1b22b7d9203032c039ee3c6549a424b0a057134bbdc9ec920ee146e6e38b" Namespace="calico-apiserver" Pod="calico-apiserver-768db8c477-p9bbs" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--p9bbs-eth0" Jul 7 00:12:33.108126 containerd[1622]: 2025-07-07 00:12:32.910 [INFO][4904] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="808a1b22b7d9203032c039ee3c6549a424b0a057134bbdc9ec920ee146e6e38b" HandleID="k8s-pod-network.808a1b22b7d9203032c039ee3c6549a424b0a057134bbdc9ec920ee146e6e38b" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--p9bbs-eth0" Jul 7 00:12:33.108126 containerd[1622]: 2025-07-07 00:12:32.910 [INFO][4904] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="808a1b22b7d9203032c039ee3c6549a424b0a057134bbdc9ec920ee146e6e38b" HandleID="k8s-pod-network.808a1b22b7d9203032c039ee3c6549a424b0a057134bbdc9ec920ee146e6e38b" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--p9bbs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e550), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-4-d-d476fda7c5", "pod":"calico-apiserver-768db8c477-p9bbs", "timestamp":"2025-07-07 00:12:32.910321933 +0000 UTC"}, Hostname:"ci-4081-3-4-d-d476fda7c5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:12:33.108126 containerd[1622]: 2025-07-07 00:12:32.910 [INFO][4904] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:33.108126 containerd[1622]: 2025-07-07 00:12:32.910 [INFO][4904] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
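Two log shapes interleave here: containerd's own logfmt lines (time=… level=… msg=…) and the bracketed lines Calico's CNI plugin writes through containerd, e.g. "2025-07-07 00:12:32.910 [INFO][4904] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock." A sketch of pulling the Calico shape apart; the regex is fitted to the lines in this log, not a documented format:

package main

import (
	"fmt"
	"regexp"
)

// Groups: timestamp, level, PID, source file, source line, message.
// The grouping is inferred from the lines above.
var calicoLine = regexp.MustCompile(`^([\d-]+ [\d:.]+) \[(\w+)\]\[(\d+)\] (\S+) (\d+): (.*)$`)

func main() {
	m := calicoLine.FindStringSubmatch(
		"2025-07-07 00:12:32.910 [INFO][4904] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.")
	if m != nil {
		fmt.Printf("ts=%s level=%s pid=%s src=%s:%s msg=%q\n", m[1], m[2], m[3], m[4], m[5], m[6])
	}
}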
Jul 7 00:12:33.108126 containerd[1622]: 2025-07-07 00:12:32.910 [INFO][4904] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-d-d476fda7c5' Jul 7 00:12:33.108126 containerd[1622]: 2025-07-07 00:12:32.918 [INFO][4904] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.808a1b22b7d9203032c039ee3c6549a424b0a057134bbdc9ec920ee146e6e38b" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:33.108126 containerd[1622]: 2025-07-07 00:12:33.008 [INFO][4904] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:33.108126 containerd[1622]: 2025-07-07 00:12:33.022 [INFO][4904] ipam/ipam.go 511: Trying affinity for 192.168.18.64/26 host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:33.108126 containerd[1622]: 2025-07-07 00:12:33.033 [INFO][4904] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.64/26 host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:33.108126 containerd[1622]: 2025-07-07 00:12:33.043 [INFO][4904] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:33.108126 containerd[1622]: 2025-07-07 00:12:33.043 [INFO][4904] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.808a1b22b7d9203032c039ee3c6549a424b0a057134bbdc9ec920ee146e6e38b" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:33.108126 containerd[1622]: 2025-07-07 00:12:33.047 [INFO][4904] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.808a1b22b7d9203032c039ee3c6549a424b0a057134bbdc9ec920ee146e6e38b Jul 7 00:12:33.108126 containerd[1622]: 2025-07-07 00:12:33.057 [INFO][4904] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.808a1b22b7d9203032c039ee3c6549a424b0a057134bbdc9ec920ee146e6e38b" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:33.108126 containerd[1622]: 2025-07-07 00:12:33.067 [INFO][4904] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.18.70/26] block=192.168.18.64/26 handle="k8s-pod-network.808a1b22b7d9203032c039ee3c6549a424b0a057134bbdc9ec920ee146e6e38b" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:33.108126 containerd[1622]: 2025-07-07 00:12:33.067 [INFO][4904] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.70/26] handle="k8s-pod-network.808a1b22b7d9203032c039ee3c6549a424b0a057134bbdc9ec920ee146e6e38b" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:33.108126 containerd[1622]: 2025-07-07 00:12:33.067 [INFO][4904] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
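The assignment that just completed walked affinity lookup → block load → claim and handed out 192.168.18.70 from the host's affine block 192.168.18.64/26. The containment and the block's capacity are quick to confirm (a /26 spans 2^(32−26) = 64 addresses, .64 through .127); a sketch using the standard net/netip package:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.18.64/26")
	addr := netip.MustParseAddr("192.168.18.70")

	fmt.Println(block.Contains(addr))     // true: .70 sits inside the affine block
	fmt.Println(1 << (32 - block.Bits())) // 64: the block runs .64 through .127
}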
Jul 7 00:12:33.108126 containerd[1622]: 2025-07-07 00:12:33.068 [INFO][4904] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.70/26] IPv6=[] ContainerID="808a1b22b7d9203032c039ee3c6549a424b0a057134bbdc9ec920ee146e6e38b" HandleID="k8s-pod-network.808a1b22b7d9203032c039ee3c6549a424b0a057134bbdc9ec920ee146e6e38b" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--p9bbs-eth0" Jul 7 00:12:33.110630 containerd[1622]: 2025-07-07 00:12:33.072 [INFO][4892] cni-plugin/k8s.go 418: Populated endpoint ContainerID="808a1b22b7d9203032c039ee3c6549a424b0a057134bbdc9ec920ee146e6e38b" Namespace="calico-apiserver" Pod="calico-apiserver-768db8c477-p9bbs" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--p9bbs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--p9bbs-eth0", GenerateName:"calico-apiserver-768db8c477-", Namespace:"calico-apiserver", SelfLink:"", UID:"e655fc02-7710-4160-9f19-efce05fe6db5", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 12, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"768db8c477", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"", Pod:"calico-apiserver-768db8c477-p9bbs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2e87166f44a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:33.110630 containerd[1622]: 2025-07-07 00:12:33.073 [INFO][4892] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.70/32] ContainerID="808a1b22b7d9203032c039ee3c6549a424b0a057134bbdc9ec920ee146e6e38b" Namespace="calico-apiserver" Pod="calico-apiserver-768db8c477-p9bbs" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--p9bbs-eth0" Jul 7 00:12:33.110630 containerd[1622]: 2025-07-07 00:12:33.073 [INFO][4892] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e87166f44a ContainerID="808a1b22b7d9203032c039ee3c6549a424b0a057134bbdc9ec920ee146e6e38b" Namespace="calico-apiserver" Pod="calico-apiserver-768db8c477-p9bbs" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--p9bbs-eth0" Jul 7 00:12:33.110630 containerd[1622]: 2025-07-07 00:12:33.078 [INFO][4892] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="808a1b22b7d9203032c039ee3c6549a424b0a057134bbdc9ec920ee146e6e38b" Namespace="calico-apiserver" Pod="calico-apiserver-768db8c477-p9bbs" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--p9bbs-eth0" Jul 7 00:12:33.110630 containerd[1622]: 2025-07-07 00:12:33.082 
[INFO][4892] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="808a1b22b7d9203032c039ee3c6549a424b0a057134bbdc9ec920ee146e6e38b" Namespace="calico-apiserver" Pod="calico-apiserver-768db8c477-p9bbs" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--p9bbs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--p9bbs-eth0", GenerateName:"calico-apiserver-768db8c477-", Namespace:"calico-apiserver", SelfLink:"", UID:"e655fc02-7710-4160-9f19-efce05fe6db5", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 12, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"768db8c477", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"808a1b22b7d9203032c039ee3c6549a424b0a057134bbdc9ec920ee146e6e38b", Pod:"calico-apiserver-768db8c477-p9bbs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2e87166f44a", MAC:"96:67:89:db:1a:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:33.110630 containerd[1622]: 2025-07-07 00:12:33.104 [INFO][4892] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="808a1b22b7d9203032c039ee3c6549a424b0a057134bbdc9ec920ee146e6e38b" Namespace="calico-apiserver" Pod="calico-apiserver-768db8c477-p9bbs" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--p9bbs-eth0" Jul 7 00:12:33.139717 containerd[1622]: time="2025-07-07T00:12:33.139596351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:12:33.139899 containerd[1622]: time="2025-07-07T00:12:33.139738667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:12:33.139899 containerd[1622]: time="2025-07-07T00:12:33.139779964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:12:33.143776 containerd[1622]: time="2025-07-07T00:12:33.141941668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:12:33.198724 systemd-networkd[1252]: cali5f925927565: Gained IPv6LL Jul 7 00:12:33.264836 containerd[1622]: time="2025-07-07T00:12:33.264255229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768db8c477-p9bbs,Uid:e655fc02-7710-4160-9f19-efce05fe6db5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"808a1b22b7d9203032c039ee3c6549a424b0a057134bbdc9ec920ee146e6e38b\"" Jul 7 00:12:33.664165 containerd[1622]: time="2025-07-07T00:12:33.662849651Z" level=info msg="StopPodSandbox for \"2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518\"" Jul 7 00:12:33.664165 containerd[1622]: time="2025-07-07T00:12:33.663188126Z" level=info msg="StopPodSandbox for \"333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901\"" Jul 7 00:12:33.773318 systemd-networkd[1252]: calibb91de1cda4: Gained IPv6LL Jul 7 00:12:33.802977 containerd[1622]: 2025-07-07 00:12:33.729 [INFO][4998] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" Jul 7 00:12:33.802977 containerd[1622]: 2025-07-07 00:12:33.729 [INFO][4998] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" iface="eth0" netns="/var/run/netns/cni-fe3abeb4-0502-1604-bb61-ae66dc8530c4" Jul 7 00:12:33.802977 containerd[1622]: 2025-07-07 00:12:33.736 [INFO][4998] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" iface="eth0" netns="/var/run/netns/cni-fe3abeb4-0502-1604-bb61-ae66dc8530c4" Jul 7 00:12:33.802977 containerd[1622]: 2025-07-07 00:12:33.737 [INFO][4998] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" iface="eth0" netns="/var/run/netns/cni-fe3abeb4-0502-1604-bb61-ae66dc8530c4" Jul 7 00:12:33.802977 containerd[1622]: 2025-07-07 00:12:33.738 [INFO][4998] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" Jul 7 00:12:33.802977 containerd[1622]: 2025-07-07 00:12:33.738 [INFO][4998] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" Jul 7 00:12:33.802977 containerd[1622]: 2025-07-07 00:12:33.781 [INFO][5016] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" HandleID="k8s-pod-network.2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" Workload="ci--4081--3--4--d--d476fda7c5-k8s-csi--node--driver--w2vh4-eth0" Jul 7 00:12:33.802977 containerd[1622]: 2025-07-07 00:12:33.781 [INFO][5016] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:33.802977 containerd[1622]: 2025-07-07 00:12:33.781 [INFO][5016] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:12:33.802977 containerd[1622]: 2025-07-07 00:12:33.793 [WARNING][5016] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" HandleID="k8s-pod-network.2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" Workload="ci--4081--3--4--d--d476fda7c5-k8s-csi--node--driver--w2vh4-eth0" Jul 7 00:12:33.802977 containerd[1622]: 2025-07-07 00:12:33.793 [INFO][5016] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" HandleID="k8s-pod-network.2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" Workload="ci--4081--3--4--d--d476fda7c5-k8s-csi--node--driver--w2vh4-eth0" Jul 7 00:12:33.802977 containerd[1622]: 2025-07-07 00:12:33.795 [INFO][5016] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:12:33.802977 containerd[1622]: 2025-07-07 00:12:33.799 [INFO][4998] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" Jul 7 00:12:33.806685 containerd[1622]: time="2025-07-07T00:12:33.803641107Z" level=info msg="TearDown network for sandbox \"2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518\" successfully" Jul 7 00:12:33.806685 containerd[1622]: time="2025-07-07T00:12:33.803815564Z" level=info msg="StopPodSandbox for \"2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518\" returns successfully" Jul 7 00:12:33.811831 systemd[1]: run-netns-cni\x2dfe3abeb4\x2d0502\x2d1604\x2dbb61\x2dae66dc8530c4.mount: Deactivated successfully. Jul 7 00:12:33.814835 containerd[1622]: time="2025-07-07T00:12:33.814780023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2vh4,Uid:87d7b550-b844-4390-a08c-837789bc924f,Namespace:calico-system,Attempt:1,}" Jul 7 00:12:33.817833 containerd[1622]: 2025-07-07 00:12:33.754 [INFO][5002] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" Jul 7 00:12:33.817833 containerd[1622]: 2025-07-07 00:12:33.754 [INFO][5002] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" iface="eth0" netns="/var/run/netns/cni-4ff16cbf-fbea-c4b0-bb06-c16b6dcc5793" Jul 7 00:12:33.817833 containerd[1622]: 2025-07-07 00:12:33.756 [INFO][5002] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" iface="eth0" netns="/var/run/netns/cni-4ff16cbf-fbea-c4b0-bb06-c16b6dcc5793" Jul 7 00:12:33.817833 containerd[1622]: 2025-07-07 00:12:33.757 [INFO][5002] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" iface="eth0" netns="/var/run/netns/cni-4ff16cbf-fbea-c4b0-bb06-c16b6dcc5793" Jul 7 00:12:33.817833 containerd[1622]: 2025-07-07 00:12:33.757 [INFO][5002] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" Jul 7 00:12:33.817833 containerd[1622]: 2025-07-07 00:12:33.757 [INFO][5002] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" Jul 7 00:12:33.817833 containerd[1622]: 2025-07-07 00:12:33.785 [INFO][5021] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" HandleID="k8s-pod-network.333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--gvl7v-eth0" Jul 7 00:12:33.817833 containerd[1622]: 2025-07-07 00:12:33.785 [INFO][5021] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:33.817833 containerd[1622]: 2025-07-07 00:12:33.795 [INFO][5021] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:12:33.817833 containerd[1622]: 2025-07-07 00:12:33.804 [WARNING][5021] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" HandleID="k8s-pod-network.333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--gvl7v-eth0" Jul 7 00:12:33.817833 containerd[1622]: 2025-07-07 00:12:33.804 [INFO][5021] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" HandleID="k8s-pod-network.333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--gvl7v-eth0" Jul 7 00:12:33.817833 containerd[1622]: 2025-07-07 00:12:33.807 [INFO][5021] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:12:33.817833 containerd[1622]: 2025-07-07 00:12:33.813 [INFO][5002] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" Jul 7 00:12:33.818269 containerd[1622]: time="2025-07-07T00:12:33.818035676Z" level=info msg="TearDown network for sandbox \"333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901\" successfully" Jul 7 00:12:33.818269 containerd[1622]: time="2025-07-07T00:12:33.818061043Z" level=info msg="StopPodSandbox for \"333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901\" returns successfully" Jul 7 00:12:33.819290 containerd[1622]: time="2025-07-07T00:12:33.819033278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768db8c477-gvl7v,Uid:90e1f2ac-ab1d-4006-a494-a932bbfb41c8,Namespace:calico-apiserver,Attempt:1,}" Jul 7 00:12:33.822784 systemd[1]: run-netns-cni\x2d4ff16cbf\x2dfbea\x2dc4b0\x2dbb06\x2dc16b6dcc5793.mount: Deactivated successfully. 
Jul 7 00:12:34.017477 systemd-networkd[1252]: cali322116de16d: Link UP Jul 7 00:12:34.017628 systemd-networkd[1252]: cali322116de16d: Gained carrier Jul 7 00:12:34.024195 kubelet[2934]: I0707 00:12:34.021959 2934 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:12:34.044703 containerd[1622]: 2025-07-07 00:12:33.878 [INFO][5029] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:12:34.044703 containerd[1622]: 2025-07-07 00:12:33.897 [INFO][5029] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--d--d476fda7c5-k8s-csi--node--driver--w2vh4-eth0 csi-node-driver- calico-system 87d7b550-b844-4390-a08c-837789bc924f 953 0 2025-07-07 00:12:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-4-d-d476fda7c5 csi-node-driver-w2vh4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali322116de16d [] [] }} ContainerID="b2d96f438683f360b7cd0eb1cc553685db9629c252406114897dd025f4fef5f5" Namespace="calico-system" Pod="csi-node-driver-w2vh4" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-csi--node--driver--w2vh4-" Jul 7 00:12:34.044703 containerd[1622]: 2025-07-07 00:12:33.897 [INFO][5029] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b2d96f438683f360b7cd0eb1cc553685db9629c252406114897dd025f4fef5f5" Namespace="calico-system" Pod="csi-node-driver-w2vh4" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-csi--node--driver--w2vh4-eth0" Jul 7 00:12:34.044703 containerd[1622]: 2025-07-07 00:12:33.963 [INFO][5053] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2d96f438683f360b7cd0eb1cc553685db9629c252406114897dd025f4fef5f5" HandleID="k8s-pod-network.b2d96f438683f360b7cd0eb1cc553685db9629c252406114897dd025f4fef5f5" Workload="ci--4081--3--4--d--d476fda7c5-k8s-csi--node--driver--w2vh4-eth0" Jul 7 00:12:34.044703 containerd[1622]: 2025-07-07 00:12:33.963 [INFO][5053] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b2d96f438683f360b7cd0eb1cc553685db9629c252406114897dd025f4fef5f5" HandleID="k8s-pod-network.b2d96f438683f360b7cd0eb1cc553685db9629c252406114897dd025f4fef5f5" Workload="ci--4081--3--4--d--d476fda7c5-k8s-csi--node--driver--w2vh4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f170), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-d-d476fda7c5", "pod":"csi-node-driver-w2vh4", "timestamp":"2025-07-07 00:12:33.963610772 +0000 UTC"}, Hostname:"ci-4081-3-4-d-d476fda7c5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:12:34.044703 containerd[1622]: 2025-07-07 00:12:33.964 [INFO][5053] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:34.044703 containerd[1622]: 2025-07-07 00:12:33.964 [INFO][5053] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
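Workload endpoint names such as ci--4081--3--4--d--d476fda7c5-k8s-csi--node--driver--w2vh4-eth0 pack node, orchestrator, pod, and interface into one identifier; every name in this log is consistent with fields joined by "-" and literal dashes doubled to "--". A sketch that splits such a name back into its fields, assuming that doubling convention holds:

package main

import (
	"fmt"
	"strings"
)

// splitWEPName undoes the apparent encoding of workload endpoint names:
// fields are joined with "-", and a literal dash inside a field is
// written as "--". Convention inferred from the names in this log.
func splitWEPName(name string) []string {
	const esc = "\x00" // placeholder no field can contain
	name = strings.ReplaceAll(name, "--", esc)
	parts := strings.Split(name, "-")
	for i := range parts {
		parts[i] = strings.ReplaceAll(parts[i], esc, "-")
	}
	return parts
}

func main() {
	fmt.Println(splitWEPName("ci--4081--3--4--d--d476fda7c5-k8s-csi--node--driver--w2vh4-eth0"))
	// [ci-4081-3-4-d-d476fda7c5 k8s csi-node-driver-w2vh4 eth0]
}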
Jul 7 00:12:34.044703 containerd[1622]: 2025-07-07 00:12:33.964 [INFO][5053] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-d-d476fda7c5' Jul 7 00:12:34.044703 containerd[1622]: 2025-07-07 00:12:33.974 [INFO][5053] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b2d96f438683f360b7cd0eb1cc553685db9629c252406114897dd025f4fef5f5" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:34.044703 containerd[1622]: 2025-07-07 00:12:33.987 [INFO][5053] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:34.044703 containerd[1622]: 2025-07-07 00:12:33.994 [INFO][5053] ipam/ipam.go 511: Trying affinity for 192.168.18.64/26 host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:34.044703 containerd[1622]: 2025-07-07 00:12:33.996 [INFO][5053] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.64/26 host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:34.044703 containerd[1622]: 2025-07-07 00:12:33.999 [INFO][5053] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:34.044703 containerd[1622]: 2025-07-07 00:12:33.999 [INFO][5053] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.b2d96f438683f360b7cd0eb1cc553685db9629c252406114897dd025f4fef5f5" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:34.044703 containerd[1622]: 2025-07-07 00:12:34.001 [INFO][5053] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b2d96f438683f360b7cd0eb1cc553685db9629c252406114897dd025f4fef5f5 Jul 7 00:12:34.044703 containerd[1622]: 2025-07-07 00:12:34.004 [INFO][5053] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.b2d96f438683f360b7cd0eb1cc553685db9629c252406114897dd025f4fef5f5" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:34.044703 containerd[1622]: 2025-07-07 00:12:34.012 [INFO][5053] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.18.71/26] block=192.168.18.64/26 handle="k8s-pod-network.b2d96f438683f360b7cd0eb1cc553685db9629c252406114897dd025f4fef5f5" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:34.044703 containerd[1622]: 2025-07-07 00:12:34.012 [INFO][5053] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.71/26] handle="k8s-pod-network.b2d96f438683f360b7cd0eb1cc553685db9629c252406114897dd025f4fef5f5" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:34.044703 containerd[1622]: 2025-07-07 00:12:34.012 [INFO][5053] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
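Every assignment in this log is bracketed by the same three lines, "About to acquire host-wide IPAM lock." / "Acquired host-wide IPAM lock." / "Released host-wide IPAM lock.", and the claimed addresses come out of the affine /26 in sequence (.70 for the apiserver pod earlier, .71 here, .72 for the next pod). A toy sketch of that discipline; hostIPAM is illustrative only, and the real allocator persists its blocks in the datastore rather than in memory:

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// hostIPAM is a toy model of the logged behaviour: one host-wide lock
// serializes assignments, and addresses are taken in order from the
// host's affine block.
type hostIPAM struct {
	mu    sync.Mutex // the "host-wide IPAM lock" in the log
	block netip.Prefix
	used  map[netip.Addr]bool
}

func (h *hostIPAM) autoAssign() (netip.Addr, bool) {
	h.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer h.mu.Unlock() // "Released host-wide IPAM lock."
	for a := h.block.Addr(); h.block.Contains(a); a = a.Next() {
		if !h.used[a] {
			h.used[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false // block exhausted: real Calico would claim another block
}

func main() {
	h := &hostIPAM{block: netip.MustParsePrefix("192.168.18.64/26"), used: map[netip.Addr]bool{}}
	for i := 0; i < 3; i++ {
		a, _ := h.autoAssign()
		fmt.Println(a) // 192.168.18.64, .65, .66 in order
	}
}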
Jul 7 00:12:34.044703 containerd[1622]: 2025-07-07 00:12:34.012 [INFO][5053] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.71/26] IPv6=[] ContainerID="b2d96f438683f360b7cd0eb1cc553685db9629c252406114897dd025f4fef5f5" HandleID="k8s-pod-network.b2d96f438683f360b7cd0eb1cc553685db9629c252406114897dd025f4fef5f5" Workload="ci--4081--3--4--d--d476fda7c5-k8s-csi--node--driver--w2vh4-eth0" Jul 7 00:12:34.046734 containerd[1622]: 2025-07-07 00:12:34.015 [INFO][5029] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b2d96f438683f360b7cd0eb1cc553685db9629c252406114897dd025f4fef5f5" Namespace="calico-system" Pod="csi-node-driver-w2vh4" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-csi--node--driver--w2vh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-csi--node--driver--w2vh4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"87d7b550-b844-4390-a08c-837789bc924f", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 12, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"", Pod:"csi-node-driver-w2vh4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali322116de16d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:34.046734 containerd[1622]: 2025-07-07 00:12:34.015 [INFO][5029] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.71/32] ContainerID="b2d96f438683f360b7cd0eb1cc553685db9629c252406114897dd025f4fef5f5" Namespace="calico-system" Pod="csi-node-driver-w2vh4" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-csi--node--driver--w2vh4-eth0" Jul 7 00:12:34.046734 containerd[1622]: 2025-07-07 00:12:34.016 [INFO][5029] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali322116de16d ContainerID="b2d96f438683f360b7cd0eb1cc553685db9629c252406114897dd025f4fef5f5" Namespace="calico-system" Pod="csi-node-driver-w2vh4" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-csi--node--driver--w2vh4-eth0" Jul 7 00:12:34.046734 containerd[1622]: 2025-07-07 00:12:34.017 [INFO][5029] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b2d96f438683f360b7cd0eb1cc553685db9629c252406114897dd025f4fef5f5" Namespace="calico-system" Pod="csi-node-driver-w2vh4" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-csi--node--driver--w2vh4-eth0" Jul 7 00:12:34.046734 containerd[1622]: 2025-07-07 00:12:34.018 [INFO][5029] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b2d96f438683f360b7cd0eb1cc553685db9629c252406114897dd025f4fef5f5" Namespace="calico-system" Pod="csi-node-driver-w2vh4" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-csi--node--driver--w2vh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-csi--node--driver--w2vh4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"87d7b550-b844-4390-a08c-837789bc924f", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 12, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"b2d96f438683f360b7cd0eb1cc553685db9629c252406114897dd025f4fef5f5", Pod:"csi-node-driver-w2vh4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali322116de16d", MAC:"e2:ed:f1:3e:95:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:34.046734 containerd[1622]: 2025-07-07 00:12:34.037 [INFO][5029] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b2d96f438683f360b7cd0eb1cc553685db9629c252406114897dd025f4fef5f5" Namespace="calico-system" Pod="csi-node-driver-w2vh4" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-csi--node--driver--w2vh4-eth0" Jul 7 00:12:34.095705 containerd[1622]: time="2025-07-07T00:12:34.093486962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:12:34.095705 containerd[1622]: time="2025-07-07T00:12:34.094237760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:12:34.095705 containerd[1622]: time="2025-07-07T00:12:34.094266644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:12:34.095705 containerd[1622]: time="2025-07-07T00:12:34.094413740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:12:34.184729 systemd-networkd[1252]: calib3e4f1af1e5: Link UP Jul 7 00:12:34.184931 systemd-networkd[1252]: calib3e4f1af1e5: Gained carrier Jul 7 00:12:34.187384 containerd[1622]: time="2025-07-07T00:12:34.187349710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2vh4,Uid:87d7b550-b844-4390-a08c-837789bc924f,Namespace:calico-system,Attempt:1,} returns sandbox id \"b2d96f438683f360b7cd0eb1cc553685db9629c252406114897dd025f4fef5f5\"" Jul 7 00:12:34.203923 containerd[1622]: 2025-07-07 00:12:33.892 [INFO][5043] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:12:34.203923 containerd[1622]: 2025-07-07 00:12:33.909 [INFO][5043] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--gvl7v-eth0 calico-apiserver-768db8c477- calico-apiserver 90e1f2ac-ab1d-4006-a494-a932bbfb41c8 954 0 2025-07-07 00:12:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:768db8c477 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-4-d-d476fda7c5 calico-apiserver-768db8c477-gvl7v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib3e4f1af1e5 [] [] }} ContainerID="090bd48b8396f3fd1b18b50d8b5a80abea5c4bb4b6aaa4d5c2522d0bfb8b79eb" Namespace="calico-apiserver" Pod="calico-apiserver-768db8c477-gvl7v" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--gvl7v-" Jul 7 00:12:34.203923 containerd[1622]: 2025-07-07 00:12:33.909 [INFO][5043] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="090bd48b8396f3fd1b18b50d8b5a80abea5c4bb4b6aaa4d5c2522d0bfb8b79eb" Namespace="calico-apiserver" Pod="calico-apiserver-768db8c477-gvl7v" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--gvl7v-eth0" Jul 7 00:12:34.203923 containerd[1622]: 2025-07-07 00:12:33.982 [INFO][5059] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="090bd48b8396f3fd1b18b50d8b5a80abea5c4bb4b6aaa4d5c2522d0bfb8b79eb" HandleID="k8s-pod-network.090bd48b8396f3fd1b18b50d8b5a80abea5c4bb4b6aaa4d5c2522d0bfb8b79eb" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--gvl7v-eth0" Jul 7 00:12:34.203923 containerd[1622]: 2025-07-07 00:12:33.983 [INFO][5059] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="090bd48b8396f3fd1b18b50d8b5a80abea5c4bb4b6aaa4d5c2522d0bfb8b79eb" HandleID="k8s-pod-network.090bd48b8396f3fd1b18b50d8b5a80abea5c4bb4b6aaa4d5c2522d0bfb8b79eb" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--gvl7v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032d6f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-4-d-d476fda7c5", "pod":"calico-apiserver-768db8c477-gvl7v", "timestamp":"2025-07-07 00:12:33.982870612 +0000 UTC"}, Hostname:"ci-4081-3-4-d-d476fda7c5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:12:34.203923 containerd[1622]: 2025-07-07 00:12:33.983 [INFO][5059] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
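The MACs assigned so far (96:67:89:db:1a:b2 for the apiserver endpoint, e2:ed:f1:3e:95:4a for the CSI endpoint) all have the locally-administered bit set and the multicast bit clear in the first octet, which is what randomly generated local unicast MACs look like. A sketch of producing one under that assumption, which is not necessarily Calico's exact method:

package main

import (
	"crypto/rand"
	"fmt"
)

// randomLocalMAC returns a random, locally administered, unicast MAC:
// bit 0x02 of the first octet set (local), bit 0x01 cleared (unicast).
// Consistent with the first octets seen in this log: 0x96, 0xe2, 0x6a.
func randomLocalMAC() (string, error) {
	b := make([]byte, 6)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	b[0] = (b[0] | 0x02) &^ 0x01
	return fmt.Sprintf("%02x:%02x:%02x:%02x:%02x:%02x", b[0], b[1], b[2], b[3], b[4], b[5]), nil
}

func main() {
	mac, _ := randomLocalMAC()
	fmt.Println(mac)
}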
Jul 7 00:12:34.203923 containerd[1622]: 2025-07-07 00:12:34.012 [INFO][5059] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:12:34.203923 containerd[1622]: 2025-07-07 00:12:34.012 [INFO][5059] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-d-d476fda7c5' Jul 7 00:12:34.203923 containerd[1622]: 2025-07-07 00:12:34.075 [INFO][5059] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.090bd48b8396f3fd1b18b50d8b5a80abea5c4bb4b6aaa4d5c2522d0bfb8b79eb" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:34.203923 containerd[1622]: 2025-07-07 00:12:34.088 [INFO][5059] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:34.203923 containerd[1622]: 2025-07-07 00:12:34.123 [INFO][5059] ipam/ipam.go 511: Trying affinity for 192.168.18.64/26 host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:34.203923 containerd[1622]: 2025-07-07 00:12:34.126 [INFO][5059] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.64/26 host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:34.203923 containerd[1622]: 2025-07-07 00:12:34.133 [INFO][5059] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:34.203923 containerd[1622]: 2025-07-07 00:12:34.133 [INFO][5059] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.090bd48b8396f3fd1b18b50d8b5a80abea5c4bb4b6aaa4d5c2522d0bfb8b79eb" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:34.203923 containerd[1622]: 2025-07-07 00:12:34.141 [INFO][5059] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.090bd48b8396f3fd1b18b50d8b5a80abea5c4bb4b6aaa4d5c2522d0bfb8b79eb Jul 7 00:12:34.203923 containerd[1622]: 2025-07-07 00:12:34.154 [INFO][5059] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.090bd48b8396f3fd1b18b50d8b5a80abea5c4bb4b6aaa4d5c2522d0bfb8b79eb" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:34.203923 containerd[1622]: 2025-07-07 00:12:34.168 [INFO][5059] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.18.72/26] block=192.168.18.64/26 handle="k8s-pod-network.090bd48b8396f3fd1b18b50d8b5a80abea5c4bb4b6aaa4d5c2522d0bfb8b79eb" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:34.203923 containerd[1622]: 2025-07-07 00:12:34.168 [INFO][5059] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.72/26] handle="k8s-pod-network.090bd48b8396f3fd1b18b50d8b5a80abea5c4bb4b6aaa4d5c2522d0bfb8b79eb" host="ci-4081-3-4-d-d476fda7c5" Jul 7 00:12:34.203923 containerd[1622]: 2025-07-07 00:12:34.168 [INFO][5059] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
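"Writing block in order to claim IPs" (here and in both earlier assignments) is the step that actually commits a claim: the allocator mutates the block it read and writes it back conditioned on the revision it saw, so two hosts racing on one block cannot both win. A sketch of that optimistic-concurrency shape; block, rev, and claim are illustrative names, not Calico's API:

package main

import (
	"errors"
	"fmt"
)

// block stands in for a datastore object with a revision counter.
type block struct {
	rev  int
	free []string
}

var errConflict = errors.New("revision conflict, re-read block and retry")

// claim succeeds only if the block is still at the revision the caller
// read, modelling a conditional (compare-and-swap) datastore write.
func claim(store *block, seenRev int) (string, error) {
	if store.rev != seenRev {
		return "", errConflict // someone else wrote the block first
	}
	ip := store.free[0]
	store.free = store.free[1:]
	store.rev++ // the conditional write landed
	return ip, nil
}

func main() {
	b := &block{rev: 1, free: []string{"192.168.18.72", "192.168.18.73"}}
	ip, err := claim(b, 1)
	fmt.Println(ip, err) // 192.168.18.72 <nil>
	_, err = claim(b, 1) // stale revision: caller must re-read and retry
	fmt.Println(err)
}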
Jul 7 00:12:34.203923 containerd[1622]: 2025-07-07 00:12:34.168 [INFO][5059] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.72/26] IPv6=[] ContainerID="090bd48b8396f3fd1b18b50d8b5a80abea5c4bb4b6aaa4d5c2522d0bfb8b79eb" HandleID="k8s-pod-network.090bd48b8396f3fd1b18b50d8b5a80abea5c4bb4b6aaa4d5c2522d0bfb8b79eb" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--gvl7v-eth0" Jul 7 00:12:34.204593 containerd[1622]: 2025-07-07 00:12:34.175 [INFO][5043] cni-plugin/k8s.go 418: Populated endpoint ContainerID="090bd48b8396f3fd1b18b50d8b5a80abea5c4bb4b6aaa4d5c2522d0bfb8b79eb" Namespace="calico-apiserver" Pod="calico-apiserver-768db8c477-gvl7v" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--gvl7v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--gvl7v-eth0", GenerateName:"calico-apiserver-768db8c477-", Namespace:"calico-apiserver", SelfLink:"", UID:"90e1f2ac-ab1d-4006-a494-a932bbfb41c8", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 12, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"768db8c477", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"", Pod:"calico-apiserver-768db8c477-gvl7v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib3e4f1af1e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:34.204593 containerd[1622]: 2025-07-07 00:12:34.175 [INFO][5043] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.72/32] ContainerID="090bd48b8396f3fd1b18b50d8b5a80abea5c4bb4b6aaa4d5c2522d0bfb8b79eb" Namespace="calico-apiserver" Pod="calico-apiserver-768db8c477-gvl7v" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--gvl7v-eth0" Jul 7 00:12:34.204593 containerd[1622]: 2025-07-07 00:12:34.175 [INFO][5043] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib3e4f1af1e5 ContainerID="090bd48b8396f3fd1b18b50d8b5a80abea5c4bb4b6aaa4d5c2522d0bfb8b79eb" Namespace="calico-apiserver" Pod="calico-apiserver-768db8c477-gvl7v" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--gvl7v-eth0" Jul 7 00:12:34.204593 containerd[1622]: 2025-07-07 00:12:34.185 [INFO][5043] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="090bd48b8396f3fd1b18b50d8b5a80abea5c4bb4b6aaa4d5c2522d0bfb8b79eb" Namespace="calico-apiserver" Pod="calico-apiserver-768db8c477-gvl7v" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--gvl7v-eth0" Jul 7 00:12:34.204593 containerd[1622]: 2025-07-07 00:12:34.186 
[INFO][5043] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="090bd48b8396f3fd1b18b50d8b5a80abea5c4bb4b6aaa4d5c2522d0bfb8b79eb" Namespace="calico-apiserver" Pod="calico-apiserver-768db8c477-gvl7v" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--gvl7v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--gvl7v-eth0", GenerateName:"calico-apiserver-768db8c477-", Namespace:"calico-apiserver", SelfLink:"", UID:"90e1f2ac-ab1d-4006-a494-a932bbfb41c8", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 12, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"768db8c477", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"090bd48b8396f3fd1b18b50d8b5a80abea5c4bb4b6aaa4d5c2522d0bfb8b79eb", Pod:"calico-apiserver-768db8c477-gvl7v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib3e4f1af1e5", MAC:"6a:ef:02:05:f2:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:34.204593 containerd[1622]: 2025-07-07 00:12:34.198 [INFO][5043] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="090bd48b8396f3fd1b18b50d8b5a80abea5c4bb4b6aaa4d5c2522d0bfb8b79eb" Namespace="calico-apiserver" Pod="calico-apiserver-768db8c477-gvl7v" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--gvl7v-eth0" Jul 7 00:12:34.238573 containerd[1622]: time="2025-07-07T00:12:34.233382369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:12:34.238573 containerd[1622]: time="2025-07-07T00:12:34.233469202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:12:34.238573 containerd[1622]: time="2025-07-07T00:12:34.233482787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:12:34.238573 containerd[1622]: time="2025-07-07T00:12:34.233613072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:12:34.325050 containerd[1622]: time="2025-07-07T00:12:34.324995718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768db8c477-gvl7v,Uid:90e1f2ac-ab1d-4006-a494-a932bbfb41c8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"090bd48b8396f3fd1b18b50d8b5a80abea5c4bb4b6aaa4d5c2522d0bfb8b79eb\"" Jul 7 00:12:34.677605 kubelet[2934]: I0707 00:12:34.675751 2934 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:12:34.860159 systemd-networkd[1252]: cali2e87166f44a: Gained IPv6LL Jul 7 00:12:34.873240 systemd-resolved[1515]: Under memory pressure, flushing caches. Jul 7 00:12:34.873289 systemd-resolved[1515]: Flushed all caches. Jul 7 00:12:34.897436 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 00:12:35.319333 kernel: bpftool[5270]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 7 00:12:35.325623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1607119117.mount: Deactivated successfully. Jul 7 00:12:35.437803 systemd-networkd[1252]: cali322116de16d: Gained IPv6LL Jul 7 00:12:35.691029 systemd-networkd[1252]: vxlan.calico: Link UP Jul 7 00:12:35.691038 systemd-networkd[1252]: vxlan.calico: Gained carrier Jul 7 00:12:35.884602 systemd-networkd[1252]: calib3e4f1af1e5: Gained IPv6LL Jul 7 00:12:36.078688 containerd[1622]: time="2025-07-07T00:12:36.077241318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:36.085932 containerd[1622]: time="2025-07-07T00:12:36.085443867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 7 00:12:36.088735 containerd[1622]: time="2025-07-07T00:12:36.088412533Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:36.095243 containerd[1622]: time="2025-07-07T00:12:36.094933649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:36.096649 containerd[1622]: time="2025-07-07T00:12:36.096522139Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.453493292s" Jul 7 00:12:36.096649 containerd[1622]: time="2025-07-07T00:12:36.096569277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 7 00:12:36.098464 containerd[1622]: time="2025-07-07T00:12:36.098422051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 00:12:36.107712 containerd[1622]: time="2025-07-07T00:12:36.107588457Z" level=info msg="CreateContainer within sandbox \"73cb1e2be3fe96547982a57898f4c421e06953cbbf297c0724ac25bb0466a6b5\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 00:12:36.142212 containerd[1622]: time="2025-07-07T00:12:36.140348651Z" level=info msg="CreateContainer
within sandbox \"73cb1e2be3fe96547982a57898f4c421e06953cbbf297c0724ac25bb0466a6b5\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"f2e9ab9c3b3e54780f9a772e7cc30eccffcf094843d610a0e94a6c9537d1b1af\"" Jul 7 00:12:36.142964 containerd[1622]: time="2025-07-07T00:12:36.142697905Z" level=info msg="StartContainer for \"f2e9ab9c3b3e54780f9a772e7cc30eccffcf094843d610a0e94a6c9537d1b1af\"" Jul 7 00:12:36.222583 systemd[1]: run-containerd-runc-k8s.io-f2e9ab9c3b3e54780f9a772e7cc30eccffcf094843d610a0e94a6c9537d1b1af-runc.PDV4MP.mount: Deactivated successfully. Jul 7 00:12:36.290153 containerd[1622]: time="2025-07-07T00:12:36.289567737Z" level=info msg="StartContainer for \"f2e9ab9c3b3e54780f9a772e7cc30eccffcf094843d610a0e94a6c9537d1b1af\" returns successfully" Jul 7 00:12:37.101015 systemd-networkd[1252]: vxlan.calico: Gained IPv6LL Jul 7 00:12:37.223574 kubelet[2934]: I0707 00:12:37.221338 2934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-98rfd" podStartSLOduration=28.765966526 podStartE2EDuration="33.221303096s" podCreationTimestamp="2025-07-07 00:12:04 +0000 UTC" firstStartedPulling="2025-07-07 00:12:31.642447515 +0000 UTC m=+46.097982678" lastFinishedPulling="2025-07-07 00:12:36.097784074 +0000 UTC m=+50.553319248" observedRunningTime="2025-07-07 00:12:37.214038595 +0000 UTC m=+51.669573789" watchObservedRunningTime="2025-07-07 00:12:37.221303096 +0000 UTC m=+51.676838289" Jul 7 00:12:37.313196 systemd[1]: run-containerd-runc-k8s.io-f2e9ab9c3b3e54780f9a772e7cc30eccffcf094843d610a0e94a6c9537d1b1af-runc.xxlJ5t.mount: Deactivated successfully. Jul 7 00:12:38.237918 systemd[1]: run-containerd-runc-k8s.io-f2e9ab9c3b3e54780f9a772e7cc30eccffcf094843d610a0e94a6c9537d1b1af-runc.2cbElY.mount: Deactivated successfully. Jul 7 00:12:39.183874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2382802207.mount: Deactivated successfully. 
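The pod_startup_latency_tracker numbers above are internally consistent: for goldmane-58fd7646b9-98rfd, podStartE2EDuration is observedRunningTime − podCreationTimestamp, and podStartSLOduration matches E2E minus the image-pull window to within tens of nanoseconds (the subtraction rule is inferred from the numbers, not from documentation; the whisker pod reported further down satisfies the same identity: 15.329836659s − 10.308323923s ≈ 5.021512726). A worked check in Go:

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// time.Parse accepts fractional seconds in the input even when the
	// layout omits them.
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the goldmane-58fd7646b9-98rfd tracker line.
	created := mustParse("2025-07-07 00:12:04 +0000 UTC")
	running := mustParse("2025-07-07 00:12:37.221303096 +0000 UTC")
	pullStart := mustParse("2025-07-07 00:12:31.642447515 +0000 UTC")
	pullEnd := mustParse("2025-07-07 00:12:36.097784074 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)
	fmt.Println(e2e) // 33.221303096s, the logged podStartE2EDuration
	fmt.Println(slo) // ≈28.765966537s vs logged 28.765966526 (clock rounding)
}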
Jul 7 00:12:39.226792 containerd[1622]: time="2025-07-07T00:12:39.226713558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:39.228324 containerd[1622]: time="2025-07-07T00:12:39.228256500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 7 00:12:39.230344 containerd[1622]: time="2025-07-07T00:12:39.230325109Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:39.233770 containerd[1622]: time="2025-07-07T00:12:39.233710486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:39.246225 containerd[1622]: time="2025-07-07T00:12:39.245711107Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 3.14724834s" Jul 7 00:12:39.246225 containerd[1622]: time="2025-07-07T00:12:39.245772673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 7 00:12:39.247110 containerd[1622]: time="2025-07-07T00:12:39.247094912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 00:12:39.249366 containerd[1622]: time="2025-07-07T00:12:39.249332767Z" level=info msg="CreateContainer within sandbox \"934215ba328e85f3aa0d9ed37ff5e0aed4da83ec496ce3f2b1b3ab243f1bd48e\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 00:12:39.272778 containerd[1622]: time="2025-07-07T00:12:39.272710980Z" level=info msg="CreateContainer within sandbox \"934215ba328e85f3aa0d9ed37ff5e0aed4da83ec496ce3f2b1b3ab243f1bd48e\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"38e8f2c01f49304dd677e801c3e9b86706b7d32ee6bac0064ddb7e32297a17ec\"" Jul 7 00:12:39.274213 containerd[1622]: time="2025-07-07T00:12:39.273475012Z" level=info msg="StartContainer for \"38e8f2c01f49304dd677e801c3e9b86706b7d32ee6bac0064ddb7e32297a17ec\"" Jul 7 00:12:39.409645 containerd[1622]: time="2025-07-07T00:12:39.409588897Z" level=info msg="StartContainer for \"38e8f2c01f49304dd677e801c3e9b86706b7d32ee6bac0064ddb7e32297a17ec\" returns successfully" Jul 7 00:12:42.910906 containerd[1622]: time="2025-07-07T00:12:42.910792339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:42.912512 containerd[1622]: time="2025-07-07T00:12:42.912432444Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 7 00:12:42.915034 containerd[1622]: time="2025-07-07T00:12:42.914955876Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:42.917959 
containerd[1622]: time="2025-07-07T00:12:42.917883966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:42.919047 containerd[1622]: time="2025-07-07T00:12:42.918823728Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.671625612s" Jul 7 00:12:42.919047 containerd[1622]: time="2025-07-07T00:12:42.918883961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 7 00:12:42.921721 containerd[1622]: time="2025-07-07T00:12:42.920696629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 00:12:42.964147 containerd[1622]: time="2025-07-07T00:12:42.963970404Z" level=info msg="CreateContainer within sandbox \"6e35299e322aa90077c14ce0f6918cc2c4517801e939ab0b3ae4eae784459238\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 00:12:43.079802 containerd[1622]: time="2025-07-07T00:12:43.079716832Z" level=info msg="CreateContainer within sandbox \"6e35299e322aa90077c14ce0f6918cc2c4517801e939ab0b3ae4eae784459238\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"cca5085dfb3946169085d71f0f0e38f211661aa063c27770de230281ee6e3c31\"" Jul 7 00:12:43.080479 containerd[1622]: time="2025-07-07T00:12:43.080423096Z" level=info msg="StartContainer for \"cca5085dfb3946169085d71f0f0e38f211661aa063c27770de230281ee6e3c31\"" Jul 7 00:12:43.232553 containerd[1622]: time="2025-07-07T00:12:43.232396716Z" level=info msg="StartContainer for \"cca5085dfb3946169085d71f0f0e38f211661aa063c27770de230281ee6e3c31\" returns successfully" Jul 7 00:12:43.330849 kubelet[2934]: I0707 00:12:43.329732 2934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-65849599bc-hq8xg" podStartSLOduration=28.776552276 podStartE2EDuration="39.329711074s" podCreationTimestamp="2025-07-07 00:12:04 +0000 UTC" firstStartedPulling="2025-07-07 00:12:32.367257526 +0000 UTC m=+46.822792689" lastFinishedPulling="2025-07-07 00:12:42.920416303 +0000 UTC m=+57.375951487" observedRunningTime="2025-07-07 00:12:43.327087455 +0000 UTC m=+57.782622638" watchObservedRunningTime="2025-07-07 00:12:43.329711074 +0000 UTC m=+57.785246237" Jul 7 00:12:43.330849 kubelet[2934]: I0707 00:12:43.329842 2934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7f5f6d488b-6gv5w" podStartSLOduration=5.021512726 podStartE2EDuration="15.329836659s" podCreationTimestamp="2025-07-07 00:12:28 +0000 UTC" firstStartedPulling="2025-07-07 00:12:28.938643159 +0000 UTC m=+43.394178322" lastFinishedPulling="2025-07-07 00:12:39.246967082 +0000 UTC m=+53.702502255" observedRunningTime="2025-07-07 00:12:40.29788162 +0000 UTC m=+54.753416793" watchObservedRunningTime="2025-07-07 00:12:43.329836659 +0000 UTC m=+57.785371823" Jul 7 00:12:43.955788 systemd[1]: run-containerd-runc-k8s.io-cca5085dfb3946169085d71f0f0e38f211661aa063c27770de230281ee6e3c31-runc.DaJjw1.mount: 
Deactivated successfully. Jul 7 00:12:45.962180 containerd[1622]: time="2025-07-07T00:12:45.962126494Z" level=info msg="StopPodSandbox for \"4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a\"" Jul 7 00:12:46.604517 containerd[1622]: 2025-07-07 00:12:46.309 [WARNING][5577] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-whisker--546c4689d--8cdxp-eth0" Jul 7 00:12:46.604517 containerd[1622]: 2025-07-07 00:12:46.313 [INFO][5577] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" Jul 7 00:12:46.604517 containerd[1622]: 2025-07-07 00:12:46.313 [INFO][5577] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" iface="eth0" netns="" Jul 7 00:12:46.604517 containerd[1622]: 2025-07-07 00:12:46.313 [INFO][5577] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" Jul 7 00:12:46.604517 containerd[1622]: 2025-07-07 00:12:46.313 [INFO][5577] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" Jul 7 00:12:46.604517 containerd[1622]: 2025-07-07 00:12:46.561 [INFO][5592] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" HandleID="k8s-pod-network.4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" Workload="ci--4081--3--4--d--d476fda7c5-k8s-whisker--546c4689d--8cdxp-eth0" Jul 7 00:12:46.604517 containerd[1622]: 2025-07-07 00:12:46.563 [INFO][5592] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:46.604517 containerd[1622]: 2025-07-07 00:12:46.565 [INFO][5592] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:12:46.604517 containerd[1622]: 2025-07-07 00:12:46.589 [WARNING][5592] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" HandleID="k8s-pod-network.4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" Workload="ci--4081--3--4--d--d476fda7c5-k8s-whisker--546c4689d--8cdxp-eth0" Jul 7 00:12:46.604517 containerd[1622]: 2025-07-07 00:12:46.589 [INFO][5592] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" HandleID="k8s-pod-network.4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" Workload="ci--4081--3--4--d--d476fda7c5-k8s-whisker--546c4689d--8cdxp-eth0" Jul 7 00:12:46.604517 containerd[1622]: 2025-07-07 00:12:46.595 [INFO][5592] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:12:46.604517 containerd[1622]: 2025-07-07 00:12:46.598 [INFO][5577] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" Jul 7 00:12:46.604517 containerd[1622]: time="2025-07-07T00:12:46.604461858Z" level=info msg="TearDown network for sandbox \"4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a\" successfully" Jul 7 00:12:46.604517 containerd[1622]: time="2025-07-07T00:12:46.604488558Z" level=info msg="StopPodSandbox for \"4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a\" returns successfully" Jul 7 00:12:46.734918 containerd[1622]: time="2025-07-07T00:12:46.734600709Z" level=info msg="RemovePodSandbox for \"4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a\"" Jul 7 00:12:46.736691 containerd[1622]: time="2025-07-07T00:12:46.736652526Z" level=info msg="Forcibly stopping sandbox \"4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a\"" Jul 7 00:12:46.823622 containerd[1622]: 2025-07-07 00:12:46.787 [WARNING][5606] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" WorkloadEndpoint="ci--4081--3--4--d--d476fda7c5-k8s-whisker--546c4689d--8cdxp-eth0" Jul 7 00:12:46.823622 containerd[1622]: 2025-07-07 00:12:46.787 [INFO][5606] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" Jul 7 00:12:46.823622 containerd[1622]: 2025-07-07 00:12:46.787 [INFO][5606] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" iface="eth0" netns="" Jul 7 00:12:46.823622 containerd[1622]: 2025-07-07 00:12:46.787 [INFO][5606] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" Jul 7 00:12:46.823622 containerd[1622]: 2025-07-07 00:12:46.787 [INFO][5606] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" Jul 7 00:12:46.823622 containerd[1622]: 2025-07-07 00:12:46.810 [INFO][5614] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" HandleID="k8s-pod-network.4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" Workload="ci--4081--3--4--d--d476fda7c5-k8s-whisker--546c4689d--8cdxp-eth0" Jul 7 00:12:46.823622 containerd[1622]: 2025-07-07 00:12:46.811 [INFO][5614] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:46.823622 containerd[1622]: 2025-07-07 00:12:46.811 [INFO][5614] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:12:46.823622 containerd[1622]: 2025-07-07 00:12:46.816 [WARNING][5614] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" HandleID="k8s-pod-network.4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" Workload="ci--4081--3--4--d--d476fda7c5-k8s-whisker--546c4689d--8cdxp-eth0" Jul 7 00:12:46.823622 containerd[1622]: 2025-07-07 00:12:46.816 [INFO][5614] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" HandleID="k8s-pod-network.4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" Workload="ci--4081--3--4--d--d476fda7c5-k8s-whisker--546c4689d--8cdxp-eth0" Jul 7 00:12:46.823622 containerd[1622]: 2025-07-07 00:12:46.817 [INFO][5614] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:12:46.823622 containerd[1622]: 2025-07-07 00:12:46.821 [INFO][5606] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a" Jul 7 00:12:46.823622 containerd[1622]: time="2025-07-07T00:12:46.823611238Z" level=info msg="TearDown network for sandbox \"4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a\" successfully" Jul 7 00:12:46.836246 containerd[1622]: time="2025-07-07T00:12:46.836187397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:46.841326 containerd[1622]: time="2025-07-07T00:12:46.841256872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 7 00:12:46.842583 containerd[1622]: time="2025-07-07T00:12:46.842254322Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:46.858450 containerd[1622]: time="2025-07-07T00:12:46.855381545Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 7 00:12:46.864600 containerd[1622]: time="2025-07-07T00:12:46.864557108Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.943816507s" Jul 7 00:12:46.865491 containerd[1622]: time="2025-07-07T00:12:46.864753256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 00:12:46.881848 containerd[1622]: time="2025-07-07T00:12:46.856709364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:46.882622 containerd[1622]: time="2025-07-07T00:12:46.882575100Z" level=info msg="RemovePodSandbox \"4e682a50b7d4e2a76da3015af7517f1d6085d17845ea246c8a0a88096305308a\" returns successfully" Jul 7 00:12:46.965730 containerd[1622]: time="2025-07-07T00:12:46.965570471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 00:12:47.000710 containerd[1622]: time="2025-07-07T00:12:47.000304886Z" level=info msg="StopPodSandbox for \"04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf\"" Jul 7 00:12:47.029082 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 00:12:47.021274 systemd-resolved[1515]: Under memory pressure, flushing caches. Jul 7 00:12:47.021319 systemd-resolved[1515]: Flushed all caches. Jul 7 00:12:47.063701 containerd[1622]: time="2025-07-07T00:12:47.063517295Z" level=info msg="CreateContainer within sandbox \"808a1b22b7d9203032c039ee3c6549a424b0a057134bbdc9ec920ee146e6e38b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 00:12:47.118439 containerd[1622]: 2025-07-07 00:12:47.049 [WARNING][5633] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-goldmane--58fd7646b9--98rfd-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"d6e879d5-3fea-4271-b519-f5824551a918", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 12, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"73cb1e2be3fe96547982a57898f4c421e06953cbbf297c0724ac25bb0466a6b5", Pod:"goldmane-58fd7646b9-98rfd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.18.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali36da9362fc1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:47.118439 containerd[1622]: 2025-07-07 00:12:47.050 [INFO][5633] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" Jul 7 00:12:47.118439 containerd[1622]: 2025-07-07 00:12:47.050 [INFO][5633] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" iface="eth0" netns="" Jul 7 00:12:47.118439 containerd[1622]: 2025-07-07 00:12:47.051 [INFO][5633] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" Jul 7 00:12:47.118439 containerd[1622]: 2025-07-07 00:12:47.051 [INFO][5633] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" Jul 7 00:12:47.118439 containerd[1622]: 2025-07-07 00:12:47.085 [INFO][5641] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" HandleID="k8s-pod-network.04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" Workload="ci--4081--3--4--d--d476fda7c5-k8s-goldmane--58fd7646b9--98rfd-eth0" Jul 7 00:12:47.118439 containerd[1622]: 2025-07-07 00:12:47.085 [INFO][5641] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:47.118439 containerd[1622]: 2025-07-07 00:12:47.085 [INFO][5641] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:12:47.118439 containerd[1622]: 2025-07-07 00:12:47.095 [WARNING][5641] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" HandleID="k8s-pod-network.04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" Workload="ci--4081--3--4--d--d476fda7c5-k8s-goldmane--58fd7646b9--98rfd-eth0" Jul 7 00:12:47.118439 containerd[1622]: 2025-07-07 00:12:47.095 [INFO][5641] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" HandleID="k8s-pod-network.04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" Workload="ci--4081--3--4--d--d476fda7c5-k8s-goldmane--58fd7646b9--98rfd-eth0" Jul 7 00:12:47.118439 containerd[1622]: 2025-07-07 00:12:47.097 [INFO][5641] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:12:47.118439 containerd[1622]: 2025-07-07 00:12:47.104 [INFO][5633] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" Jul 7 00:12:47.118439 containerd[1622]: time="2025-07-07T00:12:47.117817704Z" level=info msg="TearDown network for sandbox \"04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf\" successfully" Jul 7 00:12:47.118439 containerd[1622]: time="2025-07-07T00:12:47.117846398Z" level=info msg="StopPodSandbox for \"04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf\" returns successfully" Jul 7 00:12:47.124715 containerd[1622]: time="2025-07-07T00:12:47.119325922Z" level=info msg="RemovePodSandbox for \"04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf\"" Jul 7 00:12:47.124715 containerd[1622]: time="2025-07-07T00:12:47.119357591Z" level=info msg="Forcibly stopping sandbox \"04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf\"" Jul 7 00:12:47.126402 containerd[1622]: time="2025-07-07T00:12:47.126231699Z" level=info msg="CreateContainer within sandbox \"808a1b22b7d9203032c039ee3c6549a424b0a057134bbdc9ec920ee146e6e38b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cc14daf216e094f33b517fd336ac8542624deeedaf61b22c7df7b6753e7727ae\"" Jul 7 00:12:47.135121 containerd[1622]: time="2025-07-07T00:12:47.135081472Z" level=info msg="StartContainer for \"cc14daf216e094f33b517fd336ac8542624deeedaf61b22c7df7b6753e7727ae\"" Jul 7 00:12:47.212737 containerd[1622]: 2025-07-07 00:12:47.167 [WARNING][5655] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-goldmane--58fd7646b9--98rfd-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"d6e879d5-3fea-4271-b519-f5824551a918", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 12, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"73cb1e2be3fe96547982a57898f4c421e06953cbbf297c0724ac25bb0466a6b5", Pod:"goldmane-58fd7646b9-98rfd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.18.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali36da9362fc1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:47.212737 containerd[1622]: 2025-07-07 00:12:47.167 [INFO][5655] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" Jul 7 00:12:47.212737 containerd[1622]: 2025-07-07 00:12:47.167 [INFO][5655] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" iface="eth0" netns="" Jul 7 00:12:47.212737 containerd[1622]: 2025-07-07 00:12:47.167 [INFO][5655] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" Jul 7 00:12:47.212737 containerd[1622]: 2025-07-07 00:12:47.167 [INFO][5655] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" Jul 7 00:12:47.212737 containerd[1622]: 2025-07-07 00:12:47.198 [INFO][5667] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" HandleID="k8s-pod-network.04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" Workload="ci--4081--3--4--d--d476fda7c5-k8s-goldmane--58fd7646b9--98rfd-eth0" Jul 7 00:12:47.212737 containerd[1622]: 2025-07-07 00:12:47.199 [INFO][5667] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:47.212737 containerd[1622]: 2025-07-07 00:12:47.199 [INFO][5667] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:12:47.212737 containerd[1622]: 2025-07-07 00:12:47.205 [WARNING][5667] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" HandleID="k8s-pod-network.04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" Workload="ci--4081--3--4--d--d476fda7c5-k8s-goldmane--58fd7646b9--98rfd-eth0" Jul 7 00:12:47.212737 containerd[1622]: 2025-07-07 00:12:47.205 [INFO][5667] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" HandleID="k8s-pod-network.04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" Workload="ci--4081--3--4--d--d476fda7c5-k8s-goldmane--58fd7646b9--98rfd-eth0" Jul 7 00:12:47.212737 containerd[1622]: 2025-07-07 00:12:47.207 [INFO][5667] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:12:47.212737 containerd[1622]: 2025-07-07 00:12:47.209 [INFO][5655] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf" Jul 7 00:12:47.212737 containerd[1622]: time="2025-07-07T00:12:47.212787145Z" level=info msg="TearDown network for sandbox \"04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf\" successfully" Jul 7 00:12:47.227139 containerd[1622]: time="2025-07-07T00:12:47.226997559Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:12:47.229219 containerd[1622]: time="2025-07-07T00:12:47.228867654Z" level=info msg="RemovePodSandbox \"04f781e1775787fa33bcc9a3c065f7051e431f756b8700c9f8cd828c912446bf\" returns successfully" Jul 7 00:12:47.229477 containerd[1622]: time="2025-07-07T00:12:47.229449265Z" level=info msg="StopPodSandbox for \"9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df\"" Jul 7 00:12:47.334058 containerd[1622]: time="2025-07-07T00:12:47.333332209Z" level=info msg="StartContainer for \"cc14daf216e094f33b517fd336ac8542624deeedaf61b22c7df7b6753e7727ae\" returns successfully" Jul 7 00:12:47.367850 containerd[1622]: 2025-07-07 00:12:47.287 [WARNING][5691] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--b8gzw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"e3c9458d-3506-4ae4-801e-b2be3c20a3f6", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 11, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"10d4460142ee1269584176b707b1efd5c3ab80ec8dd575e642fc42f5bdb7384f", Pod:"coredns-7c65d6cfc9-b8gzw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10aef6d942f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:47.367850 containerd[1622]: 2025-07-07 00:12:47.288 [INFO][5691] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" Jul 7 00:12:47.367850 containerd[1622]: 2025-07-07 00:12:47.288 [INFO][5691] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" iface="eth0" netns="" Jul 7 00:12:47.367850 containerd[1622]: 2025-07-07 00:12:47.288 [INFO][5691] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" Jul 7 00:12:47.367850 containerd[1622]: 2025-07-07 00:12:47.288 [INFO][5691] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" Jul 7 00:12:47.367850 containerd[1622]: 2025-07-07 00:12:47.348 [INFO][5708] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" HandleID="k8s-pod-network.9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" Workload="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--b8gzw-eth0" Jul 7 00:12:47.367850 containerd[1622]: 2025-07-07 00:12:47.350 [INFO][5708] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:47.367850 containerd[1622]: 2025-07-07 00:12:47.350 [INFO][5708] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:12:47.367850 containerd[1622]: 2025-07-07 00:12:47.359 [WARNING][5708] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" HandleID="k8s-pod-network.9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" Workload="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--b8gzw-eth0" Jul 7 00:12:47.367850 containerd[1622]: 2025-07-07 00:12:47.359 [INFO][5708] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" HandleID="k8s-pod-network.9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" Workload="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--b8gzw-eth0" Jul 7 00:12:47.367850 containerd[1622]: 2025-07-07 00:12:47.362 [INFO][5708] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:12:47.367850 containerd[1622]: 2025-07-07 00:12:47.364 [INFO][5691] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" Jul 7 00:12:47.367850 containerd[1622]: time="2025-07-07T00:12:47.366815958Z" level=info msg="TearDown network for sandbox \"9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df\" successfully" Jul 7 00:12:47.367850 containerd[1622]: time="2025-07-07T00:12:47.366841125Z" level=info msg="StopPodSandbox for \"9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df\" returns successfully" Jul 7 00:12:47.424319 containerd[1622]: time="2025-07-07T00:12:47.423011580Z" level=info msg="RemovePodSandbox for \"9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df\"" Jul 7 00:12:47.424319 containerd[1622]: time="2025-07-07T00:12:47.423044903Z" level=info msg="Forcibly stopping sandbox \"9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df\"" Jul 7 00:12:47.554216 containerd[1622]: 2025-07-07 00:12:47.470 [WARNING][5736] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--b8gzw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"e3c9458d-3506-4ae4-801e-b2be3c20a3f6", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 11, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"10d4460142ee1269584176b707b1efd5c3ab80ec8dd575e642fc42f5bdb7384f", Pod:"coredns-7c65d6cfc9-b8gzw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10aef6d942f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:47.554216 containerd[1622]: 2025-07-07 00:12:47.472 [INFO][5736] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" Jul 7 00:12:47.554216 containerd[1622]: 2025-07-07 00:12:47.472 [INFO][5736] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" iface="eth0" netns="" Jul 7 00:12:47.554216 containerd[1622]: 2025-07-07 00:12:47.472 [INFO][5736] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" Jul 7 00:12:47.554216 containerd[1622]: 2025-07-07 00:12:47.472 [INFO][5736] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" Jul 7 00:12:47.554216 containerd[1622]: 2025-07-07 00:12:47.528 [INFO][5744] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" HandleID="k8s-pod-network.9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" Workload="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--b8gzw-eth0" Jul 7 00:12:47.554216 containerd[1622]: 2025-07-07 00:12:47.529 [INFO][5744] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:47.554216 containerd[1622]: 2025-07-07 00:12:47.530 [INFO][5744] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:12:47.554216 containerd[1622]: 2025-07-07 00:12:47.539 [WARNING][5744] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" HandleID="k8s-pod-network.9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" Workload="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--b8gzw-eth0" Jul 7 00:12:47.554216 containerd[1622]: 2025-07-07 00:12:47.539 [INFO][5744] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" HandleID="k8s-pod-network.9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" Workload="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--b8gzw-eth0" Jul 7 00:12:47.554216 containerd[1622]: 2025-07-07 00:12:47.540 [INFO][5744] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:12:47.554216 containerd[1622]: 2025-07-07 00:12:47.549 [INFO][5736] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df" Jul 7 00:12:47.554216 containerd[1622]: time="2025-07-07T00:12:47.553072435Z" level=info msg="TearDown network for sandbox \"9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df\" successfully" Jul 7 00:12:47.567037 containerd[1622]: time="2025-07-07T00:12:47.566309895Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:12:47.567037 containerd[1622]: time="2025-07-07T00:12:47.566385777Z" level=info msg="RemovePodSandbox \"9e8e42cfc0c0d20e4d1dcba475eac9946dcd855985e570c96763b4d567af80df\" returns successfully" Jul 7 00:12:47.590871 containerd[1622]: time="2025-07-07T00:12:47.590831762Z" level=info msg="StopPodSandbox for \"3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0\"" Jul 7 00:12:47.676733 kubelet[2934]: I0707 00:12:47.668737 2934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-768db8c477-p9bbs" podStartSLOduration=33.040405957 podStartE2EDuration="46.657472749s" podCreationTimestamp="2025-07-07 00:12:01 +0000 UTC" firstStartedPulling="2025-07-07 00:12:33.266241663 +0000 UTC m=+47.721776826" lastFinishedPulling="2025-07-07 00:12:46.883308445 +0000 UTC m=+61.338843618" observedRunningTime="2025-07-07 00:12:47.632333484 +0000 UTC m=+62.087868647" watchObservedRunningTime="2025-07-07 00:12:47.657472749 +0000 UTC m=+62.113007912" Jul 7 00:12:47.753556 containerd[1622]: 2025-07-07 00:12:47.682 [WARNING][5758] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-calico--kube--controllers--65849599bc--hq8xg-eth0", GenerateName:"calico-kube-controllers-65849599bc-", Namespace:"calico-system", SelfLink:"", UID:"91dc6538-4119-44ef-989f-134b70575b3d", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 12, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65849599bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"6e35299e322aa90077c14ce0f6918cc2c4517801e939ab0b3ae4eae784459238", Pod:"calico-kube-controllers-65849599bc-hq8xg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibb91de1cda4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:47.753556 containerd[1622]: 2025-07-07 00:12:47.687 [INFO][5758] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" Jul 7 00:12:47.753556 containerd[1622]: 2025-07-07 00:12:47.687 [INFO][5758] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" iface="eth0" netns="" Jul 7 00:12:47.753556 containerd[1622]: 2025-07-07 00:12:47.688 [INFO][5758] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" Jul 7 00:12:47.753556 containerd[1622]: 2025-07-07 00:12:47.689 [INFO][5758] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" Jul 7 00:12:47.753556 containerd[1622]: 2025-07-07 00:12:47.737 [INFO][5766] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" HandleID="k8s-pod-network.3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--kube--controllers--65849599bc--hq8xg-eth0" Jul 7 00:12:47.753556 containerd[1622]: 2025-07-07 00:12:47.737 [INFO][5766] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:47.753556 containerd[1622]: 2025-07-07 00:12:47.738 [INFO][5766] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:12:47.753556 containerd[1622]: 2025-07-07 00:12:47.744 [WARNING][5766] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" HandleID="k8s-pod-network.3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--kube--controllers--65849599bc--hq8xg-eth0" Jul 7 00:12:47.753556 containerd[1622]: 2025-07-07 00:12:47.744 [INFO][5766] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" HandleID="k8s-pod-network.3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--kube--controllers--65849599bc--hq8xg-eth0" Jul 7 00:12:47.753556 containerd[1622]: 2025-07-07 00:12:47.747 [INFO][5766] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:12:47.753556 containerd[1622]: 2025-07-07 00:12:47.751 [INFO][5758] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" Jul 7 00:12:47.753556 containerd[1622]: time="2025-07-07T00:12:47.753395698Z" level=info msg="TearDown network for sandbox \"3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0\" successfully" Jul 7 00:12:47.753556 containerd[1622]: time="2025-07-07T00:12:47.753434400Z" level=info msg="StopPodSandbox for \"3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0\" returns successfully" Jul 7 00:12:47.755782 containerd[1622]: time="2025-07-07T00:12:47.754389751Z" level=info msg="RemovePodSandbox for \"3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0\"" Jul 7 00:12:47.755782 containerd[1622]: time="2025-07-07T00:12:47.754411812Z" level=info msg="Forcibly stopping sandbox \"3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0\"" Jul 7 00:12:47.830411 containerd[1622]: 2025-07-07 00:12:47.791 [WARNING][5781] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-calico--kube--controllers--65849599bc--hq8xg-eth0", GenerateName:"calico-kube-controllers-65849599bc-", Namespace:"calico-system", SelfLink:"", UID:"91dc6538-4119-44ef-989f-134b70575b3d", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 12, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65849599bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"6e35299e322aa90077c14ce0f6918cc2c4517801e939ab0b3ae4eae784459238", Pod:"calico-kube-controllers-65849599bc-hq8xg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibb91de1cda4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:47.830411 containerd[1622]: 2025-07-07 00:12:47.791 [INFO][5781] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" Jul 7 00:12:47.830411 containerd[1622]: 2025-07-07 00:12:47.791 [INFO][5781] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" iface="eth0" netns="" Jul 7 00:12:47.830411 containerd[1622]: 2025-07-07 00:12:47.791 [INFO][5781] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" Jul 7 00:12:47.830411 containerd[1622]: 2025-07-07 00:12:47.791 [INFO][5781] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" Jul 7 00:12:47.830411 containerd[1622]: 2025-07-07 00:12:47.819 [INFO][5788] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" HandleID="k8s-pod-network.3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--kube--controllers--65849599bc--hq8xg-eth0" Jul 7 00:12:47.830411 containerd[1622]: 2025-07-07 00:12:47.819 [INFO][5788] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:47.830411 containerd[1622]: 2025-07-07 00:12:47.819 [INFO][5788] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:12:47.830411 containerd[1622]: 2025-07-07 00:12:47.825 [WARNING][5788] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" HandleID="k8s-pod-network.3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--kube--controllers--65849599bc--hq8xg-eth0" Jul 7 00:12:47.830411 containerd[1622]: 2025-07-07 00:12:47.825 [INFO][5788] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" HandleID="k8s-pod-network.3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--kube--controllers--65849599bc--hq8xg-eth0" Jul 7 00:12:47.830411 containerd[1622]: 2025-07-07 00:12:47.826 [INFO][5788] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:12:47.830411 containerd[1622]: 2025-07-07 00:12:47.828 [INFO][5781] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0" Jul 7 00:12:47.832632 containerd[1622]: time="2025-07-07T00:12:47.830628604Z" level=info msg="TearDown network for sandbox \"3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0\" successfully" Jul 7 00:12:47.836393 containerd[1622]: time="2025-07-07T00:12:47.836359459Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:12:47.836531 containerd[1622]: time="2025-07-07T00:12:47.836516303Z" level=info msg="RemovePodSandbox \"3461e12ff7aa23285dcf3fb1ce54b5c45e6a5ffe9b21551f3214deac8ba439e0\" returns successfully" Jul 7 00:12:47.837219 containerd[1622]: time="2025-07-07T00:12:47.837208050Z" level=info msg="StopPodSandbox for \"2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518\"" Jul 7 00:12:47.939848 containerd[1622]: 2025-07-07 00:12:47.880 [WARNING][5804] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-csi--node--driver--w2vh4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"87d7b550-b844-4390-a08c-837789bc924f", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 12, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"b2d96f438683f360b7cd0eb1cc553685db9629c252406114897dd025f4fef5f5", Pod:"csi-node-driver-w2vh4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali322116de16d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:47.939848 containerd[1622]: 2025-07-07 00:12:47.881 [INFO][5804] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" Jul 7 00:12:47.939848 containerd[1622]: 2025-07-07 00:12:47.881 [INFO][5804] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" iface="eth0" netns="" Jul 7 00:12:47.939848 containerd[1622]: 2025-07-07 00:12:47.881 [INFO][5804] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" Jul 7 00:12:47.939848 containerd[1622]: 2025-07-07 00:12:47.881 [INFO][5804] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" Jul 7 00:12:47.939848 containerd[1622]: 2025-07-07 00:12:47.923 [INFO][5812] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" HandleID="k8s-pod-network.2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" Workload="ci--4081--3--4--d--d476fda7c5-k8s-csi--node--driver--w2vh4-eth0" Jul 7 00:12:47.939848 containerd[1622]: 2025-07-07 00:12:47.924 [INFO][5812] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:47.939848 containerd[1622]: 2025-07-07 00:12:47.924 [INFO][5812] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:12:47.939848 containerd[1622]: 2025-07-07 00:12:47.931 [WARNING][5812] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" HandleID="k8s-pod-network.2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" Workload="ci--4081--3--4--d--d476fda7c5-k8s-csi--node--driver--w2vh4-eth0" Jul 7 00:12:47.939848 containerd[1622]: 2025-07-07 00:12:47.931 [INFO][5812] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" HandleID="k8s-pod-network.2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" Workload="ci--4081--3--4--d--d476fda7c5-k8s-csi--node--driver--w2vh4-eth0" Jul 7 00:12:47.939848 containerd[1622]: 2025-07-07 00:12:47.933 [INFO][5812] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:12:47.939848 containerd[1622]: 2025-07-07 00:12:47.937 [INFO][5804] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" Jul 7 00:12:47.942495 containerd[1622]: time="2025-07-07T00:12:47.940082331Z" level=info msg="TearDown network for sandbox \"2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518\" successfully" Jul 7 00:12:47.942495 containerd[1622]: time="2025-07-07T00:12:47.940120073Z" level=info msg="StopPodSandbox for \"2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518\" returns successfully" Jul 7 00:12:47.942495 containerd[1622]: time="2025-07-07T00:12:47.941525278Z" level=info msg="RemovePodSandbox for \"2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518\"" Jul 7 00:12:47.942495 containerd[1622]: time="2025-07-07T00:12:47.941587785Z" level=info msg="Forcibly stopping sandbox \"2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518\"" Jul 7 00:12:48.050134 containerd[1622]: 2025-07-07 00:12:47.986 [WARNING][5826] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-csi--node--driver--w2vh4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"87d7b550-b844-4390-a08c-837789bc924f", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 12, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"b2d96f438683f360b7cd0eb1cc553685db9629c252406114897dd025f4fef5f5", Pod:"csi-node-driver-w2vh4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali322116de16d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:48.050134 containerd[1622]: 2025-07-07 00:12:47.987 [INFO][5826] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" Jul 7 00:12:48.050134 containerd[1622]: 2025-07-07 00:12:47.987 [INFO][5826] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" iface="eth0" netns="" Jul 7 00:12:48.050134 containerd[1622]: 2025-07-07 00:12:47.987 [INFO][5826] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" Jul 7 00:12:48.050134 containerd[1622]: 2025-07-07 00:12:47.987 [INFO][5826] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" Jul 7 00:12:48.050134 containerd[1622]: 2025-07-07 00:12:48.019 [INFO][5833] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" HandleID="k8s-pod-network.2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" Workload="ci--4081--3--4--d--d476fda7c5-k8s-csi--node--driver--w2vh4-eth0" Jul 7 00:12:48.050134 containerd[1622]: 2025-07-07 00:12:48.019 [INFO][5833] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:48.050134 containerd[1622]: 2025-07-07 00:12:48.020 [INFO][5833] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:12:48.050134 containerd[1622]: 2025-07-07 00:12:48.027 [WARNING][5833] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" HandleID="k8s-pod-network.2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" Workload="ci--4081--3--4--d--d476fda7c5-k8s-csi--node--driver--w2vh4-eth0" Jul 7 00:12:48.050134 containerd[1622]: 2025-07-07 00:12:48.027 [INFO][5833] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" HandleID="k8s-pod-network.2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" Workload="ci--4081--3--4--d--d476fda7c5-k8s-csi--node--driver--w2vh4-eth0" Jul 7 00:12:48.050134 containerd[1622]: 2025-07-07 00:12:48.031 [INFO][5833] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:12:48.050134 containerd[1622]: 2025-07-07 00:12:48.044 [INFO][5826] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518" Jul 7 00:12:48.050134 containerd[1622]: time="2025-07-07T00:12:48.050063659Z" level=info msg="TearDown network for sandbox \"2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518\" successfully" Jul 7 00:12:48.074021 containerd[1622]: time="2025-07-07T00:12:48.073971565Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:12:48.074328 containerd[1622]: time="2025-07-07T00:12:48.074045474Z" level=info msg="RemovePodSandbox \"2b4d2e3ecdbcd8799bbd8f9781e3ea21cb46323990184095c6cd0336ab3db518\" returns successfully" Jul 7 00:12:48.088177 containerd[1622]: time="2025-07-07T00:12:48.086987619Z" level=info msg="StopPodSandbox for \"333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901\"" Jul 7 00:12:48.197645 containerd[1622]: 2025-07-07 00:12:48.143 [WARNING][5847] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--gvl7v-eth0", GenerateName:"calico-apiserver-768db8c477-", Namespace:"calico-apiserver", SelfLink:"", UID:"90e1f2ac-ab1d-4006-a494-a932bbfb41c8", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 12, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"768db8c477", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"090bd48b8396f3fd1b18b50d8b5a80abea5c4bb4b6aaa4d5c2522d0bfb8b79eb", Pod:"calico-apiserver-768db8c477-gvl7v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib3e4f1af1e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:48.197645 containerd[1622]: 2025-07-07 00:12:48.145 [INFO][5847] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" Jul 7 00:12:48.197645 containerd[1622]: 2025-07-07 00:12:48.145 [INFO][5847] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" iface="eth0" netns="" Jul 7 00:12:48.197645 containerd[1622]: 2025-07-07 00:12:48.145 [INFO][5847] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" Jul 7 00:12:48.197645 containerd[1622]: 2025-07-07 00:12:48.145 [INFO][5847] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" Jul 7 00:12:48.197645 containerd[1622]: 2025-07-07 00:12:48.183 [INFO][5854] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" HandleID="k8s-pod-network.333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--gvl7v-eth0" Jul 7 00:12:48.197645 containerd[1622]: 2025-07-07 00:12:48.183 [INFO][5854] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:48.197645 containerd[1622]: 2025-07-07 00:12:48.183 [INFO][5854] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:12:48.197645 containerd[1622]: 2025-07-07 00:12:48.188 [WARNING][5854] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" HandleID="k8s-pod-network.333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--gvl7v-eth0" Jul 7 00:12:48.197645 containerd[1622]: 2025-07-07 00:12:48.189 [INFO][5854] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" HandleID="k8s-pod-network.333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--gvl7v-eth0" Jul 7 00:12:48.197645 containerd[1622]: 2025-07-07 00:12:48.190 [INFO][5854] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:12:48.197645 containerd[1622]: 2025-07-07 00:12:48.195 [INFO][5847] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" Jul 7 00:12:48.199697 containerd[1622]: time="2025-07-07T00:12:48.198275967Z" level=info msg="TearDown network for sandbox \"333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901\" successfully" Jul 7 00:12:48.199697 containerd[1622]: time="2025-07-07T00:12:48.198319649Z" level=info msg="StopPodSandbox for \"333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901\" returns successfully" Jul 7 00:12:48.199697 containerd[1622]: time="2025-07-07T00:12:48.198950121Z" level=info msg="RemovePodSandbox for \"333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901\"" Jul 7 00:12:48.199697 containerd[1622]: time="2025-07-07T00:12:48.198972824Z" level=info msg="Forcibly stopping sandbox \"333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901\"" Jul 7 00:12:48.289304 containerd[1622]: 2025-07-07 00:12:48.246 [WARNING][5868] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--gvl7v-eth0", GenerateName:"calico-apiserver-768db8c477-", Namespace:"calico-apiserver", SelfLink:"", UID:"90e1f2ac-ab1d-4006-a494-a932bbfb41c8", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 12, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"768db8c477", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"090bd48b8396f3fd1b18b50d8b5a80abea5c4bb4b6aaa4d5c2522d0bfb8b79eb", Pod:"calico-apiserver-768db8c477-gvl7v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib3e4f1af1e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:48.289304 containerd[1622]: 2025-07-07 00:12:48.247 [INFO][5868] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" Jul 7 00:12:48.289304 containerd[1622]: 2025-07-07 00:12:48.247 [INFO][5868] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" iface="eth0" netns="" Jul 7 00:12:48.289304 containerd[1622]: 2025-07-07 00:12:48.247 [INFO][5868] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" Jul 7 00:12:48.289304 containerd[1622]: 2025-07-07 00:12:48.247 [INFO][5868] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" Jul 7 00:12:48.289304 containerd[1622]: 2025-07-07 00:12:48.276 [INFO][5875] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" HandleID="k8s-pod-network.333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--gvl7v-eth0" Jul 7 00:12:48.289304 containerd[1622]: 2025-07-07 00:12:48.276 [INFO][5875] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:48.289304 containerd[1622]: 2025-07-07 00:12:48.276 [INFO][5875] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:12:48.289304 containerd[1622]: 2025-07-07 00:12:48.283 [WARNING][5875] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" HandleID="k8s-pod-network.333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--gvl7v-eth0" Jul 7 00:12:48.289304 containerd[1622]: 2025-07-07 00:12:48.283 [INFO][5875] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" HandleID="k8s-pod-network.333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--gvl7v-eth0" Jul 7 00:12:48.289304 containerd[1622]: 2025-07-07 00:12:48.284 [INFO][5875] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:12:48.289304 containerd[1622]: 2025-07-07 00:12:48.286 [INFO][5868] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901" Jul 7 00:12:48.290357 containerd[1622]: time="2025-07-07T00:12:48.289341788Z" level=info msg="TearDown network for sandbox \"333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901\" successfully" Jul 7 00:12:48.298198 containerd[1622]: time="2025-07-07T00:12:48.298155424Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:12:48.299398 containerd[1622]: time="2025-07-07T00:12:48.299001290Z" level=info msg="RemovePodSandbox \"333a698d11ba35e24bd8d9ed4230cba825e6e975e4386d022396d5084e947901\" returns successfully" Jul 7 00:12:48.301825 containerd[1622]: time="2025-07-07T00:12:48.300571854Z" level=info msg="StopPodSandbox for \"7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771\"" Jul 7 00:12:48.377140 containerd[1622]: 2025-07-07 00:12:48.341 [WARNING][5890] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--p9bbs-eth0", GenerateName:"calico-apiserver-768db8c477-", Namespace:"calico-apiserver", SelfLink:"", UID:"e655fc02-7710-4160-9f19-efce05fe6db5", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 12, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"768db8c477", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"808a1b22b7d9203032c039ee3c6549a424b0a057134bbdc9ec920ee146e6e38b", Pod:"calico-apiserver-768db8c477-p9bbs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2e87166f44a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:48.377140 containerd[1622]: 2025-07-07 00:12:48.341 [INFO][5890] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" Jul 7 00:12:48.377140 containerd[1622]: 2025-07-07 00:12:48.341 [INFO][5890] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" iface="eth0" netns="" Jul 7 00:12:48.377140 containerd[1622]: 2025-07-07 00:12:48.341 [INFO][5890] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" Jul 7 00:12:48.377140 containerd[1622]: 2025-07-07 00:12:48.341 [INFO][5890] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" Jul 7 00:12:48.377140 containerd[1622]: 2025-07-07 00:12:48.360 [INFO][5897] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" HandleID="k8s-pod-network.7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--p9bbs-eth0" Jul 7 00:12:48.377140 containerd[1622]: 2025-07-07 00:12:48.361 [INFO][5897] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:48.377140 containerd[1622]: 2025-07-07 00:12:48.361 [INFO][5897] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:12:48.377140 containerd[1622]: 2025-07-07 00:12:48.370 [WARNING][5897] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" HandleID="k8s-pod-network.7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--p9bbs-eth0" Jul 7 00:12:48.377140 containerd[1622]: 2025-07-07 00:12:48.370 [INFO][5897] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" HandleID="k8s-pod-network.7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--p9bbs-eth0" Jul 7 00:12:48.377140 containerd[1622]: 2025-07-07 00:12:48.372 [INFO][5897] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:12:48.377140 containerd[1622]: 2025-07-07 00:12:48.374 [INFO][5890] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" Jul 7 00:12:48.378462 containerd[1622]: time="2025-07-07T00:12:48.377165001Z" level=info msg="TearDown network for sandbox \"7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771\" successfully" Jul 7 00:12:48.378462 containerd[1622]: time="2025-07-07T00:12:48.377191160Z" level=info msg="StopPodSandbox for \"7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771\" returns successfully" Jul 7 00:12:48.378462 containerd[1622]: time="2025-07-07T00:12:48.377586151Z" level=info msg="RemovePodSandbox for \"7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771\"" Jul 7 00:12:48.378462 containerd[1622]: time="2025-07-07T00:12:48.377607360Z" level=info msg="Forcibly stopping sandbox \"7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771\"" Jul 7 00:12:48.552464 containerd[1622]: 2025-07-07 00:12:48.419 [WARNING][5912] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--p9bbs-eth0", GenerateName:"calico-apiserver-768db8c477-", Namespace:"calico-apiserver", SelfLink:"", UID:"e655fc02-7710-4160-9f19-efce05fe6db5", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 12, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"768db8c477", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"808a1b22b7d9203032c039ee3c6549a424b0a057134bbdc9ec920ee146e6e38b", Pod:"calico-apiserver-768db8c477-p9bbs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2e87166f44a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:48.552464 containerd[1622]: 2025-07-07 00:12:48.419 [INFO][5912] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" Jul 7 00:12:48.552464 containerd[1622]: 2025-07-07 00:12:48.419 [INFO][5912] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" iface="eth0" netns="" Jul 7 00:12:48.552464 containerd[1622]: 2025-07-07 00:12:48.419 [INFO][5912] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" Jul 7 00:12:48.552464 containerd[1622]: 2025-07-07 00:12:48.419 [INFO][5912] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" Jul 7 00:12:48.552464 containerd[1622]: 2025-07-07 00:12:48.493 [INFO][5919] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" HandleID="k8s-pod-network.7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--p9bbs-eth0" Jul 7 00:12:48.552464 containerd[1622]: 2025-07-07 00:12:48.494 [INFO][5919] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:48.552464 containerd[1622]: 2025-07-07 00:12:48.494 [INFO][5919] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:12:48.552464 containerd[1622]: 2025-07-07 00:12:48.524 [WARNING][5919] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" HandleID="k8s-pod-network.7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--p9bbs-eth0" Jul 7 00:12:48.552464 containerd[1622]: 2025-07-07 00:12:48.528 [INFO][5919] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" HandleID="k8s-pod-network.7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" Workload="ci--4081--3--4--d--d476fda7c5-k8s-calico--apiserver--768db8c477--p9bbs-eth0" Jul 7 00:12:48.552464 containerd[1622]: 2025-07-07 00:12:48.534 [INFO][5919] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:12:48.552464 containerd[1622]: 2025-07-07 00:12:48.546 [INFO][5912] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771" Jul 7 00:12:48.553775 containerd[1622]: time="2025-07-07T00:12:48.552544534Z" level=info msg="TearDown network for sandbox \"7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771\" successfully" Jul 7 00:12:48.562716 containerd[1622]: time="2025-07-07T00:12:48.562144593Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:12:48.562716 containerd[1622]: time="2025-07-07T00:12:48.562211449Z" level=info msg="RemovePodSandbox \"7c43de694127a99eda1cdf74e269290cdf7eca3ec00350c0540f4b682285f771\" returns successfully" Jul 7 00:12:48.575852 containerd[1622]: time="2025-07-07T00:12:48.575805337Z" level=info msg="StopPodSandbox for \"2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649\"" Jul 7 00:12:48.694155 containerd[1622]: 2025-07-07 00:12:48.650 [WARNING][5969] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--tz5tm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1b44121e-b0a3-4592-b84f-01e54dbb20d5", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 11, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"f2f72e98ea5db722da834471d0f5ceba9418651a1eeb1e7d8be1e9cc3d897c9c", Pod:"coredns-7c65d6cfc9-tz5tm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f925927565", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:48.694155 containerd[1622]: 2025-07-07 00:12:48.650 [INFO][5969] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" Jul 7 00:12:48.694155 containerd[1622]: 2025-07-07 00:12:48.650 [INFO][5969] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" iface="eth0" netns="" Jul 7 00:12:48.694155 containerd[1622]: 2025-07-07 00:12:48.650 [INFO][5969] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" Jul 7 00:12:48.694155 containerd[1622]: 2025-07-07 00:12:48.650 [INFO][5969] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" Jul 7 00:12:48.694155 containerd[1622]: 2025-07-07 00:12:48.673 [INFO][5984] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" HandleID="k8s-pod-network.2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" Workload="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--tz5tm-eth0" Jul 7 00:12:48.694155 containerd[1622]: 2025-07-07 00:12:48.673 [INFO][5984] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:48.694155 containerd[1622]: 2025-07-07 00:12:48.673 [INFO][5984] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:12:48.694155 containerd[1622]: 2025-07-07 00:12:48.684 [WARNING][5984] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" HandleID="k8s-pod-network.2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" Workload="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--tz5tm-eth0" Jul 7 00:12:48.694155 containerd[1622]: 2025-07-07 00:12:48.684 [INFO][5984] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" HandleID="k8s-pod-network.2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" Workload="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--tz5tm-eth0" Jul 7 00:12:48.694155 containerd[1622]: 2025-07-07 00:12:48.687 [INFO][5984] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:12:48.694155 containerd[1622]: 2025-07-07 00:12:48.690 [INFO][5969] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" Jul 7 00:12:48.695760 containerd[1622]: time="2025-07-07T00:12:48.694498860Z" level=info msg="TearDown network for sandbox \"2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649\" successfully" Jul 7 00:12:48.695760 containerd[1622]: time="2025-07-07T00:12:48.694524097Z" level=info msg="StopPodSandbox for \"2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649\" returns successfully" Jul 7 00:12:48.695760 containerd[1622]: time="2025-07-07T00:12:48.695059160Z" level=info msg="RemovePodSandbox for \"2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649\"" Jul 7 00:12:48.695760 containerd[1622]: time="2025-07-07T00:12:48.695110546Z" level=info msg="Forcibly stopping sandbox \"2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649\"" Jul 7 00:12:48.801026 containerd[1622]: 2025-07-07 00:12:48.745 [WARNING][5998] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--tz5tm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1b44121e-b0a3-4592-b84f-01e54dbb20d5", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 11, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-d-d476fda7c5", ContainerID:"f2f72e98ea5db722da834471d0f5ceba9418651a1eeb1e7d8be1e9cc3d897c9c", Pod:"coredns-7c65d6cfc9-tz5tm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f925927565", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:12:48.801026 containerd[1622]: 2025-07-07 00:12:48.745 [INFO][5998] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" Jul 7 00:12:48.801026 containerd[1622]: 2025-07-07 00:12:48.745 [INFO][5998] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" iface="eth0" netns="" Jul 7 00:12:48.801026 containerd[1622]: 2025-07-07 00:12:48.745 [INFO][5998] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" Jul 7 00:12:48.801026 containerd[1622]: 2025-07-07 00:12:48.745 [INFO][5998] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" Jul 7 00:12:48.801026 containerd[1622]: 2025-07-07 00:12:48.774 [INFO][6006] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" HandleID="k8s-pod-network.2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" Workload="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--tz5tm-eth0" Jul 7 00:12:48.801026 containerd[1622]: 2025-07-07 00:12:48.774 [INFO][6006] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:12:48.801026 containerd[1622]: 2025-07-07 00:12:48.774 [INFO][6006] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:12:48.801026 containerd[1622]: 2025-07-07 00:12:48.783 [WARNING][6006] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" HandleID="k8s-pod-network.2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" Workload="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--tz5tm-eth0" Jul 7 00:12:48.801026 containerd[1622]: 2025-07-07 00:12:48.783 [INFO][6006] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" HandleID="k8s-pod-network.2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" Workload="ci--4081--3--4--d--d476fda7c5-k8s-coredns--7c65d6cfc9--tz5tm-eth0" Jul 7 00:12:48.801026 containerd[1622]: 2025-07-07 00:12:48.785 [INFO][6006] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:12:48.801026 containerd[1622]: 2025-07-07 00:12:48.795 [INFO][5998] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649" Jul 7 00:12:48.804463 containerd[1622]: time="2025-07-07T00:12:48.802315527Z" level=info msg="TearDown network for sandbox \"2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649\" successfully" Jul 7 00:12:48.814252 containerd[1622]: time="2025-07-07T00:12:48.814038577Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:12:48.814252 containerd[1622]: time="2025-07-07T00:12:48.814099101Z" level=info msg="RemovePodSandbox \"2186c3b47dfa65902f1ad8d09718e0f6af8a3808acfba440ea07161ca6b04649\" returns successfully" Jul 7 00:12:49.070883 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 00:12:49.068484 systemd-resolved[1515]: Under memory pressure, flushing caches. Jul 7 00:12:49.068522 systemd-resolved[1515]: Flushed all caches. 
Jul 7 00:12:49.294506 containerd[1622]: time="2025-07-07T00:12:49.294426571Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:49.296919 containerd[1622]: time="2025-07-07T00:12:49.296886192Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 7 00:12:49.297430 containerd[1622]: time="2025-07-07T00:12:49.297394525Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:49.300523 containerd[1622]: time="2025-07-07T00:12:49.299991544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:49.301157 containerd[1622]: time="2025-07-07T00:12:49.300668825Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.335050293s" Jul 7 00:12:49.301157 containerd[1622]: time="2025-07-07T00:12:49.300694373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 7 00:12:49.420616 containerd[1622]: time="2025-07-07T00:12:49.419029142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 00:12:49.466406 containerd[1622]: time="2025-07-07T00:12:49.466325729Z" level=info msg="CreateContainer within sandbox \"b2d96f438683f360b7cd0eb1cc553685db9629c252406114897dd025f4fef5f5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 00:12:49.512485 containerd[1622]: time="2025-07-07T00:12:49.512287114Z" level=info msg="CreateContainer within sandbox \"b2d96f438683f360b7cd0eb1cc553685db9629c252406114897dd025f4fef5f5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"689495dc25e0eeb32cf861b901609240dd6c0e58ad212da0607118ab8d2cf0c4\"" Jul 7 00:12:49.515636 containerd[1622]: time="2025-07-07T00:12:49.515524052Z" level=info msg="StartContainer for \"689495dc25e0eeb32cf861b901609240dd6c0e58ad212da0607118ab8d2cf0c4\"" Jul 7 00:12:49.614031 containerd[1622]: time="2025-07-07T00:12:49.613921200Z" level=info msg="StartContainer for \"689495dc25e0eeb32cf861b901609240dd6c0e58ad212da0607118ab8d2cf0c4\" returns successfully" Jul 7 00:12:49.942466 containerd[1622]: time="2025-07-07T00:12:49.941433180Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:49.943870 containerd[1622]: time="2025-07-07T00:12:49.943828262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 00:12:49.946921 containerd[1622]: time="2025-07-07T00:12:49.946876417Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 527.80163ms" Jul 7 00:12:49.947040 containerd[1622]: time="2025-07-07T00:12:49.947021198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 00:12:49.948733 containerd[1622]: time="2025-07-07T00:12:49.948714552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 00:12:49.950743 containerd[1622]: time="2025-07-07T00:12:49.950696419Z" level=info msg="CreateContainer within sandbox \"090bd48b8396f3fd1b18b50d8b5a80abea5c4bb4b6aaa4d5c2522d0bfb8b79eb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 00:12:49.974982 containerd[1622]: time="2025-07-07T00:12:49.974936949Z" level=info msg="CreateContainer within sandbox \"090bd48b8396f3fd1b18b50d8b5a80abea5c4bb4b6aaa4d5c2522d0bfb8b79eb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"714217dfe5a74ace0fb407d345fc42bde132cbaff0f0081b4e130fc2440378fd\"" Jul 7 00:12:49.992230 containerd[1622]: time="2025-07-07T00:12:49.991544927Z" level=info msg="StartContainer for \"714217dfe5a74ace0fb407d345fc42bde132cbaff0f0081b4e130fc2440378fd\"" Jul 7 00:12:50.076118 containerd[1622]: time="2025-07-07T00:12:50.076015443Z" level=info msg="StartContainer for \"714217dfe5a74ace0fb407d345fc42bde132cbaff0f0081b4e130fc2440378fd\" returns successfully" Jul 7 00:12:50.820073 kubelet[2934]: I0707 00:12:50.807136 2934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-768db8c477-gvl7v" podStartSLOduration=34.164572833 podStartE2EDuration="49.78269757s" podCreationTimestamp="2025-07-07 00:12:01 +0000 UTC" firstStartedPulling="2025-07-07 00:12:34.330003177 +0000 UTC m=+48.785538339" lastFinishedPulling="2025-07-07 00:12:49.948127913 +0000 UTC m=+64.403663076" observedRunningTime="2025-07-07 00:12:50.780862609 +0000 UTC m=+65.236397813" watchObservedRunningTime="2025-07-07 00:12:50.78269757 +0000 UTC m=+65.238232763" Jul 7 00:12:51.121385 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 00:12:51.120010 systemd-resolved[1515]: Under memory pressure, flushing caches. Jul 7 00:12:51.120047 systemd-resolved[1515]: Flushed all caches. 
Jul 7 00:12:51.761888 kubelet[2934]: I0707 00:12:51.761814 2934 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:12:52.300669 containerd[1622]: time="2025-07-07T00:12:52.300604935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:52.302276 containerd[1622]: time="2025-07-07T00:12:52.302060824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 7 00:12:52.305214 containerd[1622]: time="2025-07-07T00:12:52.304175700Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:52.307028 containerd[1622]: time="2025-07-07T00:12:52.306971201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:12:52.307495 containerd[1622]: time="2025-07-07T00:12:52.307458704Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.35862042s" Jul 7 00:12:52.307540 containerd[1622]: time="2025-07-07T00:12:52.307497067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 7 00:12:52.318675 containerd[1622]: time="2025-07-07T00:12:52.318640921Z" level=info msg="CreateContainer within sandbox \"b2d96f438683f360b7cd0eb1cc553685db9629c252406114897dd025f4fef5f5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 00:12:52.375778 containerd[1622]: time="2025-07-07T00:12:52.375697908Z" level=info msg="CreateContainer within sandbox \"b2d96f438683f360b7cd0eb1cc553685db9629c252406114897dd025f4fef5f5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e7437ce862ed0b4471377d336a6d777185e8d179b1774948a6913e5d933008f2\"" Jul 7 00:12:52.377941 containerd[1622]: time="2025-07-07T00:12:52.377183894Z" level=info msg="StartContainer for \"e7437ce862ed0b4471377d336a6d777185e8d179b1774948a6913e5d933008f2\"" Jul 7 00:12:52.462182 systemd[1]: run-containerd-runc-k8s.io-e7437ce862ed0b4471377d336a6d777185e8d179b1774948a6913e5d933008f2-runc.GHYZT8.mount: Deactivated successfully. 
Jul 7 00:12:52.496417 containerd[1622]: time="2025-07-07T00:12:52.496379357Z" level=info msg="StartContainer for \"e7437ce862ed0b4471377d336a6d777185e8d179b1774948a6913e5d933008f2\" returns successfully" Jul 7 00:12:52.820100 kubelet[2934]: I0707 00:12:52.819788 2934 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-w2vh4" podStartSLOduration=30.700006144 podStartE2EDuration="48.819739203s" podCreationTimestamp="2025-07-07 00:12:04 +0000 UTC" firstStartedPulling="2025-07-07 00:12:34.192797414 +0000 UTC m=+48.648332578" lastFinishedPulling="2025-07-07 00:12:52.312530474 +0000 UTC m=+66.768065637" observedRunningTime="2025-07-07 00:12:52.802086716 +0000 UTC m=+67.257621919" watchObservedRunningTime="2025-07-07 00:12:52.819739203 +0000 UTC m=+67.275274396" Jul 7 00:12:53.126910 kubelet[2934]: I0707 00:12:53.117697 2934 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 00:12:53.130874 kubelet[2934]: I0707 00:12:53.130825 2934 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 00:12:53.165756 systemd-resolved[1515]: Under memory pressure, flushing caches. Jul 7 00:12:53.168108 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 00:12:53.165804 systemd-resolved[1515]: Flushed all caches. Jul 7 00:12:54.462351 systemd[1]: run-containerd-runc-k8s.io-cca5085dfb3946169085d71f0f0e38f211661aa063c27770de230281ee6e3c31-runc.qveUXG.mount: Deactivated successfully. Jul 7 00:13:06.931441 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 00:13:06.927056 systemd-resolved[1515]: Under memory pressure, flushing caches. Jul 7 00:13:06.927073 systemd-resolved[1515]: Flushed all caches. Jul 7 00:13:18.485777 systemd[1]: run-containerd-runc-k8s.io-f2e9ab9c3b3e54780f9a772e7cc30eccffcf094843d610a0e94a6c9537d1b1af-runc.J4h5OS.mount: Deactivated successfully. Jul 7 00:13:48.001098 systemd[1]: sshd@8-157.180.40.234:22-14.103.116.0:48924.service: Deactivated successfully. Jul 7 00:14:04.726104 systemd[1]: run-containerd-runc-k8s.io-0dee5ffe822ca6cb85832200a4f4e2c8815a62cd2373cdf0fc4ebb4d63d5ca91-runc.cJFi76.mount: Deactivated successfully. Jul 7 00:14:34.712308 systemd[1]: run-containerd-runc-k8s.io-0dee5ffe822ca6cb85832200a4f4e2c8815a62cd2373cdf0fc4ebb4d63d5ca91-runc.L3s5v8.mount: Deactivated successfully. Jul 7 00:14:48.425310 systemd[1]: run-containerd-runc-k8s.io-cca5085dfb3946169085d71f0f0e38f211661aa063c27770de230281ee6e3c31-runc.KA9WOi.mount: Deactivated successfully. Jul 7 00:14:49.328125 systemd[1]: Started sshd@10-157.180.40.234:22-49.232.53.248:36660.service - OpenSSH per-connection server daemon (49.232.53.248:36660). Jul 7 00:14:54.539020 sshd[6550]: Received disconnect from 49.232.53.248 port 36660:11: Bye Bye [preauth] Jul 7 00:14:54.539020 sshd[6550]: Disconnected from authenticating user root 49.232.53.248 port 36660 [preauth] Jul 7 00:14:54.542564 systemd[1]: sshd@10-157.180.40.234:22-49.232.53.248:36660.service: Deactivated successfully. Jul 7 00:15:18.418987 systemd[1]: run-containerd-runc-k8s.io-cca5085dfb3946169085d71f0f0e38f211661aa063c27770de230281ee6e3c31-runc.71Sept.mount: Deactivated successfully. Jul 7 00:15:34.715322 systemd[1]: run-containerd-runc-k8s.io-0dee5ffe822ca6cb85832200a4f4e2c8815a62cd2373cdf0fc4ebb4d63d5ca91-runc.GVIpI6.mount: Deactivated successfully. 
Jul 7 00:15:54.396423 systemd[1]: run-containerd-runc-k8s.io-cca5085dfb3946169085d71f0f0e38f211661aa063c27770de230281ee6e3c31-runc.McjYGz.mount: Deactivated successfully. Jul 7 00:16:40.711995 systemd[1]: Started sshd@11-157.180.40.234:22-147.75.109.163:48396.service - OpenSSH per-connection server daemon (147.75.109.163:48396). Jul 7 00:16:41.495096 systemd[1]: Started sshd@12-157.180.40.234:22-82.146.42.154:45474.service - OpenSSH per-connection server daemon (82.146.42.154:45474). Jul 7 00:16:41.629971 sshd[6888]: Connection closed by authenticating user root 82.146.42.154 port 45474 [preauth] Jul 7 00:16:41.633391 systemd[1]: sshd@12-157.180.40.234:22-82.146.42.154:45474.service: Deactivated successfully. Jul 7 00:16:41.772954 sshd[6886]: Accepted publickey for core from 147.75.109.163 port 48396 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:16:41.777057 sshd[6886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:16:41.808184 systemd-logind[1604]: New session 8 of user core. Jul 7 00:16:41.812200 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 00:16:43.361944 sshd[6886]: pam_unix(sshd:session): session closed for user core Jul 7 00:16:43.371199 systemd-logind[1604]: Session 8 logged out. Waiting for processes to exit. Jul 7 00:16:43.371889 systemd[1]: sshd@11-157.180.40.234:22-147.75.109.163:48396.service: Deactivated successfully. Jul 7 00:16:43.375077 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 00:16:43.376119 systemd-logind[1604]: Removed session 8. Jul 7 00:16:48.487400 systemd[1]: run-containerd-runc-k8s.io-f2e9ab9c3b3e54780f9a772e7cc30eccffcf094843d610a0e94a6c9537d1b1af-runc.OAPhOu.mount: Deactivated successfully. Jul 7 00:16:48.532345 systemd[1]: Started sshd@13-157.180.40.234:22-147.75.109.163:52482.service - OpenSSH per-connection server daemon (147.75.109.163:52482). Jul 7 00:16:49.602541 sshd[6949]: Accepted publickey for core from 147.75.109.163 port 52482 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:16:49.607025 sshd[6949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:16:49.613334 systemd-logind[1604]: New session 9 of user core. Jul 7 00:16:49.618169 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 00:16:50.890853 sshd[6949]: pam_unix(sshd:session): session closed for user core Jul 7 00:16:50.897047 systemd[1]: sshd@13-157.180.40.234:22-147.75.109.163:52482.service: Deactivated successfully. Jul 7 00:16:50.904336 systemd-logind[1604]: Session 9 logged out. Waiting for processes to exit. Jul 7 00:16:50.904439 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 00:16:50.908429 systemd-logind[1604]: Removed session 9. Jul 7 00:16:56.058072 systemd[1]: Started sshd@14-157.180.40.234:22-147.75.109.163:52496.service - OpenSSH per-connection server daemon (147.75.109.163:52496). Jul 7 00:16:57.085889 sshd[6985]: Accepted publickey for core from 147.75.109.163 port 52496 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:16:57.088315 sshd[6985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:16:57.095691 systemd-logind[1604]: New session 10 of user core. Jul 7 00:16:57.101194 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 7 00:16:57.915793 sshd[6985]: pam_unix(sshd:session): session closed for user core Jul 7 00:16:57.922024 systemd[1]: sshd@14-157.180.40.234:22-147.75.109.163:52496.service: Deactivated successfully. Jul 7 00:16:57.926860 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 00:16:57.928159 systemd-logind[1604]: Session 10 logged out. Waiting for processes to exit. Jul 7 00:16:57.930087 systemd-logind[1604]: Removed session 10. Jul 7 00:16:58.091467 systemd[1]: Started sshd@15-157.180.40.234:22-147.75.109.163:32988.service - OpenSSH per-connection server daemon (147.75.109.163:32988). Jul 7 00:16:59.116383 sshd[7000]: Accepted publickey for core from 147.75.109.163 port 32988 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:16:59.118063 sshd[7000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:16:59.121939 systemd-logind[1604]: New session 11 of user core. Jul 7 00:16:59.129388 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 00:17:00.018687 sshd[7000]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:00.029045 systemd[1]: sshd@15-157.180.40.234:22-147.75.109.163:32988.service: Deactivated successfully. Jul 7 00:17:00.035106 systemd-logind[1604]: Session 11 logged out. Waiting for processes to exit. Jul 7 00:17:00.035992 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 00:17:00.041372 systemd-logind[1604]: Removed session 11. Jul 7 00:17:00.188038 systemd[1]: Started sshd@16-157.180.40.234:22-147.75.109.163:32996.service - OpenSSH per-connection server daemon (147.75.109.163:32996). Jul 7 00:17:01.229373 sshd[7016]: Accepted publickey for core from 147.75.109.163 port 32996 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:17:01.231643 sshd[7016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:01.238837 systemd-logind[1604]: New session 12 of user core. Jul 7 00:17:01.243602 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 00:17:02.049049 sshd[7016]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:02.052354 systemd[1]: sshd@16-157.180.40.234:22-147.75.109.163:32996.service: Deactivated successfully. Jul 7 00:17:02.056837 systemd-logind[1604]: Session 12 logged out. Waiting for processes to exit. Jul 7 00:17:02.058034 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 00:17:02.059614 systemd-logind[1604]: Removed session 12. Jul 7 00:17:04.725874 systemd[1]: run-containerd-runc-k8s.io-0dee5ffe822ca6cb85832200a4f4e2c8815a62cd2373cdf0fc4ebb4d63d5ca91-runc.KAzYuu.mount: Deactivated successfully. Jul 7 00:17:05.014034 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 00:17:05.004638 systemd-resolved[1515]: Under memory pressure, flushing caches. Jul 7 00:17:05.004649 systemd-resolved[1515]: Flushed all caches. Jul 7 00:17:07.052317 systemd-resolved[1515]: Under memory pressure, flushing caches. Jul 7 00:17:07.055326 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 00:17:07.052330 systemd-resolved[1515]: Flushed all caches. Jul 7 00:17:07.223353 systemd[1]: Started sshd@17-157.180.40.234:22-147.75.109.163:33484.service - OpenSSH per-connection server daemon (147.75.109.163:33484). 
Jul 7 00:17:08.296072 sshd[7072]: Accepted publickey for core from 147.75.109.163 port 33484 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:17:08.300630 sshd[7072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:08.308918 systemd-logind[1604]: New session 13 of user core. Jul 7 00:17:08.315156 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 00:17:09.318746 sshd[7072]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:09.323603 systemd[1]: sshd@17-157.180.40.234:22-147.75.109.163:33484.service: Deactivated successfully. Jul 7 00:17:09.330271 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 00:17:09.332131 systemd-logind[1604]: Session 13 logged out. Waiting for processes to exit. Jul 7 00:17:09.334183 systemd-logind[1604]: Removed session 13. Jul 7 00:17:09.487071 systemd[1]: Started sshd@18-157.180.40.234:22-147.75.109.163:33488.service - OpenSSH per-connection server daemon (147.75.109.163:33488). Jul 7 00:17:10.515969 sshd[7086]: Accepted publickey for core from 147.75.109.163 port 33488 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:17:10.518538 sshd[7086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:10.526286 systemd-logind[1604]: New session 14 of user core. Jul 7 00:17:10.532541 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 00:17:11.602135 sshd[7086]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:11.610893 systemd[1]: sshd@18-157.180.40.234:22-147.75.109.163:33488.service: Deactivated successfully. Jul 7 00:17:11.619324 systemd-logind[1604]: Session 14 logged out. Waiting for processes to exit. Jul 7 00:17:11.620120 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 00:17:11.622550 systemd-logind[1604]: Removed session 14. Jul 7 00:17:11.777045 systemd[1]: Started sshd@19-157.180.40.234:22-147.75.109.163:33504.service - OpenSSH per-connection server daemon (147.75.109.163:33504). Jul 7 00:17:12.818476 sshd[7098]: Accepted publickey for core from 147.75.109.163 port 33504 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:17:12.820854 sshd[7098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:12.830903 systemd-logind[1604]: New session 15 of user core. Jul 7 00:17:12.836107 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 00:17:16.917009 sshd[7098]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:16.978212 systemd[1]: sshd@19-157.180.40.234:22-147.75.109.163:33504.service: Deactivated successfully. Jul 7 00:17:16.990071 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 00:17:16.995734 systemd-logind[1604]: Session 15 logged out. Waiting for processes to exit. Jul 7 00:17:17.016495 systemd-logind[1604]: Removed session 15. Jul 7 00:17:17.042000 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 00:17:17.046713 systemd-resolved[1515]: Under memory pressure, flushing caches. Jul 7 00:17:17.047423 systemd-resolved[1515]: Flushed all caches. Jul 7 00:17:17.091010 systemd[1]: Started sshd@20-157.180.40.234:22-147.75.109.163:35766.service - OpenSSH per-connection server daemon (147.75.109.163:35766). 
Jul 7 00:17:18.245457 sshd[7132]: Accepted publickey for core from 147.75.109.163 port 35766 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:17:18.267594 sshd[7132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:18.339581 systemd-logind[1604]: New session 16 of user core. Jul 7 00:17:18.343889 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 00:17:19.097869 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 00:17:19.102956 systemd-resolved[1515]: Under memory pressure, flushing caches. Jul 7 00:17:19.102966 systemd-resolved[1515]: Flushed all caches. Jul 7 00:17:19.275120 systemd[1]: run-containerd-runc-k8s.io-cca5085dfb3946169085d71f0f0e38f211661aa063c27770de230281ee6e3c31-runc.a4sOGT.mount: Deactivated successfully. Jul 7 00:17:20.479498 sshd[7132]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:20.514900 systemd[1]: sshd@20-157.180.40.234:22-147.75.109.163:35766.service: Deactivated successfully. Jul 7 00:17:20.527392 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 00:17:20.528549 systemd-logind[1604]: Session 16 logged out. Waiting for processes to exit. Jul 7 00:17:20.536392 systemd-logind[1604]: Removed session 16. Jul 7 00:17:20.655601 systemd[1]: Started sshd@21-157.180.40.234:22-147.75.109.163:35770.service - OpenSSH per-connection server daemon (147.75.109.163:35770). Jul 7 00:17:21.138591 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 00:17:21.133954 systemd-resolved[1515]: Under memory pressure, flushing caches. Jul 7 00:17:21.133967 systemd-resolved[1515]: Flushed all caches. Jul 7 00:17:21.722034 sshd[7192]: Accepted publickey for core from 147.75.109.163 port 35770 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:17:21.725804 sshd[7192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:21.738824 systemd-logind[1604]: New session 17 of user core. Jul 7 00:17:21.744007 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 00:17:23.473261 sshd[7192]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:23.476831 systemd-logind[1604]: Session 17 logged out. Waiting for processes to exit. Jul 7 00:17:23.477478 systemd[1]: sshd@21-157.180.40.234:22-147.75.109.163:35770.service: Deactivated successfully. Jul 7 00:17:23.485628 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 00:17:23.488267 systemd-logind[1604]: Removed session 17. Jul 7 00:17:28.648091 systemd[1]: Started sshd@22-157.180.40.234:22-147.75.109.163:35316.service - OpenSSH per-connection server daemon (147.75.109.163:35316). Jul 7 00:17:29.746096 sshd[7211]: Accepted publickey for core from 147.75.109.163 port 35316 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:17:29.751041 sshd[7211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:29.759479 systemd-logind[1604]: New session 18 of user core. Jul 7 00:17:29.764102 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 7 00:17:31.123491 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 00:17:31.125631 systemd-resolved[1515]: Under memory pressure, flushing caches. Jul 7 00:17:31.125644 systemd-resolved[1515]: Flushed all caches. 
Jul 7 00:17:31.184258 sshd[7211]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:31.189540 systemd[1]: sshd@22-157.180.40.234:22-147.75.109.163:35316.service: Deactivated successfully. Jul 7 00:17:31.195259 systemd-logind[1604]: Session 18 logged out. Waiting for processes to exit. Jul 7 00:17:31.204450 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 00:17:31.205918 systemd-logind[1604]: Removed session 18. Jul 7 00:17:36.361795 systemd[1]: Started sshd@23-157.180.40.234:22-147.75.109.163:33164.service - OpenSSH per-connection server daemon (147.75.109.163:33164). Jul 7 00:17:37.483575 sshd[7246]: Accepted publickey for core from 147.75.109.163 port 33164 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:17:37.487515 sshd[7246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:37.498624 systemd-logind[1604]: New session 19 of user core. Jul 7 00:17:37.506187 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 00:17:39.218608 sshd[7246]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:39.225218 systemd[1]: sshd@23-157.180.40.234:22-147.75.109.163:33164.service: Deactivated successfully. Jul 7 00:17:39.230438 systemd-logind[1604]: Session 19 logged out. Waiting for processes to exit. Jul 7 00:17:39.230477 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 00:17:39.236421 systemd-logind[1604]: Removed session 19. Jul 7 00:17:54.940535 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad77b76c1cb620faa4d9160045f86c17a3d58dc75bbebf0af95a7485f8f75932-rootfs.mount: Deactivated successfully. Jul 7 00:17:55.000545 containerd[1622]: time="2025-07-07T00:17:54.969224415Z" level=info msg="shim disconnected" id=ad77b76c1cb620faa4d9160045f86c17a3d58dc75bbebf0af95a7485f8f75932 namespace=k8s.io Jul 7 00:17:55.000545 containerd[1622]: time="2025-07-07T00:17:55.000538427Z" level=warning msg="cleaning up after shim disconnected" id=ad77b76c1cb620faa4d9160045f86c17a3d58dc75bbebf0af95a7485f8f75932 namespace=k8s.io Jul 7 00:17:55.000545 containerd[1622]: time="2025-07-07T00:17:55.000556301Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:17:55.274145 kubelet[2934]: E0707 00:17:55.268193 2934 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:49206->10.0.0.2:2379: read: connection timed out" Jul 7 00:17:55.443149 containerd[1622]: time="2025-07-07T00:17:55.442145425Z" level=info msg="shim disconnected" id=7a03205ffbe632678317bbc69e5bdb4918dcdb09ae72c7d30a16dcc7800a28e7 namespace=k8s.io Jul 7 00:17:55.443149 containerd[1622]: time="2025-07-07T00:17:55.442288223Z" level=warning msg="cleaning up after shim disconnected" id=7a03205ffbe632678317bbc69e5bdb4918dcdb09ae72c7d30a16dcc7800a28e7 namespace=k8s.io Jul 7 00:17:55.443149 containerd[1622]: time="2025-07-07T00:17:55.442303852Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:17:55.448555 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a03205ffbe632678317bbc69e5bdb4918dcdb09ae72c7d30a16dcc7800a28e7-rootfs.mount: Deactivated successfully. 
Jul 7 00:17:56.061705 kubelet[2934]: I0707 00:17:56.061613 2934 scope.go:117] "RemoveContainer" containerID="ad77b76c1cb620faa4d9160045f86c17a3d58dc75bbebf0af95a7485f8f75932" Jul 7 00:17:56.067395 kubelet[2934]: I0707 00:17:56.067051 2934 scope.go:117] "RemoveContainer" containerID="7a03205ffbe632678317bbc69e5bdb4918dcdb09ae72c7d30a16dcc7800a28e7" Jul 7 00:17:56.196264 containerd[1622]: time="2025-07-07T00:17:56.196038785Z" level=info msg="CreateContainer within sandbox \"05fa00a7ba2aa5b4a68668a5004e6459282bafd60493348d81c35bd3479f5e92\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 7 00:17:56.220751 containerd[1622]: time="2025-07-07T00:17:56.220191749Z" level=info msg="CreateContainer within sandbox \"5a8cc73f6ba116eab540254e9b932b3be294525f8de81fe5974185f1e647d061\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jul 7 00:17:56.338177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4195811467.mount: Deactivated successfully. Jul 7 00:17:56.345714 containerd[1622]: time="2025-07-07T00:17:56.345646775Z" level=info msg="CreateContainer within sandbox \"05fa00a7ba2aa5b4a68668a5004e6459282bafd60493348d81c35bd3479f5e92\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"4a549cc74321569f05cf87866e2842f978385295c3afe5e99de8c489332990bc\"" Jul 7 00:17:56.346426 containerd[1622]: time="2025-07-07T00:17:56.346306730Z" level=info msg="StartContainer for \"4a549cc74321569f05cf87866e2842f978385295c3afe5e99de8c489332990bc\"" Jul 7 00:17:56.356043 containerd[1622]: time="2025-07-07T00:17:56.356000131Z" level=info msg="CreateContainer within sandbox \"5a8cc73f6ba116eab540254e9b932b3be294525f8de81fe5974185f1e647d061\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"ed47f6421cc3f2680c44e82f0cc197394562fdc3bd91c48aacb9a465166fc1fe\"" Jul 7 00:17:56.356702 containerd[1622]: time="2025-07-07T00:17:56.356652912Z" level=info msg="StartContainer for \"ed47f6421cc3f2680c44e82f0cc197394562fdc3bd91c48aacb9a465166fc1fe\"" Jul 7 00:17:56.451395 containerd[1622]: time="2025-07-07T00:17:56.450764170Z" level=info msg="StartContainer for \"4a549cc74321569f05cf87866e2842f978385295c3afe5e99de8c489332990bc\" returns successfully" Jul 7 00:17:56.451603 containerd[1622]: time="2025-07-07T00:17:56.451583003Z" level=info msg="StartContainer for \"ed47f6421cc3f2680c44e82f0cc197394562fdc3bd91c48aacb9a465166fc1fe\" returns successfully" Jul 7 00:17:57.041634 systemd-journald[1172]: Under memory pressure, flushing caches. Jul 7 00:17:57.041094 systemd-resolved[1515]: Under memory pressure, flushing caches. Jul 7 00:17:57.041107 systemd-resolved[1515]: Flushed all caches.