Sep 13 00:09:33.850838 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025 Sep 13 00:09:33.850858 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534 Sep 13 00:09:33.850865 kernel: BIOS-provided physical RAM map: Sep 13 00:09:33.850870 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Sep 13 00:09:33.850875 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Sep 13 00:09:33.850879 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Sep 13 00:09:33.850885 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable Sep 13 00:09:33.850889 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved Sep 13 00:09:33.850895 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 13 00:09:33.850899 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Sep 13 00:09:33.850904 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 13 00:09:33.850908 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Sep 13 00:09:33.850913 kernel: NX (Execute Disable) protection: active Sep 13 00:09:33.850918 kernel: APIC: Static calls initialized Sep 13 00:09:33.850925 kernel: SMBIOS 2.8 present. Sep 13 00:09:33.850930 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 Sep 13 00:09:33.850935 kernel: Hypervisor detected: KVM Sep 13 00:09:33.850939 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 13 00:09:33.850944 kernel: kvm-clock: using sched offset of 3008244727 cycles Sep 13 00:09:33.850950 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 13 00:09:33.850955 kernel: tsc: Detected 2445.406 MHz processor Sep 13 00:09:33.850960 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 13 00:09:33.850965 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 13 00:09:33.850972 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000 Sep 13 00:09:33.850977 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Sep 13 00:09:33.850982 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 13 00:09:33.850987 kernel: Using GB pages for direct mapping Sep 13 00:09:33.850992 kernel: ACPI: Early table checksum verification disabled Sep 13 00:09:33.850996 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS ) Sep 13 00:09:33.851001 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:09:33.851006 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:09:33.851011 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:09:33.851018 kernel: ACPI: FACS 0x000000007CFE0000 000040 Sep 13 00:09:33.851023 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:09:33.851027 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) 
Sep 13 00:09:33.851033 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:09:33.851037 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:09:33.851043 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576] Sep 13 00:09:33.851065 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482] Sep 13 00:09:33.851071 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f] Sep 13 00:09:33.851080 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6] Sep 13 00:09:33.851086 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e] Sep 13 00:09:33.851091 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a] Sep 13 00:09:33.851097 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692] Sep 13 00:09:33.851102 kernel: No NUMA configuration found Sep 13 00:09:33.851107 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff] Sep 13 00:09:33.851114 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff] Sep 13 00:09:33.851119 kernel: Zone ranges: Sep 13 00:09:33.851125 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 13 00:09:33.851130 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff] Sep 13 00:09:33.851135 kernel: Normal empty Sep 13 00:09:33.851140 kernel: Movable zone start for each node Sep 13 00:09:33.851145 kernel: Early memory node ranges Sep 13 00:09:33.851151 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Sep 13 00:09:33.851156 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff] Sep 13 00:09:33.851161 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff] Sep 13 00:09:33.851168 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 13 00:09:33.851173 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 13 00:09:33.851178 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Sep 13 00:09:33.851183 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 13 00:09:33.851189 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 13 00:09:33.851194 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 13 00:09:33.851199 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 13 00:09:33.851204 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 13 00:09:33.851209 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 13 00:09:33.851216 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 13 00:09:33.851221 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 13 00:09:33.851227 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 13 00:09:33.851232 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 13 00:09:33.851237 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Sep 13 00:09:33.851242 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 13 00:09:33.851248 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Sep 13 00:09:33.851253 kernel: Booting paravirtualized kernel on KVM Sep 13 00:09:33.851258 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 13 00:09:33.851265 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 13 00:09:33.851270 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576 Sep 13 00:09:33.851275 kernel: pcpu-alloc: 
s197160 r8192 d32216 u1048576 alloc=1*2097152 Sep 13 00:09:33.851281 kernel: pcpu-alloc: [0] 0 1 Sep 13 00:09:33.851286 kernel: kvm-guest: PV spinlocks disabled, no host support Sep 13 00:09:33.851292 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534 Sep 13 00:09:33.851298 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 13 00:09:33.851303 kernel: random: crng init done Sep 13 00:09:33.851309 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 13 00:09:33.851315 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 13 00:09:33.851320 kernel: Fallback order for Node 0: 0 Sep 13 00:09:33.851325 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708 Sep 13 00:09:33.851330 kernel: Policy zone: DMA32 Sep 13 00:09:33.851335 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 13 00:09:33.851341 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 125152K reserved, 0K cma-reserved) Sep 13 00:09:33.851347 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 13 00:09:33.851352 kernel: ftrace: allocating 37974 entries in 149 pages Sep 13 00:09:33.851359 kernel: ftrace: allocated 149 pages with 4 groups Sep 13 00:09:33.851364 kernel: Dynamic Preempt: voluntary Sep 13 00:09:33.851369 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 13 00:09:33.851375 kernel: rcu: RCU event tracing is enabled. Sep 13 00:09:33.851381 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 13 00:09:33.851386 kernel: Trampoline variant of Tasks RCU enabled. Sep 13 00:09:33.851392 kernel: Rude variant of Tasks RCU enabled. Sep 13 00:09:33.851397 kernel: Tracing variant of Tasks RCU enabled. Sep 13 00:09:33.851402 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 13 00:09:33.851408 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 13 00:09:33.851414 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Sep 13 00:09:33.851419 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 13 00:09:33.851425 kernel: Console: colour VGA+ 80x25 Sep 13 00:09:33.851430 kernel: printk: console [tty0] enabled Sep 13 00:09:33.851435 kernel: printk: console [ttyS0] enabled Sep 13 00:09:33.851440 kernel: ACPI: Core revision 20230628 Sep 13 00:09:33.851446 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 13 00:09:33.851451 kernel: APIC: Switch to symmetric I/O mode setup Sep 13 00:09:33.851456 kernel: x2apic enabled Sep 13 00:09:33.851463 kernel: APIC: Switched APIC routing to: physical x2apic Sep 13 00:09:33.851468 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 13 00:09:33.851473 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 13 00:09:33.851479 kernel: Calibrating delay loop (skipped) preset value.. 
4890.81 BogoMIPS (lpj=2445406) Sep 13 00:09:33.851484 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 13 00:09:33.851489 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 13 00:09:33.851494 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 13 00:09:33.851500 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 13 00:09:33.851511 kernel: Spectre V2 : Mitigation: Retpolines Sep 13 00:09:33.851516 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 13 00:09:33.851522 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 13 00:09:33.851527 kernel: active return thunk: retbleed_return_thunk Sep 13 00:09:33.851534 kernel: RETBleed: Mitigation: untrained return thunk Sep 13 00:09:33.851540 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 13 00:09:33.851545 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 13 00:09:33.851551 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 13 00:09:33.851557 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 13 00:09:33.851564 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 13 00:09:33.851570 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 13 00:09:33.851576 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 13 00:09:33.851582 kernel: Freeing SMP alternatives memory: 32K Sep 13 00:09:33.851587 kernel: pid_max: default: 32768 minimum: 301 Sep 13 00:09:33.851593 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 13 00:09:33.851598 kernel: landlock: Up and running. Sep 13 00:09:33.851604 kernel: SELinux: Initializing. Sep 13 00:09:33.851611 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 13 00:09:33.851616 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 13 00:09:33.851622 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 13 00:09:33.851628 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 13 00:09:33.851634 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 13 00:09:33.851639 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 13 00:09:33.851645 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 13 00:09:33.851651 kernel: ... version: 0 Sep 13 00:09:33.851656 kernel: ... bit width: 48 Sep 13 00:09:33.851663 kernel: ... generic registers: 6 Sep 13 00:09:33.851669 kernel: ... value mask: 0000ffffffffffff Sep 13 00:09:33.851674 kernel: ... max period: 00007fffffffffff Sep 13 00:09:33.851680 kernel: ... fixed-purpose events: 0 Sep 13 00:09:33.851685 kernel: ... event mask: 000000000000003f Sep 13 00:09:33.851691 kernel: signal: max sigframe size: 1776 Sep 13 00:09:33.851696 kernel: rcu: Hierarchical SRCU implementation. Sep 13 00:09:33.851702 kernel: rcu: Max phase no-delay instances is 400. Sep 13 00:09:33.851707 kernel: smp: Bringing up secondary CPUs ... Sep 13 00:09:33.851714 kernel: smpboot: x86: Booting SMP configuration: Sep 13 00:09:33.851720 kernel: .... 
node #0, CPUs: #1 Sep 13 00:09:33.851725 kernel: smp: Brought up 1 node, 2 CPUs Sep 13 00:09:33.851730 kernel: smpboot: Max logical packages: 1 Sep 13 00:09:33.851736 kernel: smpboot: Total of 2 processors activated (9781.62 BogoMIPS) Sep 13 00:09:33.851742 kernel: devtmpfs: initialized Sep 13 00:09:33.851747 kernel: x86/mm: Memory block size: 128MB Sep 13 00:09:33.851753 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 13 00:09:33.851758 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 13 00:09:33.851765 kernel: pinctrl core: initialized pinctrl subsystem Sep 13 00:09:33.851771 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 13 00:09:33.851776 kernel: audit: initializing netlink subsys (disabled) Sep 13 00:09:33.851782 kernel: audit: type=2000 audit(1757722172.987:1): state=initialized audit_enabled=0 res=1 Sep 13 00:09:33.851787 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 13 00:09:33.851793 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 13 00:09:33.851798 kernel: cpuidle: using governor menu Sep 13 00:09:33.851804 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 13 00:09:33.851820 kernel: dca service started, version 1.12.1 Sep 13 00:09:33.851827 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Sep 13 00:09:33.851833 kernel: PCI: Using configuration type 1 for base access Sep 13 00:09:33.851839 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 13 00:09:33.851844 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 13 00:09:33.851850 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 13 00:09:33.851856 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 13 00:09:33.851861 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 13 00:09:33.851867 kernel: ACPI: Added _OSI(Module Device) Sep 13 00:09:33.851872 kernel: ACPI: Added _OSI(Processor Device) Sep 13 00:09:33.851879 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 13 00:09:33.851884 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 13 00:09:33.851890 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 13 00:09:33.851895 kernel: ACPI: Interpreter enabled Sep 13 00:09:33.851901 kernel: ACPI: PM: (supports S0 S5) Sep 13 00:09:33.851906 kernel: ACPI: Using IOAPIC for interrupt routing Sep 13 00:09:33.851912 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 13 00:09:33.851918 kernel: PCI: Using E820 reservations for host bridge windows Sep 13 00:09:33.851923 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 13 00:09:33.851930 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 13 00:09:33.852039 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 13 00:09:33.852365 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 13 00:09:33.852433 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 13 00:09:33.852442 kernel: PCI host bridge to bus 0000:00 Sep 13 00:09:33.852510 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 13 00:09:33.852570 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 13 00:09:33.852632 kernel: pci_bus 0000:00: root bus resource [mem 
0x000a0000-0x000bffff window] Sep 13 00:09:33.852688 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window] Sep 13 00:09:33.852743 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 13 00:09:33.852797 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Sep 13 00:09:33.852868 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 13 00:09:33.852945 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 13 00:09:33.853024 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 Sep 13 00:09:33.853115 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref] Sep 13 00:09:33.853180 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref] Sep 13 00:09:33.853244 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff] Sep 13 00:09:33.853307 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref] Sep 13 00:09:33.853370 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 13 00:09:33.853444 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Sep 13 00:09:33.853514 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff] Sep 13 00:09:33.853584 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Sep 13 00:09:33.853646 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff] Sep 13 00:09:33.853716 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Sep 13 00:09:33.853780 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff] Sep 13 00:09:33.853863 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Sep 13 00:09:33.853938 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff] Sep 13 00:09:33.854008 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Sep 13 00:09:33.854133 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff] Sep 13 00:09:33.854210 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Sep 13 00:09:33.854274 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff] Sep 13 00:09:33.854365 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Sep 13 00:09:33.854434 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff] Sep 13 00:09:33.854503 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Sep 13 00:09:33.854566 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff] Sep 13 00:09:33.854634 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Sep 13 00:09:33.854696 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff] Sep 13 00:09:33.854768 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 13 00:09:33.854855 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 13 00:09:33.854925 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 13 00:09:33.854998 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f] Sep 13 00:09:33.855117 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff] Sep 13 00:09:33.855194 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 13 00:09:33.855261 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Sep 13 00:09:33.855339 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Sep 13 00:09:33.855412 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff] Sep 13 00:09:33.855480 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Sep 13 00:09:33.855588 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff 
pref] Sep 13 00:09:33.855660 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Sep 13 00:09:33.855723 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Sep 13 00:09:33.855786 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Sep 13 00:09:33.855878 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Sep 13 00:09:33.855956 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit] Sep 13 00:09:33.856020 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Sep 13 00:09:33.856104 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Sep 13 00:09:33.856167 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Sep 13 00:09:33.856239 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Sep 13 00:09:33.856305 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff] Sep 13 00:09:33.856375 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref] Sep 13 00:09:33.856438 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Sep 13 00:09:33.856500 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Sep 13 00:09:33.856561 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Sep 13 00:09:33.856631 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Sep 13 00:09:33.856696 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Sep 13 00:09:33.856758 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Sep 13 00:09:33.856836 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Sep 13 00:09:33.856904 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Sep 13 00:09:33.856974 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Sep 13 00:09:33.857040 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref] Sep 13 00:09:33.857129 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Sep 13 00:09:33.857192 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Sep 13 00:09:33.857253 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Sep 13 00:09:33.857326 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Sep 13 00:09:33.857396 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff] Sep 13 00:09:33.857460 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref] Sep 13 00:09:33.857522 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Sep 13 00:09:33.857584 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Sep 13 00:09:33.857645 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Sep 13 00:09:33.857653 kernel: acpiphp: Slot [0] registered Sep 13 00:09:33.857721 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Sep 13 00:09:33.857854 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff] Sep 13 00:09:33.857963 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref] Sep 13 00:09:33.858081 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref] Sep 13 00:09:33.858175 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Sep 13 00:09:33.858267 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Sep 13 00:09:33.858363 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Sep 13 00:09:33.858374 kernel: acpiphp: Slot [0-2] registered Sep 13 00:09:33.858466 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Sep 13 00:09:33.858566 kernel: pci 0000:00:02.7: bridge 
window [mem 0xfda00000-0xfdbfffff] Sep 13 00:09:33.858660 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Sep 13 00:09:33.858672 kernel: acpiphp: Slot [0-3] registered Sep 13 00:09:33.858740 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Sep 13 00:09:33.858849 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Sep 13 00:09:33.858919 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Sep 13 00:09:33.858929 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 13 00:09:33.858935 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 13 00:09:33.858941 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 13 00:09:33.858951 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 13 00:09:33.858957 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 13 00:09:33.858962 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 13 00:09:33.858968 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 13 00:09:33.858974 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 13 00:09:33.858980 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 13 00:09:33.858985 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 13 00:09:33.858993 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 13 00:09:33.858999 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 13 00:09:33.859006 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 13 00:09:33.859012 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 13 00:09:33.859017 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 13 00:09:33.859023 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 13 00:09:33.859028 kernel: iommu: Default domain type: Translated Sep 13 00:09:33.859034 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 13 00:09:33.859040 kernel: PCI: Using ACPI for IRQ routing Sep 13 00:09:33.859082 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 13 00:09:33.859090 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Sep 13 00:09:33.859099 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff] Sep 13 00:09:33.859174 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 13 00:09:33.859239 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 13 00:09:33.859303 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 13 00:09:33.859313 kernel: vgaarb: loaded Sep 13 00:09:33.859319 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 13 00:09:33.859326 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 13 00:09:33.859332 kernel: clocksource: Switched to clocksource kvm-clock Sep 13 00:09:33.859337 kernel: VFS: Disk quotas dquot_6.6.0 Sep 13 00:09:33.859347 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 13 00:09:33.859353 kernel: pnp: PnP ACPI init Sep 13 00:09:33.859423 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 13 00:09:33.859433 kernel: pnp: PnP ACPI: found 5 devices Sep 13 00:09:33.859439 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 13 00:09:33.859445 kernel: NET: Registered PF_INET protocol family Sep 13 00:09:33.859451 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 13 00:09:33.859457 kernel: tcp_listen_portaddr_hash hash table entries: 1024 
(order: 2, 16384 bytes, linear) Sep 13 00:09:33.859466 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 13 00:09:33.859471 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 13 00:09:33.859477 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 13 00:09:33.859483 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 13 00:09:33.859489 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 13 00:09:33.859494 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 13 00:09:33.859500 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 13 00:09:33.859505 kernel: NET: Registered PF_XDP protocol family Sep 13 00:09:33.859572 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Sep 13 00:09:33.859639 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Sep 13 00:09:33.859705 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Sep 13 00:09:33.859769 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff] Sep 13 00:09:33.859859 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff] Sep 13 00:09:33.859934 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff] Sep 13 00:09:33.859998 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Sep 13 00:09:33.860434 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Sep 13 00:09:33.860509 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Sep 13 00:09:33.860575 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Sep 13 00:09:33.860676 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Sep 13 00:09:33.860747 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Sep 13 00:09:33.860826 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Sep 13 00:09:33.860897 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Sep 13 00:09:33.860962 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Sep 13 00:09:33.861028 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Sep 13 00:09:33.861117 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Sep 13 00:09:33.861185 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Sep 13 00:09:33.861251 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Sep 13 00:09:33.861317 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Sep 13 00:09:33.861381 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Sep 13 00:09:33.861446 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Sep 13 00:09:33.861518 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Sep 13 00:09:33.861595 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Sep 13 00:09:33.861665 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Sep 13 00:09:33.861732 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Sep 13 00:09:33.861802 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Sep 13 00:09:33.861909 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Sep 13 00:09:33.861976 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Sep 13 00:09:33.862042 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Sep 13 00:09:33.862130 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Sep 13 00:09:33.862197 
kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Sep 13 00:09:33.862262 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Sep 13 00:09:33.862335 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Sep 13 00:09:33.862451 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Sep 13 00:09:33.862547 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Sep 13 00:09:33.862622 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 13 00:09:33.862695 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 13 00:09:33.862755 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 13 00:09:33.862833 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window] Sep 13 00:09:33.862908 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 13 00:09:33.862966 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Sep 13 00:09:33.863913 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff] Sep 13 00:09:33.863994 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref] Sep 13 00:09:33.864085 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff] Sep 13 00:09:33.864151 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Sep 13 00:09:33.864216 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff] Sep 13 00:09:33.864278 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Sep 13 00:09:33.864345 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff] Sep 13 00:09:33.864411 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Sep 13 00:09:33.864479 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff] Sep 13 00:09:33.864539 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Sep 13 00:09:33.864606 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff] Sep 13 00:09:33.864667 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Sep 13 00:09:33.864734 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Sep 13 00:09:33.864802 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff] Sep 13 00:09:33.864880 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Sep 13 00:09:33.864948 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Sep 13 00:09:33.865011 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff] Sep 13 00:09:33.866696 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Sep 13 00:09:33.866772 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Sep 13 00:09:33.866856 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff] Sep 13 00:09:33.866915 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Sep 13 00:09:33.866925 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 13 00:09:33.866931 kernel: PCI: CLS 0 bytes, default 64 Sep 13 00:09:33.866937 kernel: Initialise system trusted keyrings Sep 13 00:09:33.866944 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 13 00:09:33.866950 kernel: Key type asymmetric registered Sep 13 00:09:33.866956 kernel: Asymmetric key parser 'x509' registered Sep 13 00:09:33.866962 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 13 00:09:33.866971 kernel: io scheduler mq-deadline registered Sep 13 00:09:33.866977 kernel: io scheduler kyber registered Sep 13 00:09:33.866983 kernel: io 
scheduler bfq registered Sep 13 00:09:33.867066 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Sep 13 00:09:33.867136 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Sep 13 00:09:33.867282 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Sep 13 00:09:33.867464 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Sep 13 00:09:33.867538 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Sep 13 00:09:33.867604 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Sep 13 00:09:33.867674 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Sep 13 00:09:33.867742 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Sep 13 00:09:33.867805 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Sep 13 00:09:33.867891 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Sep 13 00:09:33.867956 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Sep 13 00:09:33.868020 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Sep 13 00:09:33.868466 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Sep 13 00:09:33.868549 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Sep 13 00:09:33.868621 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Sep 13 00:09:33.868686 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Sep 13 00:09:33.868696 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 13 00:09:33.868758 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Sep 13 00:09:33.868839 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Sep 13 00:09:33.868849 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 13 00:09:33.868856 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Sep 13 00:09:33.868862 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 00:09:33.868871 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 13 00:09:33.868877 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 13 00:09:33.868883 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 13 00:09:33.868889 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 13 00:09:33.868895 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 13 00:09:33.868964 kernel: rtc_cmos 00:03: RTC can wake from S4 Sep 13 00:09:33.869024 kernel: rtc_cmos 00:03: registered as rtc0 Sep 13 00:09:33.870151 kernel: rtc_cmos 00:03: setting system clock to 2025-09-13T00:09:33 UTC (1757722173) Sep 13 00:09:33.870224 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 13 00:09:33.870234 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 13 00:09:33.870241 kernel: NET: Registered PF_INET6 protocol family Sep 13 00:09:33.870247 kernel: Segment Routing with IPv6 Sep 13 00:09:33.870253 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 00:09:33.870259 kernel: NET: Registered PF_PACKET protocol family Sep 13 00:09:33.870265 kernel: Key type dns_resolver registered Sep 13 00:09:33.870271 kernel: IPI shorthand broadcast: enabled Sep 13 00:09:33.870278 kernel: sched_clock: Marking stable (1060012171, 133649195)->(1203122046, -9460680) Sep 13 00:09:33.870286 kernel: registered taskstats version 1 Sep 13 00:09:33.870292 kernel: Loading compiled-in X.509 certificates Sep 13 00:09:33.870300 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6' Sep 13 00:09:33.870306 kernel: Key type .fscrypt registered Sep 13 00:09:33.870312 kernel: Key type 
fscrypt-provisioning registered Sep 13 00:09:33.870318 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 13 00:09:33.870324 kernel: ima: Allocated hash algorithm: sha1 Sep 13 00:09:33.870330 kernel: ima: No architecture policies found Sep 13 00:09:33.870336 kernel: clk: Disabling unused clocks Sep 13 00:09:33.870344 kernel: Freeing unused kernel image (initmem) memory: 42884K Sep 13 00:09:33.870350 kernel: Write protecting the kernel read-only data: 36864k Sep 13 00:09:33.870356 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 13 00:09:33.870362 kernel: Run /init as init process Sep 13 00:09:33.870368 kernel: with arguments: Sep 13 00:09:33.870374 kernel: /init Sep 13 00:09:33.870380 kernel: with environment: Sep 13 00:09:33.870386 kernel: HOME=/ Sep 13 00:09:33.870392 kernel: TERM=linux Sep 13 00:09:33.870400 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 00:09:33.870408 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:09:33.870416 systemd[1]: Detected virtualization kvm. Sep 13 00:09:33.870423 systemd[1]: Detected architecture x86-64. Sep 13 00:09:33.870429 systemd[1]: Running in initrd. Sep 13 00:09:33.870435 systemd[1]: No hostname configured, using default hostname. Sep 13 00:09:33.870441 systemd[1]: Hostname set to . Sep 13 00:09:33.870449 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:09:33.870455 systemd[1]: Queued start job for default target initrd.target. Sep 13 00:09:33.870462 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:09:33.870468 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:09:33.870475 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 13 00:09:33.870481 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:09:33.870488 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 13 00:09:33.870494 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 13 00:09:33.870503 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 13 00:09:33.870510 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 13 00:09:33.870516 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:09:33.870523 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:09:33.870529 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:09:33.870535 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:09:33.870541 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:09:33.870549 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:09:33.870555 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:09:33.870562 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Sep 13 00:09:33.870568 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 13 00:09:33.870575 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 13 00:09:33.870581 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:09:33.870587 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:09:33.870594 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:09:33.870600 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:09:33.870608 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 13 00:09:33.870614 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:09:33.870620 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 13 00:09:33.870627 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 00:09:33.870633 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:09:33.870639 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:09:33.870645 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:09:33.870664 systemd-journald[187]: Collecting audit messages is disabled. Sep 13 00:09:33.870684 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 13 00:09:33.870696 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:09:33.870714 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 00:09:33.870733 systemd-journald[187]: Journal started Sep 13 00:09:33.870755 systemd-journald[187]: Runtime Journal (/run/log/journal/d61c6918472946fa923d04be4795a20a) is 4.8M, max 38.4M, 33.6M free. Sep 13 00:09:33.875142 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 00:09:33.868217 systemd-modules-load[188]: Inserted module 'overlay' Sep 13 00:09:33.916689 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 00:09:33.916706 kernel: Bridge firewalling registered Sep 13 00:09:33.916715 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:09:33.893142 systemd-modules-load[188]: Inserted module 'br_netfilter' Sep 13 00:09:33.917389 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:09:33.918364 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:09:33.919506 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:09:33.925183 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:09:33.929183 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:09:33.932679 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:09:33.938144 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:09:33.941083 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:09:33.943258 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:09:33.945306 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Sep 13 00:09:33.946613 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:09:33.963179 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 13 00:09:33.966022 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:09:33.972074 dracut-cmdline[220]: dracut-dracut-053 Sep 13 00:09:33.974040 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534 Sep 13 00:09:33.991698 systemd-resolved[221]: Positive Trust Anchors: Sep 13 00:09:33.991711 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:09:33.991736 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:09:34.000007 systemd-resolved[221]: Defaulting to hostname 'linux'. Sep 13 00:09:34.000789 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:09:34.001501 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:09:34.022078 kernel: SCSI subsystem initialized Sep 13 00:09:34.030077 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:09:34.039076 kernel: iscsi: registered transport (tcp) Sep 13 00:09:34.055073 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:09:34.055101 kernel: QLogic iSCSI HBA Driver Sep 13 00:09:34.079877 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 13 00:09:34.084204 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 13 00:09:34.108333 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 00:09:34.108377 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:09:34.110073 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 13 00:09:34.149073 kernel: raid6: avx2x4 gen() 30676 MB/s Sep 13 00:09:34.166066 kernel: raid6: avx2x2 gen() 31126 MB/s Sep 13 00:09:34.183151 kernel: raid6: avx2x1 gen() 26683 MB/s Sep 13 00:09:34.183178 kernel: raid6: using algorithm avx2x2 gen() 31126 MB/s Sep 13 00:09:34.201246 kernel: raid6: .... xor() 32585 MB/s, rmw enabled Sep 13 00:09:34.201271 kernel: raid6: using avx2x2 recovery algorithm Sep 13 00:09:34.218072 kernel: xor: automatically using best checksumming function avx Sep 13 00:09:34.322079 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 13 00:09:34.329642 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:09:34.335228 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Sep 13 00:09:34.343781 systemd-udevd[404]: Using default interface naming scheme 'v255'. Sep 13 00:09:34.346538 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:09:34.355196 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 13 00:09:34.363293 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation Sep 13 00:09:34.382209 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:09:34.387189 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:09:34.420803 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:09:34.427192 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 13 00:09:34.438258 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 13 00:09:34.439225 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:09:34.439787 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:09:34.442218 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:09:34.450204 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 13 00:09:34.459022 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:09:34.484384 kernel: scsi host0: Virtio SCSI HBA Sep 13 00:09:34.493890 kernel: ACPI: bus type USB registered Sep 13 00:09:34.493903 kernel: usbcore: registered new interface driver usbfs Sep 13 00:09:34.493911 kernel: usbcore: registered new interface driver hub Sep 13 00:09:34.497066 kernel: usbcore: registered new device driver usb Sep 13 00:09:34.500070 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Sep 13 00:09:34.508069 kernel: libata version 3.00 loaded. Sep 13 00:09:34.514070 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:09:34.541332 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 13 00:09:34.541526 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Sep 13 00:09:34.543716 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Sep 13 00:09:34.547013 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 13 00:09:34.547166 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Sep 13 00:09:34.566162 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Sep 13 00:09:34.567478 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:09:34.596638 kernel: hub 1-0:1.0: USB hub found Sep 13 00:09:34.596796 kernel: hub 1-0:1.0: 4 ports detected Sep 13 00:09:34.596911 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Sep 13 00:09:34.597043 kernel: hub 2-0:1.0: USB hub found Sep 13 00:09:34.597170 kernel: hub 2-0:1.0: 4 ports detected Sep 13 00:09:34.597264 kernel: AVX2 version of gcm_enc/dec engaged. 
Sep 13 00:09:34.597280 kernel: AES CTR mode by8 optimization enabled Sep 13 00:09:34.597294 kernel: ahci 0000:00:1f.2: version 3.0 Sep 13 00:09:34.597427 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 13 00:09:34.597442 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 13 00:09:34.597570 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 13 00:09:34.598464 kernel: scsi host1: ahci Sep 13 00:09:34.598560 kernel: scsi host2: ahci Sep 13 00:09:34.598642 kernel: scsi host3: ahci Sep 13 00:09:34.598718 kernel: scsi host4: ahci Sep 13 00:09:34.598792 kernel: scsi host5: ahci Sep 13 00:09:34.598885 kernel: scsi host6: ahci Sep 13 00:09:34.598964 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 49 Sep 13 00:09:34.598976 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 49 Sep 13 00:09:34.598985 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 49 Sep 13 00:09:34.598992 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 49 Sep 13 00:09:34.598999 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 49 Sep 13 00:09:34.599006 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 49 Sep 13 00:09:34.567586 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:09:34.596504 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:09:34.598234 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:09:34.598391 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:09:34.599275 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:09:34.610234 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:09:34.651643 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:09:34.655181 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:09:34.664598 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 13 00:09:34.810081 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Sep 13 00:09:34.898310 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 13 00:09:34.898403 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 13 00:09:34.898429 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 13 00:09:34.898447 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 13 00:09:34.898481 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 13 00:09:34.899736 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 13 00:09:34.902339 kernel: ata1.00: applying bridge limits Sep 13 00:09:34.903735 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 13 00:09:34.904199 kernel: ata1.00: configured for UDMA/100 Sep 13 00:09:34.908075 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 13 00:09:34.933011 kernel: sd 0:0:0:0: Power-on or device reset occurred Sep 13 00:09:34.936295 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Sep 13 00:09:34.936455 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 13 00:09:34.936602 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Sep 13 00:09:34.936750 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 13 00:09:34.949022 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:09:34.949068 kernel: GPT:17805311 != 80003071 Sep 13 00:09:34.949080 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:09:34.949090 kernel: GPT:17805311 != 80003071 Sep 13 00:09:34.949098 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:09:34.949113 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:09:34.949122 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 13 00:09:34.950518 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 13 00:09:34.964880 kernel: usbcore: registered new interface driver usbhid Sep 13 00:09:34.964908 kernel: usbhid: USB HID core driver Sep 13 00:09:34.968318 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 13 00:09:34.968470 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 13 00:09:34.970068 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Sep 13 00:09:34.977875 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Sep 13 00:09:34.979071 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Sep 13 00:09:34.990115 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (460) Sep 13 00:09:34.997068 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (468) Sep 13 00:09:34.999240 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Sep 13 00:09:35.003956 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Sep 13 00:09:35.009521 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 13 00:09:35.016822 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Sep 13 00:09:35.017607 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Sep 13 00:09:35.024152 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Sep 13 00:09:35.032069 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:09:35.033545 disk-uuid[579]: Primary Header is updated. Sep 13 00:09:35.033545 disk-uuid[579]: Secondary Entries is updated. Sep 13 00:09:35.033545 disk-uuid[579]: Secondary Header is updated. Sep 13 00:09:36.044109 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:09:36.044704 disk-uuid[581]: The operation has completed successfully. Sep 13 00:09:36.092187 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:09:36.092277 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 13 00:09:36.111167 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 13 00:09:36.113657 sh[601]: Success Sep 13 00:09:36.125067 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 13 00:09:36.161864 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 13 00:09:36.163464 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 13 00:09:36.168123 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 13 00:09:36.181602 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa Sep 13 00:09:36.181634 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:09:36.181646 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 13 00:09:36.184482 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 13 00:09:36.184500 kernel: BTRFS info (device dm-0): using free space tree Sep 13 00:09:36.194096 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 13 00:09:36.195479 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 13 00:09:36.196443 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 13 00:09:36.202188 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 13 00:09:36.205186 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 13 00:09:36.218642 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:09:36.218689 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:09:36.218715 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:09:36.226198 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 00:09:36.226231 kernel: BTRFS info (device sda6): auto enabling async discard Sep 13 00:09:36.235598 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:09:36.238630 kernel: BTRFS info (device sda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:09:36.242907 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 13 00:09:36.250408 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 13 00:09:36.281867 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:09:36.292192 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:09:36.327398 ignition[725]: Ignition 2.19.0 Sep 13 00:09:36.327410 ignition[725]: Stage: fetch-offline Sep 13 00:09:36.327448 ignition[725]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:09:36.332256 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
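verity-setup.service activates /dev/mapper/usr with a sha256 hash tree, using the sha256-ni implementation noted above. As a toy illustration of the per-block check dm-verity performs on every read (the salt value and the salt-before-data ordering here are assumptions of this sketch; the real parameters live in the verity superblock and on the kernel command line):

```python
import hashlib

BLOCK = 4096                        # dm-verity's usual data block size
salt = bytes.fromhex("00" * 32)     # hypothetical salt, for illustration only

def leaf_hash(block: bytes) -> bytes:
    """Hash one data block the way a verity leaf node records it."""
    assert len(block) == BLOCK
    return hashlib.sha256(salt + block).digest()

# A read only succeeds if the recomputed digest matches the stored hash tree;
# otherwise the block is reported as corrupted instead of being returned.
print(leaf_hash(b"\x00" * BLOCK).hex())
```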
Sep 13 00:09:36.327458 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:09:36.327564 ignition[725]: parsed url from cmdline: "" Sep 13 00:09:36.327568 ignition[725]: no config URL provided Sep 13 00:09:36.327575 ignition[725]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:09:36.327584 ignition[725]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:09:36.327592 ignition[725]: failed to fetch config: resource requires networking Sep 13 00:09:36.327821 ignition[725]: Ignition finished successfully Sep 13 00:09:36.337604 systemd-networkd[782]: lo: Link UP Sep 13 00:09:36.337617 systemd-networkd[782]: lo: Gained carrier Sep 13 00:09:36.339949 systemd-networkd[782]: Enumeration completed Sep 13 00:09:36.340030 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:09:36.340868 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:09:36.340873 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:09:36.340957 systemd[1]: Reached target network.target - Network. Sep 13 00:09:36.342131 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:09:36.342136 systemd-networkd[782]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:09:36.342752 systemd-networkd[782]: eth0: Link UP Sep 13 00:09:36.342756 systemd-networkd[782]: eth0: Gained carrier Sep 13 00:09:36.342765 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:09:36.349187 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 13 00:09:36.350541 systemd-networkd[782]: eth1: Link UP Sep 13 00:09:36.350952 systemd-networkd[782]: eth1: Gained carrier Sep 13 00:09:36.350966 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
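The "Link UP" / "Gained carrier" transitions that systemd-networkd reports track the same per-interface state the kernel exposes under /sys/class/net; a small sketch for inspecting it by hand:

```python
from pathlib import Path

# Print operstate and carrier for every network interface, mirroring the
# signals behind networkd's Link UP / Gained carrier messages.
for dev in sorted(Path("/sys/class/net").iterdir()):
    oper = (dev / "operstate").read_text().strip()
    try:
        carrier = (dev / "carrier").read_text().strip()
    except OSError:                 # reading carrier fails while the link is down
        carrier = "n/a"
    print(f"{dev.name}: operstate={oper} carrier={carrier}")
```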
Sep 13 00:09:36.364312 ignition[790]: Ignition 2.19.0 Sep 13 00:09:36.364327 ignition[790]: Stage: fetch Sep 13 00:09:36.364528 ignition[790]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:09:36.364540 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:09:36.364634 ignition[790]: parsed url from cmdline: "" Sep 13 00:09:36.364638 ignition[790]: no config URL provided Sep 13 00:09:36.364644 ignition[790]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:09:36.364651 ignition[790]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:09:36.364671 ignition[790]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Sep 13 00:09:36.364830 ignition[790]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Sep 13 00:09:36.389094 systemd-networkd[782]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 13 00:09:36.423114 systemd-networkd[782]: eth0: DHCPv4 address 65.21.60.153/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 13 00:09:36.565926 ignition[790]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Sep 13 00:09:36.570597 ignition[790]: GET result: OK Sep 13 00:09:36.570701 ignition[790]: parsing config with SHA512: 49819502b1767e2197d8b2c82ce7df507b9e8516f690511356e59ea40ebde2960336c6a6165cfd5507c16e22ab251f3a6ddca04b0f9bd088398afc7986c5d219 Sep 13 00:09:36.574918 unknown[790]: fetched base config from "system" Sep 13 00:09:36.574933 unknown[790]: fetched base config from "system" Sep 13 00:09:36.575539 ignition[790]: fetch: fetch complete Sep 13 00:09:36.574943 unknown[790]: fetched user config from "hetzner" Sep 13 00:09:36.575546 ignition[790]: fetch: fetch passed Sep 13 00:09:36.577676 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 13 00:09:36.575598 ignition[790]: Ignition finished successfully Sep 13 00:09:36.584218 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 13 00:09:36.598542 ignition[797]: Ignition 2.19.0 Sep 13 00:09:36.598556 ignition[797]: Stage: kargs Sep 13 00:09:36.598785 ignition[797]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:09:36.598800 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:09:36.600257 ignition[797]: kargs: kargs passed Sep 13 00:09:36.600307 ignition[797]: Ignition finished successfully Sep 13 00:09:36.603307 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 13 00:09:36.610193 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 13 00:09:36.623199 ignition[803]: Ignition 2.19.0 Sep 13 00:09:36.623217 ignition[803]: Stage: disks Sep 13 00:09:36.623474 ignition[803]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:09:36.623490 ignition[803]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:09:36.625985 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 13 00:09:36.624981 ignition[803]: disks: disks passed Sep 13 00:09:36.631852 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 13 00:09:36.625035 ignition[803]: Ignition finished successfully Sep 13 00:09:36.634003 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 13 00:09:36.635400 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:09:36.636529 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:09:36.637883 systemd[1]: Reached target basic.target - Basic System. 
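The fetch stage above shows exactly the race it is built for: attempt #1 fails with "network is unreachable", DHCP then lands on both NICs, and attempt #2 succeeds, after which the config is identified by its SHA512. A rough Python sketch of that retry-then-hash flow (the retry count and delay here are arbitrary; Ignition's own backoff policy differs):

```python
import hashlib
import time
import urllib.error
import urllib.request

URL = "http://169.254.169.254/hetzner/v1/userdata"

def fetch_userdata(attempts: int = 5, delay: float = 1.0) -> bytes:
    """Retry the link-local metadata endpoint until networking is up."""
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(URL, timeout=5) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as err:
            print(f"GET {URL}: attempt #{attempt} failed: {err}")
            time.sleep(delay)
    raise RuntimeError("metadata service unreachable")

data = fetch_userdata()
print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())
```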
Sep 13 00:09:36.644279 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 13 00:09:36.658535 systemd-fsck[811]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Sep 13 00:09:36.661176 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 13 00:09:36.667190 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 13 00:09:36.739076 kernel: EXT4-fs (sda9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none. Sep 13 00:09:36.739484 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 13 00:09:36.740414 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 13 00:09:36.748120 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:09:36.750347 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 13 00:09:36.753189 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 13 00:09:36.756136 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:09:36.757412 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:09:36.773188 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (819) Sep 13 00:09:36.773216 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:09:36.773225 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:09:36.773233 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:09:36.773241 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 00:09:36.773249 kernel: BTRFS info (device sda6): auto enabling async discard Sep 13 00:09:36.763247 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 13 00:09:36.776246 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 13 00:09:36.782181 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 13 00:09:36.829924 coreos-metadata[821]: Sep 13 00:09:36.829 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Sep 13 00:09:36.832084 coreos-metadata[821]: Sep 13 00:09:36.831 INFO Fetch successful Sep 13 00:09:36.832084 coreos-metadata[821]: Sep 13 00:09:36.831 INFO wrote hostname ci-4081-3-5-n-662926fb9e to /sysroot/etc/hostname Sep 13 00:09:36.835322 initrd-setup-root[846]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:09:36.832260 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 13 00:09:36.840795 initrd-setup-root[854]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:09:36.844531 initrd-setup-root[861]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:09:36.847938 initrd-setup-root[868]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:09:36.922912 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 13 00:09:36.930194 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 13 00:09:36.933515 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
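Per its own log lines, flatcar-metadata-hostname does little more than fetch one metadata key and persist it into the target root. A minimal equivalent of that step:

```python
import urllib.request

# Fetch the hostname from the Hetzner metadata service (the URL logged by
# coreos-metadata above) and write it into the future root filesystem.
URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"

with urllib.request.urlopen(URL, timeout=5) as resp:
    hostname = resp.read().decode().strip()

with open("/sysroot/etc/hostname", "w") as f:
    f.write(hostname + "\n")
print("wrote hostname", hostname, "to /sysroot/etc/hostname")
```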
Sep 13 00:09:36.940086 kernel: BTRFS info (device sda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:09:36.961657 ignition[935]: INFO : Ignition 2.19.0 Sep 13 00:09:36.963601 ignition[935]: INFO : Stage: mount Sep 13 00:09:36.963601 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:09:36.963601 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:09:36.963750 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 13 00:09:36.966467 ignition[935]: INFO : mount: mount passed Sep 13 00:09:36.966467 ignition[935]: INFO : Ignition finished successfully Sep 13 00:09:36.967152 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 13 00:09:36.977180 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 13 00:09:37.179507 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 13 00:09:37.184207 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:09:37.198093 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (947) Sep 13 00:09:37.201258 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:09:37.201308 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:09:37.203972 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:09:37.208439 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 00:09:37.208514 kernel: BTRFS info (device sda6): auto enabling async discard Sep 13 00:09:37.212093 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 13 00:09:37.238189 ignition[963]: INFO : Ignition 2.19.0 Sep 13 00:09:37.239737 ignition[963]: INFO : Stage: files Sep 13 00:09:37.240602 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:09:37.242685 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:09:37.242685 ignition[963]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:09:37.244348 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:09:37.245301 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:09:37.248936 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:09:37.250177 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:09:37.251382 unknown[963]: wrote ssh authorized keys file for user: core Sep 13 00:09:37.252295 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:09:37.253480 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 13 00:09:37.254450 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 13 00:09:37.254450 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 13 00:09:37.254450 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 13 00:09:37.446313 systemd-networkd[782]: eth1: Gained IPv6LL Sep 13 00:09:37.635062 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 13 00:09:38.079103 
ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 13 00:09:38.079103 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:09:38.079103 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:09:38.079103 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:09:38.079103 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:09:38.079103 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:09:38.079103 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:09:38.079103 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:09:38.079103 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:09:38.079103 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:09:38.079103 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:09:38.079103 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:09:38.079103 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:09:38.079103 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:09:38.079103 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 13 00:09:38.278188 systemd-networkd[782]: eth0: Gained IPv6LL Sep 13 00:09:38.456549 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 13 00:09:38.620589 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:09:38.620589 ignition[963]: INFO : files: op(c): [started] processing unit "containerd.service" Sep 13 00:09:38.623405 ignition[963]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 13 00:09:38.623405 ignition[963]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 13 00:09:38.623405 ignition[963]: INFO : files: op(c): [finished] processing unit "containerd.service" Sep 13 00:09:38.623405 ignition[963]: INFO : files: op(e): [started] processing unit 
"prepare-helm.service" Sep 13 00:09:38.623405 ignition[963]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:09:38.623405 ignition[963]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:09:38.623405 ignition[963]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Sep 13 00:09:38.623405 ignition[963]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Sep 13 00:09:38.623405 ignition[963]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 13 00:09:38.623405 ignition[963]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 13 00:09:38.623405 ignition[963]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Sep 13 00:09:38.623405 ignition[963]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:09:38.623405 ignition[963]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:09:38.623405 ignition[963]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:09:38.623405 ignition[963]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:09:38.623405 ignition[963]: INFO : files: files passed Sep 13 00:09:38.623405 ignition[963]: INFO : Ignition finished successfully Sep 13 00:09:38.623597 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 13 00:09:38.632247 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 13 00:09:38.636011 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 13 00:09:38.637722 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:09:38.637791 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 13 00:09:38.646673 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:09:38.646673 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:09:38.648944 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:09:38.650419 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:09:38.651883 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 13 00:09:38.658299 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 13 00:09:38.672483 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:09:38.672583 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 13 00:09:38.673661 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 13 00:09:38.675415 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 13 00:09:38.675963 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. 
Sep 13 00:09:38.685249 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 13 00:09:38.696923 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:09:38.702247 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 13 00:09:38.709933 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:09:38.710733 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:09:38.711447 systemd[1]: Stopped target timers.target - Timer Units. Sep 13 00:09:38.714234 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:09:38.714341 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:09:38.716021 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 13 00:09:38.716693 systemd[1]: Stopped target basic.target - Basic System. Sep 13 00:09:38.717705 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 13 00:09:38.718706 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:09:38.719942 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 13 00:09:38.721259 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 13 00:09:38.722436 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:09:38.723659 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 13 00:09:38.724862 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 13 00:09:38.725979 systemd[1]: Stopped target swap.target - Swaps. Sep 13 00:09:38.726951 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:09:38.727035 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:09:38.728363 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:09:38.729116 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:09:38.730349 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 13 00:09:38.732209 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:09:38.733136 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:09:38.733221 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 13 00:09:38.734737 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:09:38.734847 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:09:38.735556 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:09:38.735632 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 13 00:09:38.736677 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 13 00:09:38.736753 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 13 00:09:38.746221 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 13 00:09:38.746671 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:09:38.746798 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:09:38.749221 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 13 00:09:38.749842 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Sep 13 00:09:38.749982 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:09:38.754023 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:09:38.754127 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:09:38.759806 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:09:38.772772 ignition[1017]: INFO : Ignition 2.19.0 Sep 13 00:09:38.772772 ignition[1017]: INFO : Stage: umount Sep 13 00:09:38.772772 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:09:38.772772 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:09:38.759905 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 13 00:09:38.777732 ignition[1017]: INFO : umount: umount passed Sep 13 00:09:38.777732 ignition[1017]: INFO : Ignition finished successfully Sep 13 00:09:38.773562 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:09:38.774160 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:09:38.774257 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 13 00:09:38.775974 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:09:38.776039 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 13 00:09:38.779141 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:09:38.779209 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 13 00:09:38.780154 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:09:38.780188 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 13 00:09:38.781042 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 13 00:09:38.781111 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 13 00:09:38.781981 systemd[1]: Stopped target network.target - Network. Sep 13 00:09:38.782881 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:09:38.782919 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:09:38.783849 systemd[1]: Stopped target paths.target - Path Units. Sep 13 00:09:38.784714 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:09:38.788199 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:09:38.788790 systemd[1]: Stopped target slices.target - Slice Units. Sep 13 00:09:38.789671 systemd[1]: Stopped target sockets.target - Socket Units. Sep 13 00:09:38.790705 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:09:38.790734 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:09:38.791842 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:09:38.791869 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:09:38.792716 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:09:38.792748 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 13 00:09:38.793644 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 13 00:09:38.793675 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 13 00:09:38.794544 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:09:38.794575 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Sep 13 00:09:38.795615 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 13 00:09:38.796618 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 13 00:09:38.803113 systemd-networkd[782]: eth1: DHCPv6 lease lost Sep 13 00:09:38.806702 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:09:38.807115 systemd-networkd[782]: eth0: DHCPv6 lease lost Sep 13 00:09:38.807772 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 13 00:09:38.808986 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:09:38.809100 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 13 00:09:38.810028 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:09:38.811270 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:09:38.816191 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 13 00:09:38.816833 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:09:38.816880 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:09:38.817394 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:09:38.817429 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:09:38.817871 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:09:38.817901 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 13 00:09:38.818843 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 13 00:09:38.818877 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:09:38.822151 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:09:38.830358 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:09:38.830453 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 13 00:09:38.833559 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:09:38.833699 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:09:38.834772 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:09:38.834804 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 13 00:09:38.835714 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:09:38.835740 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:09:38.836754 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:09:38.836792 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:09:38.838277 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:09:38.838312 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 13 00:09:38.839328 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:09:38.839376 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:09:38.848205 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 13 00:09:38.848743 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:09:38.848784 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 13 00:09:38.849298 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:09:38.849330 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:09:38.854791 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:09:38.854881 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 13 00:09:38.856271 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 13 00:09:38.862183 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 13 00:09:38.869125 systemd[1]: Switching root. Sep 13 00:09:38.918102 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). Sep 13 00:09:38.918177 systemd-journald[187]: Journal stopped Sep 13 00:09:39.741582 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:09:39.741638 kernel: SELinux: policy capability open_perms=1 Sep 13 00:09:39.741647 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:09:39.741659 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:09:39.741666 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:09:39.741677 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:09:39.741684 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:09:39.741695 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:09:39.741703 kernel: audit: type=1403 audit(1757722179.097:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 00:09:39.741713 systemd[1]: Successfully loaded SELinux policy in 39.446ms. Sep 13 00:09:39.741724 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.297ms. Sep 13 00:09:39.741733 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:09:39.741741 systemd[1]: Detected virtualization kvm. Sep 13 00:09:39.741752 systemd[1]: Detected architecture x86-64. Sep 13 00:09:39.741760 systemd[1]: Detected first boot. Sep 13 00:09:39.741769 systemd[1]: Hostname set to . Sep 13 00:09:39.741777 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:09:39.741785 zram_generator::config[1083]: No configuration found. Sep 13 00:09:39.741800 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:09:39.741808 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:09:39.741835 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 13 00:09:39.741844 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 13 00:09:39.741852 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 13 00:09:39.741861 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 13 00:09:39.741869 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 13 00:09:39.741877 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 13 00:09:39.741887 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 13 00:09:39.741895 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 13 00:09:39.741903 systemd[1]: Created slice user.slice - User and Session Slice. 
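"Initializing machine ID from VM UUID." refers to systemd seeding /etc/machine-id on first boot from the hypervisor-provided DMI product UUID. A sketch of that derivation (the real logic is systemd's machine-id-setup; the normalization shown is the gist, not the exact code):

```python
from pathlib import Path

# KVM exposes the VM's UUID via DMI; machine IDs are the same 128 bits
# rendered as 32 lowercase hex digits with the dashes removed.
raw = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
machine_id = raw.replace("-", "").lower()
print(machine_id)
```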
Sep 13 00:09:39.741911 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:09:39.741920 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:09:39.741927 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 13 00:09:39.741937 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 13 00:09:39.741945 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 13 00:09:39.741954 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:09:39.741964 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 13 00:09:39.741973 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:09:39.741981 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 13 00:09:39.741989 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:09:39.742000 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:09:39.742009 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:09:39.742018 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:09:39.742027 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 13 00:09:39.742035 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 13 00:09:39.742043 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 13 00:09:39.742103 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 13 00:09:39.742114 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:09:39.742123 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:09:39.742131 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:09:39.742139 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 13 00:09:39.742147 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 13 00:09:39.742159 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 13 00:09:39.742167 systemd[1]: Mounting media.mount - External Media Directory... Sep 13 00:09:39.742177 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:09:39.742185 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 13 00:09:39.742194 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 13 00:09:39.742206 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 13 00:09:39.742216 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 13 00:09:39.742225 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:09:39.742233 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:09:39.742242 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 13 00:09:39.742250 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:09:39.742258 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Sep 13 00:09:39.742266 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:09:39.742274 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 13 00:09:39.742283 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:09:39.742292 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:09:39.742300 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 13 00:09:39.742309 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Sep 13 00:09:39.742317 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:09:39.742325 kernel: loop: module loaded Sep 13 00:09:39.742333 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:09:39.742342 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 13 00:09:39.742351 kernel: ACPI: bus type drm_connector registered Sep 13 00:09:39.742361 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 13 00:09:39.742368 kernel: fuse: init (API version 7.39) Sep 13 00:09:39.743196 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:09:39.743217 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:09:39.743245 systemd-journald[1174]: Collecting audit messages is disabled. Sep 13 00:09:39.743275 systemd-journald[1174]: Journal started Sep 13 00:09:39.743293 systemd-journald[1174]: Runtime Journal (/run/log/journal/d61c6918472946fa923d04be4795a20a) is 4.8M, max 38.4M, 33.6M free. Sep 13 00:09:39.751088 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:09:39.748349 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 13 00:09:39.748913 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 13 00:09:39.749590 systemd[1]: Mounted media.mount - External Media Directory. Sep 13 00:09:39.750388 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 13 00:09:39.751208 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 13 00:09:39.756856 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 13 00:09:39.757876 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 13 00:09:39.758708 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:09:39.759559 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:09:39.759686 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 13 00:09:39.760443 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:09:39.760606 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:09:39.761433 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:09:39.761598 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:09:39.762464 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:09:39.762632 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
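Each modprobe@<module>.service instance finishing above is a thin template unit around a single modprobe call. Roughly equivalent to the following (the -abq flags mirror my reading of the upstream unit's ExecStart, so treat them as an assumption to check against the shipped unit file):

```python
import subprocess

# One modprobe invocation per template instance named in the log.
for module in ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]:
    subprocess.run(["modprobe", "-abq", module], check=False)
```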
Sep 13 00:09:39.763498 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:09:39.763668 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 13 00:09:39.764372 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:09:39.764551 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:09:39.765377 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:09:39.766109 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 13 00:09:39.766829 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 13 00:09:39.775786 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 13 00:09:39.780128 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 13 00:09:39.782468 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 13 00:09:39.783288 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:09:39.792162 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 13 00:09:39.796808 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 13 00:09:39.799315 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:09:39.805155 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 13 00:09:39.809370 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:09:39.814135 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:09:39.817170 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 00:09:39.819709 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 13 00:09:39.822184 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 13 00:09:39.827198 systemd-journald[1174]: Time spent on flushing to /var/log/journal/d61c6918472946fa923d04be4795a20a is 31.707ms for 1116 entries. Sep 13 00:09:39.827198 systemd-journald[1174]: System Journal (/var/log/journal/d61c6918472946fa923d04be4795a20a) is 8.0M, max 584.8M, 576.8M free. Sep 13 00:09:39.867141 systemd-journald[1174]: Received client request to flush runtime journal. Sep 13 00:09:39.830116 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 13 00:09:39.831031 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 13 00:09:39.846406 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:09:39.855778 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 13 00:09:39.858726 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:09:39.867660 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Sep 13 00:09:39.867670 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Sep 13 00:09:39.869381 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 13 00:09:39.872612 udevadm[1231]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 13 00:09:39.874743 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:09:39.887177 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 13 00:09:39.907508 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 13 00:09:39.914221 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:09:39.924664 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Sep 13 00:09:39.924682 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Sep 13 00:09:39.927667 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:09:40.273430 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 13 00:09:40.280183 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:09:40.296940 systemd-udevd[1251]: Using default interface naming scheme 'v255'. Sep 13 00:09:40.316897 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:09:40.324948 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:09:40.345165 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 13 00:09:40.379481 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Sep 13 00:09:40.387575 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 13 00:09:40.455620 systemd-networkd[1257]: lo: Link UP Sep 13 00:09:40.455875 systemd-networkd[1257]: lo: Gained carrier Sep 13 00:09:40.458376 systemd-networkd[1257]: Enumeration completed Sep 13 00:09:40.458501 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:09:40.465282 systemd-networkd[1257]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:09:40.465332 systemd-networkd[1257]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:09:40.466311 systemd-networkd[1257]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:09:40.466441 systemd-networkd[1257]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:09:40.467465 systemd-networkd[1257]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:09:40.467488 systemd-networkd[1257]: eth0: Link UP Sep 13 00:09:40.467491 systemd-networkd[1257]: eth0: Gained carrier Sep 13 00:09:40.467498 systemd-networkd[1257]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:09:40.471081 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1258) Sep 13 00:09:40.472311 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 13 00:09:40.473790 systemd-networkd[1257]: eth1: Link UP Sep 13 00:09:40.473949 systemd-networkd[1257]: eth1: Gained carrier Sep 13 00:09:40.474029 systemd-networkd[1257]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 13 00:09:40.485699 systemd-networkd[1257]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:09:40.509209 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 13 00:09:40.512462 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 13 00:09:40.513126 systemd-networkd[1257]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 13 00:09:40.515101 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:09:40.519082 kernel: ACPI: button: Power Button [PWRF] Sep 13 00:09:40.526649 systemd[1]: Condition check resulted in dev-vport2p1.device - /dev/vport2p1 being skipped. Sep 13 00:09:40.526672 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Sep 13 00:09:40.526711 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:09:40.527324 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:09:40.532175 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:09:40.533421 systemd-networkd[1257]: eth0: DHCPv4 address 65.21.60.153/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 13 00:09:40.534171 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:09:40.543185 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:09:40.546779 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:09:40.546839 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:09:40.546875 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:09:40.547160 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:09:40.547312 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:09:40.559365 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:09:40.559507 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:09:40.560165 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:09:40.560795 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:09:40.560939 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:09:40.561565 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
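Both leases above are host routes: a /32 address with a gateway (172.31.1.1) that lies outside any local prefix, which only works if the gateway is treated as directly reachable on the link. networkd derives that from the DHCP reply itself; a manual equivalent, purely for illustration:

```python
import subprocess

# Configure eth0 the way the logged lease describes: a /32 address plus an
# "onlink" default route, since 172.31.1.1 falls outside the /32.
subprocess.run(["ip", "addr", "add", "65.21.60.153/32", "dev", "eth0"],
               check=True)
subprocess.run(["ip", "route", "add", "default", "via", "172.31.1.1",
                "dev", "eth0", "onlink"], check=True)
```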
Sep 13 00:09:40.576078 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Sep 13 00:09:40.580671 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 13 00:09:40.580870 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 13 00:09:40.580981 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 13 00:09:40.593085 kernel: EDAC MC: Ver: 3.0.0 Sep 13 00:09:40.606566 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:09:40.610792 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Sep 13 00:09:40.610885 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Sep 13 00:09:40.616678 kernel: Console: switching to colour dummy device 80x25 Sep 13 00:09:40.616708 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 13 00:09:40.616719 kernel: [drm] features: -context_init Sep 13 00:09:40.616728 kernel: [drm] number of scanouts: 1 Sep 13 00:09:40.616736 kernel: [drm] number of cap sets: 0 Sep 13 00:09:40.621067 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Sep 13 00:09:40.623158 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Sep 13 00:09:40.625241 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:09:40.625434 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:09:40.634092 kernel: Console: switching to colour frame buffer device 160x50 Sep 13 00:09:40.642091 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 13 00:09:40.649232 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:09:40.659540 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:09:40.659858 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:09:40.667262 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:09:40.712258 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:09:40.765978 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 13 00:09:40.773285 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 13 00:09:40.785962 lvm[1323]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:09:40.810697 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 13 00:09:40.810951 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:09:40.816219 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 13 00:09:40.821305 lvm[1326]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:09:40.850752 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 13 00:09:40.851019 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 13 00:09:40.851135 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:09:40.851156 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:09:40.851251 systemd[1]: Reached target machines.target - Containers. 
Sep 13 00:09:40.852942 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 13 00:09:40.858274 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 13 00:09:40.860121 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 13 00:09:40.862222 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:09:40.865134 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 13 00:09:40.868219 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 13 00:09:40.872344 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 13 00:09:40.875204 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 13 00:09:40.891121 kernel: loop0: detected capacity change from 0 to 8
Sep 13 00:09:40.901447 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 13 00:09:40.903126 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 00:09:40.914659 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 00:09:40.918649 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 13 00:09:40.931214 kernel: loop1: detected capacity change from 0 to 140768
Sep 13 00:09:40.971077 kernel: loop2: detected capacity change from 0 to 221472
Sep 13 00:09:41.011099 kernel: loop3: detected capacity change from 0 to 142488
Sep 13 00:09:41.063124 kernel: loop4: detected capacity change from 0 to 8
Sep 13 00:09:41.067239 kernel: loop5: detected capacity change from 0 to 140768
Sep 13 00:09:41.088111 kernel: loop6: detected capacity change from 0 to 221472
Sep 13 00:09:41.113469 kernel: loop7: detected capacity change from 0 to 142488
Sep 13 00:09:41.129469 (sd-merge)[1347]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Sep 13 00:09:41.129931 (sd-merge)[1347]: Merged extensions into '/usr'.
Sep 13 00:09:41.143385 systemd[1]: Reloading requested from client PID 1334 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 13 00:09:41.143406 systemd[1]: Reloading...
Sep 13 00:09:41.214086 zram_generator::config[1378]: No configuration found.
Sep 13 00:09:41.289074 ldconfig[1330]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 00:09:41.309681 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:09:41.365993 systemd[1]: Reloading finished in 222 ms.
Sep 13 00:09:41.380887 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 13 00:09:41.385678 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 13 00:09:41.402180 systemd[1]: Starting ensure-sysext.service...
Sep 13 00:09:41.404160 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:09:41.418167 systemd[1]: Reloading requested from client PID 1425 ('systemctl') (unit ensure-sysext.service)...
Sep 13 00:09:41.418436 systemd[1]: Reloading...
Sep 13 00:09:41.432477 systemd-tmpfiles[1426]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:09:41.432779 systemd-tmpfiles[1426]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 13 00:09:41.433563 systemd-tmpfiles[1426]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 00:09:41.433805 systemd-tmpfiles[1426]: ACLs are not supported, ignoring.
Sep 13 00:09:41.433883 systemd-tmpfiles[1426]: ACLs are not supported, ignoring.
Sep 13 00:09:41.437477 systemd-tmpfiles[1426]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:09:41.437490 systemd-tmpfiles[1426]: Skipping /boot
Sep 13 00:09:41.446104 systemd-tmpfiles[1426]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:09:41.446195 systemd-tmpfiles[1426]: Skipping /boot
Sep 13 00:09:41.474520 zram_generator::config[1451]: No configuration found.
Sep 13 00:09:41.587690 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:09:41.652608 systemd[1]: Reloading finished in 233 ms.
Sep 13 00:09:41.670888 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:09:41.685615 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 13 00:09:41.697158 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 13 00:09:41.700268 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 13 00:09:41.703945 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:09:41.709565 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 13 00:09:41.725867 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:09:41.726389 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:09:41.731244 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:09:41.738304 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:09:41.748072 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:09:41.748606 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:09:41.748696 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:09:41.755766 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:09:41.756305 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:09:41.764529 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:09:41.765257 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:09:41.768120 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:09:41.768267 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:09:41.775513 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:09:41.775679 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:09:41.780273 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:09:41.787103 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:09:41.794245 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:09:41.794724 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:09:41.794834 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:09:41.796754 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 13 00:09:41.808028 augenrules[1542]: No rules
Sep 13 00:09:41.813269 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 13 00:09:41.814216 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 13 00:09:41.814898 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:09:41.815022 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:09:41.823578 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:09:41.823720 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:09:41.826836 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:09:41.826962 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:09:41.834943 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:09:41.835160 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:09:41.846623 systemd-resolved[1510]: Positive Trust Anchors:
Sep 13 00:09:41.846775 systemd-resolved[1510]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:09:41.846802 systemd-resolved[1510]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:09:41.849778 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:09:41.851611 systemd-resolved[1510]: Using system hostname 'ci-4081-3-5-n-662926fb9e'.
Sep 13 00:09:41.853786 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:09:41.858414 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:09:41.866210 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:09:41.867130 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:09:41.877628 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 13 00:09:41.882019 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:09:41.882772 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:09:41.886305 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:09:41.886484 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:09:41.889360 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:09:41.889606 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:09:41.890914 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:09:41.892144 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:09:41.894161 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:09:41.894352 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:09:41.896763 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 13 00:09:41.902849 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:09:41.911727 systemd[1]: Reached target network.target - Network.
Sep 13 00:09:41.914397 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:09:41.915145 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:09:41.915211 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:09:41.922226 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 13 00:09:41.932608 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 13 00:09:41.934874 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:09:41.971932 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 13 00:09:41.973672 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:09:41.974194 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 13 00:09:41.974820 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 13 00:09:41.975477 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 13 00:09:41.975942 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:09:41.976019 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:09:41.976487 systemd[1]: Reached target time-set.target - System Time Set.
Sep 13 00:09:41.977243 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 13 00:09:41.977777 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 13 00:09:41.979014 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:09:41.984403 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 13 00:09:41.987096 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 13 00:09:41.994556 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 13 00:09:41.997769 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 13 00:09:41.998931 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:09:41.999336 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:09:42.000016 systemd[1]: System is tainted: cgroupsv1
Sep 13 00:09:42.000140 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:09:42.000170 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:09:42.005100 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 13 00:09:42.011281 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 13 00:09:42.014200 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 13 00:09:42.021139 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 13 00:09:42.025440 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 13 00:09:42.025878 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 13 00:09:42.033937 jq[1587]: false
Sep 13 00:09:42.035280 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 13 00:09:42.040023 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 13 00:09:42.047514 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Sep 13 00:09:42.060447 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 13 00:09:42.067761 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 13 00:09:42.082179 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 13 00:09:42.083164 coreos-metadata[1583]: Sep 13 00:09:42.082 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Sep 13 00:09:42.084939 extend-filesystems[1588]: Found loop4
Sep 13 00:09:42.084939 extend-filesystems[1588]: Found loop5
Sep 13 00:09:42.084939 extend-filesystems[1588]: Found loop6
Sep 13 00:09:42.084939 extend-filesystems[1588]: Found loop7
Sep 13 00:09:42.084939 extend-filesystems[1588]: Found sda
Sep 13 00:09:42.084939 extend-filesystems[1588]: Found sda1
Sep 13 00:09:42.084939 extend-filesystems[1588]: Found sda2
Sep 13 00:09:42.084939 extend-filesystems[1588]: Found sda3
Sep 13 00:09:42.084939 extend-filesystems[1588]: Found usr
Sep 13 00:09:42.084939 extend-filesystems[1588]: Found sda4
Sep 13 00:09:42.084939 extend-filesystems[1588]: Found sda6
Sep 13 00:09:42.084939 extend-filesystems[1588]: Found sda7
Sep 13 00:09:42.084939 extend-filesystems[1588]: Found sda9
Sep 13 00:09:42.115118 extend-filesystems[1588]: Checking size of /dev/sda9
Sep 13 00:09:42.088308 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 00:09:42.115724 coreos-metadata[1583]: Sep 13 00:09:42.085 INFO Fetch successful
Sep 13 00:09:42.115724 coreos-metadata[1583]: Sep 13 00:09:42.085 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Sep 13 00:09:42.115724 coreos-metadata[1583]: Sep 13 00:09:42.086 INFO Fetch successful
Sep 13 00:09:42.097939 systemd[1]: Starting update-engine.service - Update Engine...
Sep 13 00:09:42.115662 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 13 00:09:42.126214 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 00:09:42.126451 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 13 00:09:42.126650 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 00:09:42.126800 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 13 00:09:42.131264 jq[1613]: true
Sep 13 00:09:42.130988 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 00:09:42.133708 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 13 00:09:42.161385 extend-filesystems[1588]: Resized partition /dev/sda9
Sep 13 00:09:42.163956 update_engine[1608]: I20250913 00:09:42.162326 1608 main.cc:92] Flatcar Update Engine starting
Sep 13 00:09:42.172272 extend-filesystems[1626]: resize2fs 1.47.1 (20-May-2024)
Sep 13 00:09:42.197168 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Sep 13 00:09:42.172814 systemd-logind[1604]: New seat seat0.
Sep 13 00:09:42.197429 update_engine[1608]: I20250913 00:09:42.183594 1608 update_check_scheduler.cc:74] Next update check in 5m31s
Sep 13 00:09:42.173399 dbus-daemon[1584]: [system] SELinux support is enabled
Sep 13 00:09:42.197624 jq[1619]: true
Sep 13 00:09:42.174109 (ntainerd)[1620]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 13 00:09:42.174453 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 13 00:09:42.201482 systemd-logind[1604]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 13 00:09:42.201496 systemd-logind[1604]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 13 00:09:42.205123 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 13 00:09:42.232849 dbus-daemon[1584]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 13 00:09:42.233190 systemd[1]: Started update-engine.service - Update Engine.
Sep 13 00:09:42.241149 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 00:09:42.241282 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 13 00:09:42.241699 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 00:09:42.241787 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 13 00:09:42.247690 tar[1617]: linux-amd64/helm
Sep 13 00:09:42.246863 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 00:09:42.249530 systemd-networkd[1257]: eth1: Gained IPv6LL
Sep 13 00:09:42.254753 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 13 00:09:42.280468 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 13 00:09:42.283886 systemd[1]: Reached target network-online.target - Network is Online.
Sep 13 00:09:42.748380 systemd-resolved[1510]: Clock change detected. Flushing caches.
Sep 13 00:09:42.752629 systemd-timesyncd[1575]: Contacted time server 57.129.38.82:123 (0.flatcar.pool.ntp.org).
Sep 13 00:09:42.752665 systemd-timesyncd[1575]: Initial clock synchronization to Sat 2025-09-13 00:09:42.748350 UTC.
Sep 13 00:09:42.755441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:09:42.760639 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 13 00:09:42.766416 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 13 00:09:42.781020 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 13 00:09:42.790329 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1261)
Sep 13 00:09:42.834740 systemd-networkd[1257]: eth0: Gained IPv6LL
Sep 13 00:09:42.854566 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 13 00:09:42.890181 bash[1673]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:09:42.896512 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 13 00:09:42.906042 systemd[1]: Starting sshkeys.service...
Sep 13 00:09:42.927852 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 13 00:09:42.936680 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 13 00:09:42.970511 coreos-metadata[1693]: Sep 13 00:09:42.969 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Sep 13 00:09:42.971222 coreos-metadata[1693]: Sep 13 00:09:42.970 INFO Fetch successful
Sep 13 00:09:42.974149 locksmithd[1646]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 00:09:42.977837 unknown[1693]: wrote ssh authorized keys file for user: core
Sep 13 00:09:42.985690 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Sep 13 00:09:43.013232 containerd[1620]: time="2025-09-13T00:09:43.011592985Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 13 00:09:43.013447 extend-filesystems[1626]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Sep 13 00:09:43.013447 extend-filesystems[1626]: old_desc_blocks = 1, new_desc_blocks = 5
Sep 13 00:09:43.013447 extend-filesystems[1626]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Sep 13 00:09:43.027555 extend-filesystems[1588]: Resized filesystem in /dev/sda9
Sep 13 00:09:43.027555 extend-filesystems[1588]: Found sr0
Sep 13 00:09:43.022378 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 00:09:43.022564 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 13 00:09:43.047658 update-ssh-keys[1697]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:09:43.050465 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 13 00:09:43.057974 systemd[1]: Finished sshkeys.service.
Sep 13 00:09:43.064965 containerd[1620]: time="2025-09-13T00:09:43.064936768Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:09:43.066302 containerd[1620]: time="2025-09-13T00:09:43.066278934Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:09:43.066392 containerd[1620]: time="2025-09-13T00:09:43.066378942Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 13 00:09:43.066438 containerd[1620]: time="2025-09-13T00:09:43.066428415Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 13 00:09:43.066628 containerd[1620]: time="2025-09-13T00:09:43.066612179Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 13 00:09:43.066684 containerd[1620]: time="2025-09-13T00:09:43.066675027Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 13 00:09:43.066812 containerd[1620]: time="2025-09-13T00:09:43.066797226Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:09:43.066857 containerd[1620]: time="2025-09-13T00:09:43.066847971Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:09:43.067066 containerd[1620]: time="2025-09-13T00:09:43.067049599Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:09:43.067137 containerd[1620]: time="2025-09-13T00:09:43.067126453Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 13 00:09:43.067184 containerd[1620]: time="2025-09-13T00:09:43.067173391Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:09:43.067222 containerd[1620]: time="2025-09-13T00:09:43.067213496Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 13 00:09:43.067347 containerd[1620]: time="2025-09-13T00:09:43.067333682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:09:43.067571 containerd[1620]: time="2025-09-13T00:09:43.067556660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:09:43.067727 containerd[1620]: time="2025-09-13T00:09:43.067712141Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:09:43.067768 containerd[1620]: time="2025-09-13T00:09:43.067759409Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 13 00:09:43.067867 containerd[1620]: time="2025-09-13T00:09:43.067854658Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 13 00:09:43.067959 containerd[1620]: time="2025-09-13T00:09:43.067946841Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 00:09:43.079219 containerd[1620]: time="2025-09-13T00:09:43.079178507Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 13 00:09:43.079290 containerd[1620]: time="2025-09-13T00:09:43.079244912Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 13 00:09:43.079290 containerd[1620]: time="2025-09-13T00:09:43.079261202Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 13 00:09:43.079290 containerd[1620]: time="2025-09-13T00:09:43.079283915Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 13 00:09:43.079382 containerd[1620]: time="2025-09-13T00:09:43.079295627Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 13 00:09:43.080855 containerd[1620]: time="2025-09-13T00:09:43.080830704Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 13 00:09:43.081122 containerd[1620]: time="2025-09-13T00:09:43.081102133Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 13 00:09:43.081218 containerd[1620]: time="2025-09-13T00:09:43.081198193Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 13 00:09:43.081243 containerd[1620]: time="2025-09-13T00:09:43.081219343Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 13 00:09:43.081243 containerd[1620]: time="2025-09-13T00:09:43.081231736Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 13 00:09:43.081274 containerd[1620]: time="2025-09-13T00:09:43.081242957Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 13 00:09:43.081274 containerd[1620]: time="2025-09-13T00:09:43.081255701Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 13 00:09:43.081274 containerd[1620]: time="2025-09-13T00:09:43.081268134Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 13 00:09:43.081324 containerd[1620]: time="2025-09-13T00:09:43.081279566Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 13 00:09:43.081324 containerd[1620]: time="2025-09-13T00:09:43.081291057Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 13 00:09:43.084110 containerd[1620]: time="2025-09-13T00:09:43.081301768Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 13 00:09:43.084143 containerd[1620]: time="2025-09-13T00:09:43.084123297Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 13 00:09:43.084269 containerd[1620]: time="2025-09-13T00:09:43.084150969Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 13 00:09:43.084269 containerd[1620]: time="2025-09-13T00:09:43.084188008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 13 00:09:43.084269 containerd[1620]: time="2025-09-13T00:09:43.084203858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 13 00:09:43.084269 containerd[1620]: time="2025-09-13T00:09:43.084214408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 13 00:09:43.084269 containerd[1620]: time="2025-09-13T00:09:43.084239886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 13 00:09:43.084269 containerd[1620]: time="2025-09-13T00:09:43.084258761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 13 00:09:43.084386 containerd[1620]: time="2025-09-13T00:09:43.084275242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 13 00:09:43.084386 containerd[1620]: time="2025-09-13T00:09:43.084285131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 13 00:09:43.084386 containerd[1620]: time="2025-09-13T00:09:43.084295049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 13 00:09:43.086416 containerd[1620]: time="2025-09-13T00:09:43.086193408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 13 00:09:43.086416 containerd[1620]: time="2025-09-13T00:09:43.086223464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 13 00:09:43.086416 containerd[1620]: time="2025-09-13T00:09:43.086243181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 13 00:09:43.086416 containerd[1620]: time="2025-09-13T00:09:43.086255524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 13 00:09:43.086416 containerd[1620]: time="2025-09-13T00:09:43.086268208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 13 00:09:43.086416 containerd[1620]: time="2025-09-13T00:09:43.086283227Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 13 00:09:43.086416 containerd[1620]: time="2025-09-13T00:09:43.086302743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 13 00:09:43.086416 containerd[1620]: time="2025-09-13T00:09:43.086330695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 13 00:09:43.086416 containerd[1620]: time="2025-09-13T00:09:43.086340002Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 13 00:09:43.086416 containerd[1620]: time="2025-09-13T00:09:43.086403903Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 13 00:09:43.086632 containerd[1620]: time="2025-09-13T00:09:43.086421325Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 13 00:09:43.086632 containerd[1620]: time="2025-09-13T00:09:43.086430583Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 13 00:09:43.086632 containerd[1620]: time="2025-09-13T00:09:43.086440010Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 13 00:09:43.086632 containerd[1620]: time="2025-09-13T00:09:43.086450530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 13 00:09:43.086632 containerd[1620]: time="2025-09-13T00:09:43.086461080Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 13 00:09:43.086632 containerd[1620]: time="2025-09-13T00:09:43.086474064Z" level=info msg="NRI interface is disabled by configuration."
Sep 13 00:09:43.086632 containerd[1620]: time="2025-09-13T00:09:43.086504822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 13 00:09:43.087223 containerd[1620]: time="2025-09-13T00:09:43.086761283Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 13 00:09:43.087223 containerd[1620]: time="2025-09-13T00:09:43.086816035Z" level=info msg="Connect containerd service"
Sep 13 00:09:43.087223 containerd[1620]: time="2025-09-13T00:09:43.086848867Z" level=info msg="using legacy CRI server"
Sep 13 00:09:43.087223 containerd[1620]: time="2025-09-13T00:09:43.086855529Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 13 00:09:43.087223 containerd[1620]: time="2025-09-13T00:09:43.086930740Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 13 00:09:43.087482 containerd[1620]: time="2025-09-13T00:09:43.087435356Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:09:43.094371 containerd[1620]: time="2025-09-13T00:09:43.087527128Z" level=info msg="Start subscribing containerd event"
Sep 13 00:09:43.094371 containerd[1620]: time="2025-09-13T00:09:43.087604603Z" level=info msg="Start recovering state"
Sep 13 00:09:43.094371 containerd[1620]: time="2025-09-13T00:09:43.087669936Z" level=info msg="Start event monitor"
Sep 13 00:09:43.094371 containerd[1620]: time="2025-09-13T00:09:43.087684193Z" level=info msg="Start snapshots syncer"
Sep 13 00:09:43.094371 containerd[1620]: time="2025-09-13T00:09:43.087692448Z" level=info msg="Start cni network conf syncer for default"
Sep 13 00:09:43.094371 containerd[1620]: time="2025-09-13T00:09:43.087698018Z" level=info msg="Start streaming server"
Sep 13 00:09:43.094371 containerd[1620]: time="2025-09-13T00:09:43.087775684Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 13 00:09:43.094371 containerd[1620]: time="2025-09-13T00:09:43.087833453Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 13 00:09:43.094371 containerd[1620]: time="2025-09-13T00:09:43.087892804Z" level=info msg="containerd successfully booted in 0.078617s"
Sep 13 00:09:43.087993 systemd[1]: Started containerd.service - containerd container runtime.
Sep 13 00:09:43.160662 sshd_keygen[1623]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 00:09:43.185857 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 13 00:09:43.194548 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 13 00:09:43.204423 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 00:09:43.204662 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 13 00:09:43.211679 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 13 00:09:43.230694 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 13 00:09:43.241619 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 13 00:09:43.253981 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 13 00:09:43.256777 systemd[1]: Reached target getty.target - Login Prompts.
Sep 13 00:09:43.389226 tar[1617]: linux-amd64/LICENSE
Sep 13 00:09:43.389226 tar[1617]: linux-amd64/README.md
Sep 13 00:09:43.398898 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 13 00:09:43.940510 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:09:43.947673 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 13 00:09:43.948442 (kubelet)[1743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:09:43.951249 systemd[1]: Startup finished in 6.595s (kernel) + 4.434s (userspace) = 11.030s.
Sep 13 00:09:44.516414 kubelet[1743]: E0913 00:09:44.516346 1743 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:09:44.518860 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:09:44.519143 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:09:52.141770 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 13 00:09:52.153695 systemd[1]: Started sshd@0-65.21.60.153:22-147.75.109.163:49772.service - OpenSSH per-connection server daemon (147.75.109.163:49772).
Sep 13 00:09:53.228412 sshd[1755]: Accepted publickey for core from 147.75.109.163 port 49772 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY
Sep 13 00:09:53.230015 sshd[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:09:53.237158 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 13 00:09:53.242679 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 13 00:09:53.245072 systemd-logind[1604]: New session 1 of user core.
Sep 13 00:09:53.256800 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 13 00:09:53.265588 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 13 00:09:53.268993 (systemd)[1761]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:09:53.364283 systemd[1761]: Queued start job for default target default.target.
Sep 13 00:09:53.364806 systemd[1761]: Created slice app.slice - User Application Slice.
Sep 13 00:09:53.364828 systemd[1761]: Reached target paths.target - Paths.
Sep 13 00:09:53.364838 systemd[1761]: Reached target timers.target - Timers.
Sep 13 00:09:53.374403 systemd[1761]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 13 00:09:53.379717 systemd[1761]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 13 00:09:53.379767 systemd[1761]: Reached target sockets.target - Sockets.
Sep 13 00:09:53.379781 systemd[1761]: Reached target basic.target - Basic System.
Sep 13 00:09:53.379813 systemd[1761]: Reached target default.target - Main User Target.
Sep 13 00:09:53.379837 systemd[1761]: Startup finished in 105ms.
Sep 13 00:09:53.379927 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 13 00:09:53.381980 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 13 00:09:54.109678 systemd[1]: Started sshd@1-65.21.60.153:22-147.75.109.163:49780.service - OpenSSH per-connection server daemon (147.75.109.163:49780).
Sep 13 00:09:54.537954 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:09:54.543676 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:09:54.641983 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:09:54.643427 (kubelet)[1787]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:09:54.672178 kubelet[1787]: E0913 00:09:54.672130 1787 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:09:54.674869 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:09:54.675030 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:09:55.074540 sshd[1773]: Accepted publickey for core from 147.75.109.163 port 49780 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY
Sep 13 00:09:55.075764 sshd[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:09:55.080145 systemd-logind[1604]: New session 2 of user core.
Sep 13 00:09:55.097553 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 13 00:09:55.748454 sshd[1773]: pam_unix(sshd:session): session closed for user core
Sep 13 00:09:55.751499 systemd-logind[1604]: Session 2 logged out. Waiting for processes to exit.
Sep 13 00:09:55.752530 systemd[1]: sshd@1-65.21.60.153:22-147.75.109.163:49780.service: Deactivated successfully.
Sep 13 00:09:55.754126 systemd[1]: session-2.scope: Deactivated successfully.
Sep 13 00:09:55.755397 systemd-logind[1604]: Removed session 2.
Sep 13 00:09:55.916671 systemd[1]: Started sshd@2-65.21.60.153:22-147.75.109.163:49792.service - OpenSSH per-connection server daemon (147.75.109.163:49792).
Sep 13 00:09:56.879739 sshd[1801]: Accepted publickey for core from 147.75.109.163 port 49792 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY
Sep 13 00:09:56.881207 sshd[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:09:56.886460 systemd-logind[1604]: New session 3 of user core.
Sep 13 00:09:56.892624 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 13 00:09:57.550214 sshd[1801]: pam_unix(sshd:session): session closed for user core
Sep 13 00:09:57.553199 systemd[1]: sshd@2-65.21.60.153:22-147.75.109.163:49792.service: Deactivated successfully.
Sep 13 00:09:57.555856 systemd[1]: session-3.scope: Deactivated successfully.
Sep 13 00:09:57.556355 systemd-logind[1604]: Session 3 logged out. Waiting for processes to exit.
Sep 13 00:09:57.557291 systemd-logind[1604]: Removed session 3.
Sep 13 00:09:57.746515 systemd[1]: Started sshd@3-65.21.60.153:22-147.75.109.163:49794.service - OpenSSH per-connection server daemon (147.75.109.163:49794).
Sep 13 00:09:58.818886 sshd[1809]: Accepted publickey for core from 147.75.109.163 port 49794 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY
Sep 13 00:09:58.820232 sshd[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:09:58.824683 systemd-logind[1604]: New session 4 of user core.
Sep 13 00:09:58.833560 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 13 00:09:59.562620 sshd[1809]: pam_unix(sshd:session): session closed for user core
Sep 13 00:09:59.566736 systemd[1]: sshd@3-65.21.60.153:22-147.75.109.163:49794.service: Deactivated successfully.
Sep 13 00:09:59.567972 systemd[1]: session-4.scope: Deactivated successfully.
Sep 13 00:09:59.568734 systemd-logind[1604]: Session 4 logged out. Waiting for processes to exit.
Sep 13 00:09:59.569913 systemd-logind[1604]: Removed session 4.
Sep 13 00:09:59.720545 systemd[1]: Started sshd@4-65.21.60.153:22-147.75.109.163:49804.service - OpenSSH per-connection server daemon (147.75.109.163:49804).
Sep 13 00:10:00.685996 sshd[1817]: Accepted publickey for core from 147.75.109.163 port 49804 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY
Sep 13 00:10:00.687640 sshd[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:10:00.692090 systemd-logind[1604]: New session 5 of user core.
Sep 13 00:10:00.702584 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 13 00:10:01.209778 sudo[1821]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 13 00:10:01.210078 sudo[1821]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 00:10:01.225565 sudo[1821]: pam_unix(sudo:session): session closed for user root
Sep 13 00:10:01.386431 sshd[1817]: pam_unix(sshd:session): session closed for user core
Sep 13 00:10:01.389330 systemd[1]: sshd@4-65.21.60.153:22-147.75.109.163:49804.service: Deactivated successfully.
Sep 13 00:10:01.392881 systemd[1]: session-5.scope: Deactivated successfully.
Sep 13 00:10:01.392882 systemd-logind[1604]: Session 5 logged out. Waiting for processes to exit.
Sep 13 00:10:01.394082 systemd-logind[1604]: Removed session 5.
Sep 13 00:10:01.548543 systemd[1]: Started sshd@5-65.21.60.153:22-147.75.109.163:47938.service - OpenSSH per-connection server daemon (147.75.109.163:47938).
Sep 13 00:10:02.517599 sshd[1826]: Accepted publickey for core from 147.75.109.163 port 47938 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY
Sep 13 00:10:02.519199 sshd[1826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:10:02.524160 systemd-logind[1604]: New session 6 of user core.
Sep 13 00:10:02.533554 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 13 00:10:03.035298 sudo[1831]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 13 00:10:03.035718 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 00:10:03.039266 sudo[1831]: pam_unix(sudo:session): session closed for user root
Sep 13 00:10:03.044005 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 13 00:10:03.044293 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 00:10:03.063609 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 13 00:10:03.065055 auditctl[1834]: No rules
Sep 13 00:10:03.065465 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 13 00:10:03.065732 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 13 00:10:03.072934 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 13 00:10:03.091414 augenrules[1853]: No rules
Sep 13 00:10:03.092672 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 13 00:10:03.094089 sudo[1830]: pam_unix(sudo:session): session closed for user root
Sep 13 00:10:03.251740 sshd[1826]: pam_unix(sshd:session): session closed for user core
Sep 13 00:10:03.256049 systemd[1]: sshd@5-65.21.60.153:22-147.75.109.163:47938.service: Deactivated successfully.
Sep 13 00:10:03.256400 systemd-logind[1604]: Session 6 logged out. Waiting for processes to exit.
Sep 13 00:10:03.258207 systemd[1]: session-6.scope: Deactivated successfully.
Sep 13 00:10:03.260531 systemd-logind[1604]: Removed session 6.
Sep 13 00:10:03.413552 systemd[1]: Started sshd@6-65.21.60.153:22-147.75.109.163:47940.service - OpenSSH per-connection server daemon (147.75.109.163:47940).
Sep 13 00:10:04.377942 sshd[1862]: Accepted publickey for core from 147.75.109.163 port 47940 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY
Sep 13 00:10:04.380548 sshd[1862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:10:04.385518 systemd-logind[1604]: New session 7 of user core.
Sep 13 00:10:04.400580 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 13 00:10:04.787831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 13 00:10:04.793446 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:10:04.894092 sudo[1876]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 13 00:10:04.894323 sudo[1876]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 00:10:04.895458 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:10:04.899497 (kubelet)[1878]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:10:04.937412 kubelet[1878]: E0913 00:10:04.937284 1878 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:10:04.939281 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:10:04.939570 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:10:05.200827 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 13 00:10:05.204881 (dockerd)[1902]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 13 00:10:05.648182 dockerd[1902]: time="2025-09-13T00:10:05.647905820Z" level=info msg="Starting up"
Sep 13 00:10:05.787087 systemd[1]: var-lib-docker-metacopy\x2dcheck1394374186-merged.mount: Deactivated successfully.
Sep 13 00:10:05.817115 dockerd[1902]: time="2025-09-13T00:10:05.817033733Z" level=info msg="Loading containers: start."
Sep 13 00:10:05.957434 kernel: Initializing XFRM netlink socket
Sep 13 00:10:06.065671 systemd-networkd[1257]: docker0: Link UP
Sep 13 00:10:06.088671 dockerd[1902]: time="2025-09-13T00:10:06.088618321Z" level=info msg="Loading containers: done."
Sep 13 00:10:06.101618 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck45994779-merged.mount: Deactivated successfully.
Sep 13 00:10:06.104108 dockerd[1902]: time="2025-09-13T00:10:06.104069247Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:10:06.104205 dockerd[1902]: time="2025-09-13T00:10:06.104161930Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 00:10:06.104294 dockerd[1902]: time="2025-09-13T00:10:06.104263531Z" level=info msg="Daemon has completed initialization" Sep 13 00:10:06.128411 dockerd[1902]: time="2025-09-13T00:10:06.128291636Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:10:06.128690 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 00:10:07.314795 containerd[1620]: time="2025-09-13T00:10:07.314736707Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 13 00:10:07.841693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1823300229.mount: Deactivated successfully. Sep 13 00:10:09.048458 containerd[1620]: time="2025-09-13T00:10:09.048403441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:09.049678 containerd[1620]: time="2025-09-13T00:10:09.049617818Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117224" Sep 13 00:10:09.050276 containerd[1620]: time="2025-09-13T00:10:09.050233232Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:09.052795 containerd[1620]: time="2025-09-13T00:10:09.052762413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:09.054063 containerd[1620]: time="2025-09-13T00:10:09.053894747Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 1.739113505s" Sep 13 00:10:09.054063 containerd[1620]: time="2025-09-13T00:10:09.053932326Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 13 00:10:09.055261 containerd[1620]: time="2025-09-13T00:10:09.055088384Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 13 00:10:10.193500 containerd[1620]: time="2025-09-13T00:10:10.193443702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:10.194730 containerd[1620]: time="2025-09-13T00:10:10.194501023Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716654" Sep 13 00:10:10.197359 containerd[1620]: time="2025-09-13T00:10:10.195501339Z" level=info msg="ImageCreate event 
name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:10.198450 containerd[1620]: time="2025-09-13T00:10:10.198425771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:10.199287 containerd[1620]: time="2025-09-13T00:10:10.199259494Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.144142447s" Sep 13 00:10:10.199355 containerd[1620]: time="2025-09-13T00:10:10.199289701Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 13 00:10:10.199731 containerd[1620]: time="2025-09-13T00:10:10.199703948Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 13 00:10:11.182941 containerd[1620]: time="2025-09-13T00:10:11.182887035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:11.183847 containerd[1620]: time="2025-09-13T00:10:11.183810145Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787720" Sep 13 00:10:11.185441 containerd[1620]: time="2025-09-13T00:10:11.184342824Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:11.187585 containerd[1620]: time="2025-09-13T00:10:11.186541887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:11.187585 containerd[1620]: time="2025-09-13T00:10:11.187471830Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 987.736754ms" Sep 13 00:10:11.187585 containerd[1620]: time="2025-09-13T00:10:11.187503950Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 13 00:10:11.188468 containerd[1620]: time="2025-09-13T00:10:11.188333585Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 13 00:10:12.123761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3117125622.mount: Deactivated successfully. 
Sep 13 00:10:12.425847 containerd[1620]: time="2025-09-13T00:10:12.425421917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:12.426946 containerd[1620]: time="2025-09-13T00:10:12.426908895Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410280" Sep 13 00:10:12.427414 containerd[1620]: time="2025-09-13T00:10:12.427382583Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:12.429094 containerd[1620]: time="2025-09-13T00:10:12.429049137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:12.429839 containerd[1620]: time="2025-09-13T00:10:12.429769608Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 1.241410293s" Sep 13 00:10:12.429839 containerd[1620]: time="2025-09-13T00:10:12.429800216Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 13 00:10:12.430535 containerd[1620]: time="2025-09-13T00:10:12.430398387Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:10:12.931816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount124583047.mount: Deactivated successfully. 
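Each PullImage/Pulled pair above is containerd's CRI answer to the pre-pull of the v1.31.13 control-plane images. Any of them can be reproduced by hand with crictl, assuming it is pointed at containerd's socket:

    # assumes /etc/crictl.yaml contains:
    #   runtime-endpoint: unix:///run/containerd/containerd.sock
    crictl pull registry.k8s.io/kube-proxy:v1.31.13
    crictl images --digests registry.k8s.io/kube-proxy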
Sep 13 00:10:13.627537 containerd[1620]: time="2025-09-13T00:10:13.627485910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:13.628539 containerd[1620]: time="2025-09-13T00:10:13.628399422Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335" Sep 13 00:10:13.630325 containerd[1620]: time="2025-09-13T00:10:13.629239807Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:13.631747 containerd[1620]: time="2025-09-13T00:10:13.631715338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:13.632893 containerd[1620]: time="2025-09-13T00:10:13.632742724Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.202315534s" Sep 13 00:10:13.632893 containerd[1620]: time="2025-09-13T00:10:13.632781987Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 00:10:13.633201 containerd[1620]: time="2025-09-13T00:10:13.633174594Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:10:14.059854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount606118774.mount: Deactivated successfully. 
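Note the pause-image skew: registry.k8s.io/pause:3.10 is being pulled here (completion logged just below), yet the sandboxes created at 00:10:20 run containerd's own default, pause:3.8; the kubelet's "--pod-infra-container-image ... will get sandbox image information from CRI" notice further down refers to this. The authoritative knob is containerd's CRI config (containerd 1.7 syntax; illustrative):

    # /etc/containerd/config.toml
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.10"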
Sep 13 00:10:14.064762 containerd[1620]: time="2025-09-13T00:10:14.064710641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:14.065597 containerd[1620]: time="2025-09-13T00:10:14.065549715Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Sep 13 00:10:14.067748 containerd[1620]: time="2025-09-13T00:10:14.066275174Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:14.069355 containerd[1620]: time="2025-09-13T00:10:14.068544880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:14.070346 containerd[1620]: time="2025-09-13T00:10:14.069594547Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 436.39114ms" Sep 13 00:10:14.070346 containerd[1620]: time="2025-09-13T00:10:14.069633621Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:10:14.070346 containerd[1620]: time="2025-09-13T00:10:14.070098051Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 13 00:10:14.562910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3065547898.mount: Deactivated successfully. Sep 13 00:10:15.037857 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 13 00:10:15.045455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:10:15.151575 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:10:15.167610 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:10:15.238974 kubelet[2229]: E0913 00:10:15.238926 2229 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:10:15.241915 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:10:15.242056 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
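The "Referenced but unset environment variable evaluates to an empty string" notices are benign: the unit pulls variables from an optional EnvironmentFile, and systemd expands unset names to empty strings rather than failing. The pattern behind the message looks like this (illustrative drop-in, not the exact Flatcar unit):

    [Service]
    # the leading "-" makes the file optional; if absent, $KUBELET_EXTRA_ARGS expands to ""
    EnvironmentFile=-/etc/default/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS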
Sep 13 00:10:15.945212 containerd[1620]: time="2025-09-13T00:10:15.945107181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:15.946364 containerd[1620]: time="2025-09-13T00:10:15.946168209Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910785" Sep 13 00:10:15.947517 containerd[1620]: time="2025-09-13T00:10:15.947214258Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:15.949609 containerd[1620]: time="2025-09-13T00:10:15.949583557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:15.950646 containerd[1620]: time="2025-09-13T00:10:15.950625017Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 1.880498803s" Sep 13 00:10:15.950721 containerd[1620]: time="2025-09-13T00:10:15.950708644Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 13 00:10:18.816708 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:10:18.822478 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:10:18.846660 systemd[1]: Reloading requested from client PID 2274 ('systemctl') (unit session-7.scope)... Sep 13 00:10:18.846786 systemd[1]: Reloading... Sep 13 00:10:18.923340 zram_generator::config[2314]: No configuration found. Sep 13 00:10:19.013206 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:10:19.071454 systemd[1]: Reloading finished in 224 ms. Sep 13 00:10:19.108565 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 00:10:19.108622 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 00:10:19.108898 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:10:19.112677 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:10:19.194443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:10:19.202659 (kubelet)[2379]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:10:19.239042 kubelet[2379]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:10:19.239042 kubelet[2379]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
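The docker.socket warning above is self-describing: systemd already rewrites the legacy path at load time (/var/run/docker.sock → /run/docker.sock), and silencing it only requires line 6 of the unit to use the non-legacy location:

    [Socket]
    # was: ListenStream=/var/run/docker.sock
    ListenStream=/run/docker.sock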
Sep 13 00:10:19.239042 kubelet[2379]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:10:19.239455 kubelet[2379]: I0913 00:10:19.239110 2379 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:10:19.837105 kubelet[2379]: I0913 00:10:19.837043 2379 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:10:19.837105 kubelet[2379]: I0913 00:10:19.837079 2379 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:10:19.837347 kubelet[2379]: I0913 00:10:19.837330 2379 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:10:19.867097 kubelet[2379]: E0913 00:10:19.866981 2379 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://65.21.60.153:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 65.21.60.153:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:19.867097 kubelet[2379]: I0913 00:10:19.867007 2379 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:10:19.877466 kubelet[2379]: E0913 00:10:19.877379 2379 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:10:19.877466 kubelet[2379]: I0913 00:10:19.877410 2379 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:10:19.883695 kubelet[2379]: I0913 00:10:19.883663 2379 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:10:19.885508 kubelet[2379]: I0913 00:10:19.885474 2379 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:10:19.885639 kubelet[2379]: I0913 00:10:19.885593 2379 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:10:19.885793 kubelet[2379]: I0913 00:10:19.885640 2379 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-662926fb9e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 13 00:10:19.885793 kubelet[2379]: I0913 00:10:19.885793 2379 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:10:19.885904 kubelet[2379]: I0913 00:10:19.885802 2379 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:10:19.885904 kubelet[2379]: I0913 00:10:19.885894 2379 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:10:19.888711 kubelet[2379]: I0913 00:10:19.888470 2379 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:10:19.888711 kubelet[2379]: I0913 00:10:19.888493 2379 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:10:19.888711 kubelet[2379]: I0913 00:10:19.888517 2379 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:10:19.888711 kubelet[2379]: I0913 00:10:19.888529 2379 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:10:19.893336 kubelet[2379]: W0913 00:10:19.893170 2379 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://65.21.60.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-662926fb9e&limit=500&resourceVersion=0": dial tcp 65.21.60.153:6443: connect: connection refused Sep 13 00:10:19.893336 kubelet[2379]: E0913 00:10:19.893249 2379 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://65.21.60.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-662926fb9e&limit=500&resourceVersion=0\": dial tcp 65.21.60.153:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:19.894751 kubelet[2379]: W0913 00:10:19.894502 2379 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://65.21.60.153:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 65.21.60.153:6443: connect: connection refused Sep 13 00:10:19.894751 kubelet[2379]: E0913 00:10:19.894546 2379 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://65.21.60.153:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 65.21.60.153:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:19.894751 kubelet[2379]: I0913 00:10:19.894622 2379 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:10:19.897545 kubelet[2379]: I0913 00:10:19.897515 2379 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:10:19.898136 kubelet[2379]: W0913 00:10:19.898108 2379 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:10:19.899504 kubelet[2379]: I0913 00:10:19.899214 2379 server.go:1274] "Started kubelet" Sep 13 00:10:19.900340 kubelet[2379]: I0913 00:10:19.900073 2379 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:10:19.902322 kubelet[2379]: I0913 00:10:19.901774 2379 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:10:19.907479 kubelet[2379]: I0913 00:10:19.907438 2379 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:10:19.908132 kubelet[2379]: I0913 00:10:19.908088 2379 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:10:19.909371 kubelet[2379]: I0913 00:10:19.909357 2379 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:10:19.910643 kubelet[2379]: I0913 00:10:19.910614 2379 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:10:19.912138 kubelet[2379]: I0913 00:10:19.912102 2379 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:10:19.912328 kubelet[2379]: E0913 00:10:19.912289 2379 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-662926fb9e\" not found" Sep 13 00:10:19.915842 kubelet[2379]: E0913 00:10:19.910860 2379 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://65.21.60.153:6443/api/v1/namespaces/default/events\": dial tcp 65.21.60.153:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-5-n-662926fb9e.1864af0bd2e6c9bd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-n-662926fb9e,UID:ci-4081-3-5-n-662926fb9e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-662926fb9e,},FirstTimestamp:2025-09-13 00:10:19.899177405 +0000 UTC 
m=+0.693530339,LastTimestamp:2025-09-13 00:10:19.899177405 +0000 UTC m=+0.693530339,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-662926fb9e,}" Sep 13 00:10:19.916339 kubelet[2379]: E0913 00:10:19.916271 2379 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.21.60.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-662926fb9e?timeout=10s\": dial tcp 65.21.60.153:6443: connect: connection refused" interval="200ms" Sep 13 00:10:19.918620 kubelet[2379]: I0913 00:10:19.918603 2379 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:10:19.918894 kubelet[2379]: I0913 00:10:19.918739 2379 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:10:19.918894 kubelet[2379]: I0913 00:10:19.918788 2379 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:10:19.918894 kubelet[2379]: I0913 00:10:19.918868 2379 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:10:19.920363 kubelet[2379]: W0913 00:10:19.919967 2379 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://65.21.60.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 65.21.60.153:6443: connect: connection refused Sep 13 00:10:19.920363 kubelet[2379]: E0913 00:10:19.920017 2379 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://65.21.60.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 65.21.60.153:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:19.921085 kubelet[2379]: E0913 00:10:19.921057 2379 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:10:19.921377 kubelet[2379]: I0913 00:10:19.921356 2379 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:10:19.931245 kubelet[2379]: I0913 00:10:19.931204 2379 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:10:19.933219 kubelet[2379]: I0913 00:10:19.933173 2379 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:10:19.933219 kubelet[2379]: I0913 00:10:19.933206 2379 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:10:19.933219 kubelet[2379]: I0913 00:10:19.933222 2379 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:10:19.933358 kubelet[2379]: E0913 00:10:19.933253 2379 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:10:19.939620 kubelet[2379]: W0913 00:10:19.939575 2379 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://65.21.60.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 65.21.60.153:6443: connect: connection refused Sep 13 00:10:19.939715 kubelet[2379]: E0913 00:10:19.939625 2379 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://65.21.60.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 65.21.60.153:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:19.946932 kubelet[2379]: I0913 00:10:19.946909 2379 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:10:19.946932 kubelet[2379]: I0913 00:10:19.946925 2379 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:10:19.947015 kubelet[2379]: I0913 00:10:19.946939 2379 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:10:19.948856 kubelet[2379]: I0913 00:10:19.948841 2379 policy_none.go:49] "None policy: Start" Sep 13 00:10:19.949304 kubelet[2379]: I0913 00:10:19.949282 2379 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:10:19.949378 kubelet[2379]: I0913 00:10:19.949368 2379 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:10:19.953327 kubelet[2379]: I0913 00:10:19.952913 2379 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:10:19.953327 kubelet[2379]: I0913 00:10:19.953059 2379 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:10:19.953327 kubelet[2379]: I0913 00:10:19.953077 2379 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:10:19.954102 kubelet[2379]: I0913 00:10:19.954081 2379 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:10:19.955454 kubelet[2379]: E0913 00:10:19.955432 2379 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-5-n-662926fb9e\" not found" Sep 13 00:10:20.055620 kubelet[2379]: I0913 00:10:20.055579 2379 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-662926fb9e" Sep 13 00:10:20.057386 kubelet[2379]: E0913 00:10:20.056236 2379 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://65.21.60.153:6443/api/v1/nodes\": dial tcp 65.21.60.153:6443: connect: connection refused" node="ci-4081-3-5-n-662926fb9e" Sep 13 00:10:20.117998 kubelet[2379]: E0913 00:10:20.117802 2379 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.21.60.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-662926fb9e?timeout=10s\": dial tcp 65.21.60.153:6443: connect: connection refused" interval="400ms" Sep 13 00:10:20.119672 
kubelet[2379]: I0913 00:10:20.119626 2379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b52c4868ad785a9ff0ee5eb30ba521c3-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-662926fb9e\" (UID: \"b52c4868ad785a9ff0ee5eb30ba521c3\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-662926fb9e" Sep 13 00:10:20.119672 kubelet[2379]: I0913 00:10:20.119667 2379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b52c4868ad785a9ff0ee5eb30ba521c3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-662926fb9e\" (UID: \"b52c4868ad785a9ff0ee5eb30ba521c3\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-662926fb9e" Sep 13 00:10:20.119876 kubelet[2379]: I0913 00:10:20.119687 2379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bed4bcb68e8202e8ac9599eb94618787-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-662926fb9e\" (UID: \"bed4bcb68e8202e8ac9599eb94618787\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-662926fb9e" Sep 13 00:10:20.119876 kubelet[2379]: I0913 00:10:20.119703 2379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2d6323b5e48c2fc25ef1e5619e073aa8-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-662926fb9e\" (UID: \"2d6323b5e48c2fc25ef1e5619e073aa8\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-662926fb9e" Sep 13 00:10:20.119876 kubelet[2379]: I0913 00:10:20.119722 2379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2d6323b5e48c2fc25ef1e5619e073aa8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-662926fb9e\" (UID: \"2d6323b5e48c2fc25ef1e5619e073aa8\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-662926fb9e" Sep 13 00:10:20.119876 kubelet[2379]: I0913 00:10:20.119736 2379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b52c4868ad785a9ff0ee5eb30ba521c3-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-662926fb9e\" (UID: \"b52c4868ad785a9ff0ee5eb30ba521c3\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-662926fb9e" Sep 13 00:10:20.119876 kubelet[2379]: I0913 00:10:20.119750 2379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2d6323b5e48c2fc25ef1e5619e073aa8-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-662926fb9e\" (UID: \"2d6323b5e48c2fc25ef1e5619e073aa8\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-662926fb9e" Sep 13 00:10:20.120083 kubelet[2379]: I0913 00:10:20.119766 2379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2d6323b5e48c2fc25ef1e5619e073aa8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-662926fb9e\" (UID: \"2d6323b5e48c2fc25ef1e5619e073aa8\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-662926fb9e" Sep 13 00:10:20.120083 kubelet[2379]: I0913 00:10:20.119779 2379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2d6323b5e48c2fc25ef1e5619e073aa8-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-662926fb9e\" (UID: \"2d6323b5e48c2fc25ef1e5619e073aa8\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-662926fb9e" Sep 13 00:10:20.258741 kubelet[2379]: I0913 00:10:20.258713 2379 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-662926fb9e" Sep 13 00:10:20.259114 kubelet[2379]: E0913 00:10:20.259048 2379 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://65.21.60.153:6443/api/v1/nodes\": dial tcp 65.21.60.153:6443: connect: connection refused" node="ci-4081-3-5-n-662926fb9e" Sep 13 00:10:20.342743 containerd[1620]: time="2025-09-13T00:10:20.342663182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-662926fb9e,Uid:b52c4868ad785a9ff0ee5eb30ba521c3,Namespace:kube-system,Attempt:0,}" Sep 13 00:10:20.346976 containerd[1620]: time="2025-09-13T00:10:20.346486921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-662926fb9e,Uid:2d6323b5e48c2fc25ef1e5619e073aa8,Namespace:kube-system,Attempt:0,}" Sep 13 00:10:20.346976 containerd[1620]: time="2025-09-13T00:10:20.346629830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-662926fb9e,Uid:bed4bcb68e8202e8ac9599eb94618787,Namespace:kube-system,Attempt:0,}" Sep 13 00:10:20.518744 kubelet[2379]: E0913 00:10:20.518624 2379 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.21.60.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-662926fb9e?timeout=10s\": dial tcp 65.21.60.153:6443: connect: connection refused" interval="800ms" Sep 13 00:10:20.661549 kubelet[2379]: I0913 00:10:20.661486 2379 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-662926fb9e" Sep 13 00:10:20.661822 kubelet[2379]: E0913 00:10:20.661792 2379 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://65.21.60.153:6443/api/v1/nodes\": dial tcp 65.21.60.153:6443: connect: connection refused" node="ci-4081-3-5-n-662926fb9e" Sep 13 00:10:20.797621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3136721009.mount: Deactivated successfully. 
Sep 13 00:10:20.807229 containerd[1620]: time="2025-09-13T00:10:20.807134796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:10:20.808266 containerd[1620]: time="2025-09-13T00:10:20.808183117Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Sep 13 00:10:20.812376 containerd[1620]: time="2025-09-13T00:10:20.811944439Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:10:20.813677 containerd[1620]: time="2025-09-13T00:10:20.813561691Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:10:20.815236 containerd[1620]: time="2025-09-13T00:10:20.815145958Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:10:20.816513 containerd[1620]: time="2025-09-13T00:10:20.816458616Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:10:20.817432 containerd[1620]: time="2025-09-13T00:10:20.817382743Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:10:20.821293 containerd[1620]: time="2025-09-13T00:10:20.821235298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:10:20.825110 containerd[1620]: time="2025-09-13T00:10:20.824953589Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 478.22931ms" Sep 13 00:10:20.830137 containerd[1620]: time="2025-09-13T00:10:20.830085186Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 487.332055ms" Sep 13 00:10:20.831494 containerd[1620]: time="2025-09-13T00:10:20.831437990Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 484.886397ms" Sep 13 00:10:20.931718 kubelet[2379]: W0913 00:10:20.931633 2379 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://65.21.60.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-662926fb9e&limit=500&resourceVersion=0": dial tcp 65.21.60.153:6443: connect: connection refused Sep 
13 00:10:20.931909 kubelet[2379]: E0913 00:10:20.931731 2379 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://65.21.60.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-662926fb9e&limit=500&resourceVersion=0\": dial tcp 65.21.60.153:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:20.972438 containerd[1620]: time="2025-09-13T00:10:20.972071481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:20.972438 containerd[1620]: time="2025-09-13T00:10:20.972128028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:20.972438 containerd[1620]: time="2025-09-13T00:10:20.972143236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:20.972438 containerd[1620]: time="2025-09-13T00:10:20.972243034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:20.977831 containerd[1620]: time="2025-09-13T00:10:20.977044952Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:20.977831 containerd[1620]: time="2025-09-13T00:10:20.977121246Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:20.977831 containerd[1620]: time="2025-09-13T00:10:20.977149599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:20.977831 containerd[1620]: time="2025-09-13T00:10:20.977283611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:20.978578 containerd[1620]: time="2025-09-13T00:10:20.978155299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:20.978578 containerd[1620]: time="2025-09-13T00:10:20.978208260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:20.978578 containerd[1620]: time="2025-09-13T00:10:20.978238907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:20.978578 containerd[1620]: time="2025-09-13T00:10:20.978334396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:21.011459 kubelet[2379]: W0913 00:10:21.011352 2379 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://65.21.60.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 65.21.60.153:6443: connect: connection refused Sep 13 00:10:21.011459 kubelet[2379]: E0913 00:10:21.011419 2379 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://65.21.60.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 65.21.60.153:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:21.059693 containerd[1620]: time="2025-09-13T00:10:21.059577685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-662926fb9e,Uid:2d6323b5e48c2fc25ef1e5619e073aa8,Namespace:kube-system,Attempt:0,} returns sandbox id \"6461d95810612c77a41359bc51800bbf3788710c6abb5cd18f1c127553fd59c6\"" Sep 13 00:10:21.065977 containerd[1620]: time="2025-09-13T00:10:21.065803489Z" level=info msg="CreateContainer within sandbox \"6461d95810612c77a41359bc51800bbf3788710c6abb5cd18f1c127553fd59c6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:10:21.066506 containerd[1620]: time="2025-09-13T00:10:21.065894550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-662926fb9e,Uid:b52c4868ad785a9ff0ee5eb30ba521c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b77da7c348d91bc211ecde3db075083ed6bbb538e1c9cb32f18eabcba869f27\"" Sep 13 00:10:21.070006 containerd[1620]: time="2025-09-13T00:10:21.069923985Z" level=info msg="CreateContainer within sandbox \"4b77da7c348d91bc211ecde3db075083ed6bbb538e1c9cb32f18eabcba869f27\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:10:21.085651 containerd[1620]: time="2025-09-13T00:10:21.085113428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-662926fb9e,Uid:bed4bcb68e8202e8ac9599eb94618787,Namespace:kube-system,Attempt:0,} returns sandbox id \"680c73636963867d5fa91665f81c6c84cfd04edcbf1429a8430f91ada94bc2b3\"" Sep 13 00:10:21.089481 containerd[1620]: time="2025-09-13T00:10:21.089444650Z" level=info msg="CreateContainer within sandbox \"680c73636963867d5fa91665f81c6c84cfd04edcbf1429a8430f91ada94bc2b3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:10:21.094287 containerd[1620]: time="2025-09-13T00:10:21.094251757Z" level=info msg="CreateContainer within sandbox \"4b77da7c348d91bc211ecde3db075083ed6bbb538e1c9cb32f18eabcba869f27\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6be06a141a0f01ea9fe7f0111f9ce0b8bb6998b67f74e56e2a8a6b6413f15ab5\"" Sep 13 00:10:21.094747 containerd[1620]: time="2025-09-13T00:10:21.094715738Z" level=info msg="StartContainer for \"6be06a141a0f01ea9fe7f0111f9ce0b8bb6998b67f74e56e2a8a6b6413f15ab5\"" Sep 13 00:10:21.098183 containerd[1620]: time="2025-09-13T00:10:21.098151858Z" level=info msg="CreateContainer within sandbox \"6461d95810612c77a41359bc51800bbf3788710c6abb5cd18f1c127553fd59c6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1fd96ddfe8978b6987a32811c1eaa9946d0a4aa8402270cf898b704a996b5baf\"" Sep 13 00:10:21.098492 containerd[1620]: time="2025-09-13T00:10:21.098466709Z" level=info msg="StartContainer 
for \"1fd96ddfe8978b6987a32811c1eaa9946d0a4aa8402270cf898b704a996b5baf\"" Sep 13 00:10:21.111297 containerd[1620]: time="2025-09-13T00:10:21.110937723Z" level=info msg="CreateContainer within sandbox \"680c73636963867d5fa91665f81c6c84cfd04edcbf1429a8430f91ada94bc2b3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ed0325489bbaca79421fbd669e858bb421d994d76511076f3d03a8087642e5e2\"" Sep 13 00:10:21.112493 containerd[1620]: time="2025-09-13T00:10:21.112467448Z" level=info msg="StartContainer for \"ed0325489bbaca79421fbd669e858bb421d994d76511076f3d03a8087642e5e2\"" Sep 13 00:10:21.191911 containerd[1620]: time="2025-09-13T00:10:21.191880979Z" level=info msg="StartContainer for \"1fd96ddfe8978b6987a32811c1eaa9946d0a4aa8402270cf898b704a996b5baf\" returns successfully" Sep 13 00:10:21.201360 containerd[1620]: time="2025-09-13T00:10:21.201338839Z" level=info msg="StartContainer for \"6be06a141a0f01ea9fe7f0111f9ce0b8bb6998b67f74e56e2a8a6b6413f15ab5\" returns successfully" Sep 13 00:10:21.206552 kubelet[2379]: W0913 00:10:21.206427 2379 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://65.21.60.153:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 65.21.60.153:6443: connect: connection refused Sep 13 00:10:21.206552 kubelet[2379]: E0913 00:10:21.206529 2379 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://65.21.60.153:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 65.21.60.153:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:21.208220 containerd[1620]: time="2025-09-13T00:10:21.207636397Z" level=info msg="StartContainer for \"ed0325489bbaca79421fbd669e858bb421d994d76511076f3d03a8087642e5e2\" returns successfully" Sep 13 00:10:21.319938 kubelet[2379]: E0913 00:10:21.319707 2379 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.21.60.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-662926fb9e?timeout=10s\": dial tcp 65.21.60.153:6443: connect: connection refused" interval="1.6s" Sep 13 00:10:21.406983 kubelet[2379]: W0913 00:10:21.406880 2379 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://65.21.60.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 65.21.60.153:6443: connect: connection refused Sep 13 00:10:21.406983 kubelet[2379]: E0913 00:10:21.406954 2379 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://65.21.60.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 65.21.60.153:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:21.465219 kubelet[2379]: I0913 00:10:21.464918 2379 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-662926fb9e" Sep 13 00:10:21.465968 kubelet[2379]: E0913 00:10:21.465712 2379 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://65.21.60.153:6443/api/v1/nodes\": dial tcp 65.21.60.153:6443: connect: connection refused" node="ci-4081-3-5-n-662926fb9e" Sep 13 00:10:22.812912 kubelet[2379]: E0913 00:10:22.812859 2379 csi_plugin.go:305] Failed to initialize CSINode: error updating 
CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081-3-5-n-662926fb9e" not found Sep 13 00:10:22.923529 kubelet[2379]: E0913 00:10:22.923488 2379 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-5-n-662926fb9e\" not found" node="ci-4081-3-5-n-662926fb9e" Sep 13 00:10:23.069483 kubelet[2379]: I0913 00:10:23.069345 2379 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-662926fb9e" Sep 13 00:10:23.086548 kubelet[2379]: I0913 00:10:23.086497 2379 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-5-n-662926fb9e" Sep 13 00:10:23.086548 kubelet[2379]: E0913 00:10:23.086536 2379 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-5-n-662926fb9e\": node \"ci-4081-3-5-n-662926fb9e\" not found" Sep 13 00:10:23.103748 kubelet[2379]: E0913 00:10:23.103708 2379 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-662926fb9e\" not found" Sep 13 00:10:23.204813 kubelet[2379]: E0913 00:10:23.204763 2379 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-662926fb9e\" not found" Sep 13 00:10:23.305645 kubelet[2379]: E0913 00:10:23.305600 2379 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-662926fb9e\" not found" Sep 13 00:10:23.405927 kubelet[2379]: E0913 00:10:23.405769 2379 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-662926fb9e\" not found" Sep 13 00:10:23.506135 kubelet[2379]: E0913 00:10:23.506087 2379 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-662926fb9e\" not found" Sep 13 00:10:23.606500 kubelet[2379]: E0913 00:10:23.606457 2379 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-662926fb9e\" not found" Sep 13 00:10:23.896110 kubelet[2379]: I0913 00:10:23.895988 2379 apiserver.go:52] "Watching apiserver" Sep 13 00:10:23.919237 kubelet[2379]: I0913 00:10:23.919176 2379 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:10:24.645957 systemd[1]: Reloading requested from client PID 2654 ('systemctl') (unit session-7.scope)... Sep 13 00:10:24.645976 systemd[1]: Reloading... Sep 13 00:10:24.700558 zram_generator::config[2690]: No configuration found. Sep 13 00:10:24.789468 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:10:24.852939 systemd[1]: Reloading finished in 206 ms. Sep 13 00:10:24.879199 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:10:24.893568 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:10:24.893785 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:10:24.901551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:10:25.002003 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
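The "Failed to ensure lease exists, will retry" interval doubles across this log (200ms, 400ms, 800ms, 1.6s) until the API server the kubelet is itself bootstrapping comes up, after which node registration succeeds at 00:10:23. The shape is ordinary capped exponential backoff, sketched below for illustration (this is not the kubelet's actual source):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        interval := 200 * time.Millisecond
        const maxInterval = 7 * time.Second // hypothetical cap, for the sketch only
        for attempt := 1; attempt <= 4; attempt++ {
            fmt.Printf("lease attempt %d failed; retrying in %v\n", attempt, interval)
            time.Sleep(interval)
            interval *= 2
            if interval > maxInterval {
                interval = maxInterval
            }
        }
    }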
Sep 13 00:10:25.003680 (kubelet)[2755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:10:25.061332 kubelet[2755]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:10:25.061332 kubelet[2755]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:10:25.061332 kubelet[2755]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:10:25.061332 kubelet[2755]: I0913 00:10:25.060958 2755 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:10:25.067828 kubelet[2755]: I0913 00:10:25.067803 2755 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:10:25.067944 kubelet[2755]: I0913 00:10:25.067931 2755 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:10:25.068237 kubelet[2755]: I0913 00:10:25.068206 2755 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:10:25.070623 kubelet[2755]: I0913 00:10:25.070607 2755 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 00:10:25.078601 kubelet[2755]: I0913 00:10:25.078588 2755 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:10:25.088762 kubelet[2755]: E0913 00:10:25.088726 2755 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:10:25.088762 kubelet[2755]: I0913 00:10:25.088751 2755 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:10:25.091139 kubelet[2755]: I0913 00:10:25.091117 2755 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:10:25.091457 kubelet[2755]: I0913 00:10:25.091444 2755 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:10:25.091562 kubelet[2755]: I0913 00:10:25.091532 2755 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:10:25.091802 kubelet[2755]: I0913 00:10:25.091559 2755 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-662926fb9e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 13 00:10:25.091802 kubelet[2755]: I0913 00:10:25.091702 2755 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:10:25.091802 kubelet[2755]: I0913 00:10:25.091710 2755 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:10:25.091802 kubelet[2755]: I0913 00:10:25.091732 2755 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:10:25.094347 kubelet[2755]: I0913 00:10:25.093665 2755 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:10:25.094347 kubelet[2755]: I0913 00:10:25.093683 2755 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:10:25.094347 kubelet[2755]: I0913 00:10:25.093712 2755 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:10:25.094347 kubelet[2755]: I0913 00:10:25.093724 2755 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:10:25.095487 kubelet[2755]: I0913 00:10:25.095475 2755 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:10:25.097050 kubelet[2755]: I0913 00:10:25.097035 2755 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:10:25.097497 kubelet[2755]: I0913 00:10:25.097487 2755 server.go:1274] "Started kubelet" Sep 13 00:10:25.099969 kubelet[2755]: I0913 00:10:25.099126 2755 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:10:25.101846 
kubelet[2755]: I0913 00:10:25.101812 2755 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:10:25.103074 kubelet[2755]: I0913 00:10:25.103044 2755 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:10:25.107340 kubelet[2755]: I0913 00:10:25.106930 2755 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:10:25.107340 kubelet[2755]: I0913 00:10:25.107128 2755 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:10:25.107534 kubelet[2755]: I0913 00:10:25.107518 2755 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:10:25.117267 kubelet[2755]: I0913 00:10:25.117255 2755 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:10:25.117615 kubelet[2755]: E0913 00:10:25.117558 2755 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-662926fb9e\" not found" Sep 13 00:10:25.124042 kubelet[2755]: I0913 00:10:25.124028 2755 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:10:25.124181 kubelet[2755]: I0913 00:10:25.124173 2755 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:10:25.125347 kubelet[2755]: I0913 00:10:25.125280 2755 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:10:25.125974 kubelet[2755]: I0913 00:10:25.125953 2755 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:10:25.126008 kubelet[2755]: I0913 00:10:25.125976 2755 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:10:25.126008 kubelet[2755]: I0913 00:10:25.125989 2755 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:10:25.126043 kubelet[2755]: E0913 00:10:25.126019 2755 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:10:25.133740 kubelet[2755]: I0913 00:10:25.131301 2755 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:10:25.133812 kubelet[2755]: I0913 00:10:25.133787 2755 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:10:25.136404 kubelet[2755]: I0913 00:10:25.136386 2755 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:10:25.137707 kubelet[2755]: E0913 00:10:25.137645 2755 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:10:25.186569 kubelet[2755]: I0913 00:10:25.186480 2755 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:10:25.187183 kubelet[2755]: I0913 00:10:25.186773 2755 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:10:25.187183 kubelet[2755]: I0913 00:10:25.186796 2755 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:10:25.187183 kubelet[2755]: I0913 00:10:25.186956 2755 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:10:25.187183 kubelet[2755]: I0913 00:10:25.186968 2755 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:10:25.187183 kubelet[2755]: I0913 00:10:25.186992 2755 policy_none.go:49] "None policy: Start" Sep 13 00:10:25.188358 kubelet[2755]: I0913 00:10:25.187768 2755 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:10:25.188358 kubelet[2755]: I0913 00:10:25.187788 2755 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:10:25.188358 kubelet[2755]: I0913 00:10:25.187923 2755 state_mem.go:75] "Updated machine memory state" Sep 13 00:10:25.192134 kubelet[2755]: I0913 00:10:25.192121 2755 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:10:25.192359 kubelet[2755]: I0913 00:10:25.192348 2755 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:10:25.192435 kubelet[2755]: I0913 00:10:25.192413 2755 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:10:25.192607 kubelet[2755]: I0913 00:10:25.192594 2755 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:10:25.296541 kubelet[2755]: I0913 00:10:25.296473 2755 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-662926fb9e" Sep 13 00:10:25.303803 kubelet[2755]: I0913 00:10:25.303731 2755 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-5-n-662926fb9e" Sep 13 00:10:25.303803 kubelet[2755]: I0913 00:10:25.303801 2755 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-5-n-662926fb9e" Sep 13 00:10:25.325673 kubelet[2755]: I0913 00:10:25.325632 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2d6323b5e48c2fc25ef1e5619e073aa8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-662926fb9e\" (UID: \"2d6323b5e48c2fc25ef1e5619e073aa8\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-662926fb9e" Sep 13 00:10:25.325786 kubelet[2755]: I0913 00:10:25.325683 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bed4bcb68e8202e8ac9599eb94618787-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-662926fb9e\" (UID: \"bed4bcb68e8202e8ac9599eb94618787\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-662926fb9e" Sep 13 00:10:25.325786 kubelet[2755]: I0913 00:10:25.325710 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b52c4868ad785a9ff0ee5eb30ba521c3-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-662926fb9e\" (UID: \"b52c4868ad785a9ff0ee5eb30ba521c3\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-662926fb9e" Sep 13 
00:10:25.325786 kubelet[2755]: I0913 00:10:25.325740 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b52c4868ad785a9ff0ee5eb30ba521c3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-662926fb9e\" (UID: \"b52c4868ad785a9ff0ee5eb30ba521c3\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-662926fb9e" Sep 13 00:10:25.325786 kubelet[2755]: I0913 00:10:25.325764 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2d6323b5e48c2fc25ef1e5619e073aa8-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-662926fb9e\" (UID: \"2d6323b5e48c2fc25ef1e5619e073aa8\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-662926fb9e" Sep 13 00:10:25.325998 kubelet[2755]: I0913 00:10:25.325786 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2d6323b5e48c2fc25ef1e5619e073aa8-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-662926fb9e\" (UID: \"2d6323b5e48c2fc25ef1e5619e073aa8\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-662926fb9e" Sep 13 00:10:25.325998 kubelet[2755]: I0913 00:10:25.325808 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2d6323b5e48c2fc25ef1e5619e073aa8-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-662926fb9e\" (UID: \"2d6323b5e48c2fc25ef1e5619e073aa8\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-662926fb9e" Sep 13 00:10:25.325998 kubelet[2755]: I0913 00:10:25.325830 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b52c4868ad785a9ff0ee5eb30ba521c3-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-662926fb9e\" (UID: \"b52c4868ad785a9ff0ee5eb30ba521c3\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-662926fb9e" Sep 13 00:10:25.325998 kubelet[2755]: I0913 00:10:25.325852 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2d6323b5e48c2fc25ef1e5619e073aa8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-662926fb9e\" (UID: \"2d6323b5e48c2fc25ef1e5619e073aa8\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-662926fb9e" Sep 13 00:10:26.096816 kubelet[2755]: I0913 00:10:26.095302 2755 apiserver.go:52] "Watching apiserver" Sep 13 00:10:26.125042 kubelet[2755]: I0913 00:10:26.124937 2755 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:10:26.172679 kubelet[2755]: E0913 00:10:26.172540 2755 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-5-n-662926fb9e\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-5-n-662926fb9e" Sep 13 00:10:26.193558 kubelet[2755]: I0913 00:10:26.193473 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-5-n-662926fb9e" podStartSLOduration=1.193397209 podStartE2EDuration="1.193397209s" podCreationTimestamp="2025-09-13 00:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:10:26.192640948 +0000 UTC 
m=+1.184750305" watchObservedRunningTime="2025-09-13 00:10:26.193397209 +0000 UTC m=+1.185506535" Sep 13 00:10:26.204287 kubelet[2755]: I0913 00:10:26.204199 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-662926fb9e" podStartSLOduration=1.204181636 podStartE2EDuration="1.204181636s" podCreationTimestamp="2025-09-13 00:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:10:26.202811743 +0000 UTC m=+1.194921079" watchObservedRunningTime="2025-09-13 00:10:26.204181636 +0000 UTC m=+1.196290972" Sep 13 00:10:26.213303 kubelet[2755]: I0913 00:10:26.212749 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-5-n-662926fb9e" podStartSLOduration=1.212727298 podStartE2EDuration="1.212727298s" podCreationTimestamp="2025-09-13 00:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:10:26.212009951 +0000 UTC m=+1.204119277" watchObservedRunningTime="2025-09-13 00:10:26.212727298 +0000 UTC m=+1.204836635" Sep 13 00:10:27.940449 update_engine[1608]: I20250913 00:10:27.940362 1608 update_attempter.cc:509] Updating boot flags... Sep 13 00:10:28.017387 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2805) Sep 13 00:10:28.120442 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2809) Sep 13 00:10:28.183833 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2809) Sep 13 00:10:29.518463 kubelet[2755]: I0913 00:10:29.518403 2755 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:10:29.519685 kubelet[2755]: I0913 00:10:29.519220 2755 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:10:29.520370 containerd[1620]: time="2025-09-13T00:10:29.518813184Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
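The kubelet entries just above show the node's pod CIDR (192.168.0.0/24) being handed to the container runtime over CRI, after which containerd sits waiting for a CNI plugin (here, Calico) to drop a network config. A minimal, illustrative Go sketch of that CRI call follows — not the kubelet's actual code path, just the same UpdateRuntimeConfig RPC against an assumed containerd socket path for this host:

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket path; Flatcar hosts typically expose containerd here.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Push the pod CIDR the kubelet observed on its Node object down to the
	// runtime, mirroring "Updating runtime config through cri with podcidr".
	_, err = rt.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}

Until a CNI config actually lands in /etc/cni/net.d, the runtime reports the network as not ready — which is why the csi-node-driver pod below is skipped with "NetworkPluginNotReady".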
Sep 13 00:10:30.366496 kubelet[2755]: I0913 00:10:30.366456 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl2zn\" (UniqueName: \"kubernetes.io/projected/220e7fbc-a234-4e5b-8a92-2c6ff6066184-kube-api-access-vl2zn\") pod \"kube-proxy-6sd6j\" (UID: \"220e7fbc-a234-4e5b-8a92-2c6ff6066184\") " pod="kube-system/kube-proxy-6sd6j" Sep 13 00:10:30.366496 kubelet[2755]: I0913 00:10:30.366494 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/220e7fbc-a234-4e5b-8a92-2c6ff6066184-kube-proxy\") pod \"kube-proxy-6sd6j\" (UID: \"220e7fbc-a234-4e5b-8a92-2c6ff6066184\") " pod="kube-system/kube-proxy-6sd6j" Sep 13 00:10:30.366662 kubelet[2755]: I0913 00:10:30.366512 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/220e7fbc-a234-4e5b-8a92-2c6ff6066184-xtables-lock\") pod \"kube-proxy-6sd6j\" (UID: \"220e7fbc-a234-4e5b-8a92-2c6ff6066184\") " pod="kube-system/kube-proxy-6sd6j" Sep 13 00:10:30.366662 kubelet[2755]: I0913 00:10:30.366546 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/220e7fbc-a234-4e5b-8a92-2c6ff6066184-lib-modules\") pod \"kube-proxy-6sd6j\" (UID: \"220e7fbc-a234-4e5b-8a92-2c6ff6066184\") " pod="kube-system/kube-proxy-6sd6j" Sep 13 00:10:30.672868 kubelet[2755]: I0913 00:10:30.672383 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/88e5cbfc-b100-494c-930c-cd35623e9e19-var-lib-calico\") pod \"tigera-operator-58fc44c59b-nnqsl\" (UID: \"88e5cbfc-b100-494c-930c-cd35623e9e19\") " pod="tigera-operator/tigera-operator-58fc44c59b-nnqsl" Sep 13 00:10:30.672868 kubelet[2755]: I0913 00:10:30.672430 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgk8c\" (UniqueName: \"kubernetes.io/projected/88e5cbfc-b100-494c-930c-cd35623e9e19-kube-api-access-qgk8c\") pod \"tigera-operator-58fc44c59b-nnqsl\" (UID: \"88e5cbfc-b100-494c-930c-cd35623e9e19\") " pod="tigera-operator/tigera-operator-58fc44c59b-nnqsl" Sep 13 00:10:30.673411 containerd[1620]: time="2025-09-13T00:10:30.673180641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6sd6j,Uid:220e7fbc-a234-4e5b-8a92-2c6ff6066184,Namespace:kube-system,Attempt:0,}" Sep 13 00:10:30.697982 containerd[1620]: time="2025-09-13T00:10:30.697514679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:30.697982 containerd[1620]: time="2025-09-13T00:10:30.697780739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:30.697982 containerd[1620]: time="2025-09-13T00:10:30.697905283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:30.698718 containerd[1620]: time="2025-09-13T00:10:30.698686671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:30.750676 containerd[1620]: time="2025-09-13T00:10:30.750609307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6sd6j,Uid:220e7fbc-a234-4e5b-8a92-2c6ff6066184,Namespace:kube-system,Attempt:0,} returns sandbox id \"419dddbb05ab97e4d97642951f3e4143ac2dcf2f28155b04e71c88d8ae9dce3f\"" Sep 13 00:10:30.753769 containerd[1620]: time="2025-09-13T00:10:30.753729405Z" level=info msg="CreateContainer within sandbox \"419dddbb05ab97e4d97642951f3e4143ac2dcf2f28155b04e71c88d8ae9dce3f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:10:30.766714 containerd[1620]: time="2025-09-13T00:10:30.766568692Z" level=info msg="CreateContainer within sandbox \"419dddbb05ab97e4d97642951f3e4143ac2dcf2f28155b04e71c88d8ae9dce3f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"db584552340a625e54505842c7cc197ba9c21ad1846c61579c3c5d8461522f52\"" Sep 13 00:10:30.767593 containerd[1620]: time="2025-09-13T00:10:30.767557879Z" level=info msg="StartContainer for \"db584552340a625e54505842c7cc197ba9c21ad1846c61579c3c5d8461522f52\"" Sep 13 00:10:30.815773 containerd[1620]: time="2025-09-13T00:10:30.815677206Z" level=info msg="StartContainer for \"db584552340a625e54505842c7cc197ba9c21ad1846c61579c3c5d8461522f52\" returns successfully" Sep 13 00:10:30.848642 containerd[1620]: time="2025-09-13T00:10:30.848565285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-nnqsl,Uid:88e5cbfc-b100-494c-930c-cd35623e9e19,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:10:30.908494 containerd[1620]: time="2025-09-13T00:10:30.908383977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:30.909759 containerd[1620]: time="2025-09-13T00:10:30.909296431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:30.909759 containerd[1620]: time="2025-09-13T00:10:30.909380719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:30.909759 containerd[1620]: time="2025-09-13T00:10:30.909512236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:30.979710 containerd[1620]: time="2025-09-13T00:10:30.978966329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-nnqsl,Uid:88e5cbfc-b100-494c-930c-cd35623e9e19,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2cbdd929344063848d61c4e441d54457708eee9cc665f86a0326bbf57701f1c8\"" Sep 13 00:10:30.981938 containerd[1620]: time="2025-09-13T00:10:30.981677600Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:10:31.487303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1017741979.mount: Deactivated successfully. 
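The kube-proxy entries above trace the standard CRI lifecycle: RunPodSandbox returns a sandbox id ("419ddd…"), CreateContainer is issued against that sandbox, and StartContainer runs the result. A sketch of those three RPCs, under the same assumptions as the previous snippet (the kube-proxy image name here is hypothetical — the log does not show it):

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// 1. RunPodSandbox — metadata matches the PodSandboxMetadata in the log.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-6sd6j",
			Uid:       "220e7fbc-a234-4e5b-8a92-2c6ff6066184",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer inside that sandbox (image name is an assumption).
	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.31.8"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer — on success the log prints "returns successfully".
	if _, err := rt.StartContainer(ctx,
		&runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
		log.Fatal(err)
	}
}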
Sep 13 00:10:32.740230 kubelet[2755]: I0913 00:10:32.739663 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6sd6j" podStartSLOduration=2.739645061 podStartE2EDuration="2.739645061s" podCreationTimestamp="2025-09-13 00:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:10:31.202938702 +0000 UTC m=+6.195048079" watchObservedRunningTime="2025-09-13 00:10:32.739645061 +0000 UTC m=+7.731754377" Sep 13 00:10:33.097443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1469867348.mount: Deactivated successfully. Sep 13 00:10:33.483058 containerd[1620]: time="2025-09-13T00:10:33.482757922Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:33.484001 containerd[1620]: time="2025-09-13T00:10:33.483570728Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 13 00:10:33.484665 containerd[1620]: time="2025-09-13T00:10:33.484602113Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:33.487068 containerd[1620]: time="2025-09-13T00:10:33.486809095Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:33.487678 containerd[1620]: time="2025-09-13T00:10:33.487645024Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.505944331s" Sep 13 00:10:33.487678 containerd[1620]: time="2025-09-13T00:10:33.487676474Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 13 00:10:33.490166 containerd[1620]: time="2025-09-13T00:10:33.489993774Z" level=info msg="CreateContainer within sandbox \"2cbdd929344063848d61c4e441d54457708eee9cc665f86a0326bbf57701f1c8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 00:10:33.513759 containerd[1620]: time="2025-09-13T00:10:33.513690864Z" level=info msg="CreateContainer within sandbox \"2cbdd929344063848d61c4e441d54457708eee9cc665f86a0326bbf57701f1c8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3c8c4736ec5ce4a2d45cd484d9c9461f1396927b1141d0049c7c10f9ff36a15e\"" Sep 13 00:10:33.515976 containerd[1620]: time="2025-09-13T00:10:33.515748076Z" level=info msg="StartContainer for \"3c8c4736ec5ce4a2d45cd484d9c9461f1396927b1141d0049c7c10f9ff36a15e\"" Sep 13 00:10:33.570689 containerd[1620]: time="2025-09-13T00:10:33.570646834Z" level=info msg="StartContainer for \"3c8c4736ec5ce4a2d45cd484d9c9461f1396927b1141d0049c7c10f9ff36a15e\" returns successfully" Sep 13 00:10:34.499590 kubelet[2755]: I0913 00:10:34.499504 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-nnqsl" podStartSLOduration=1.9917693380000001 podStartE2EDuration="4.499488101s" 
podCreationTimestamp="2025-09-13 00:10:30 +0000 UTC" firstStartedPulling="2025-09-13 00:10:30.981015888 +0000 UTC m=+5.973125204" lastFinishedPulling="2025-09-13 00:10:33.48873465 +0000 UTC m=+8.480843967" observedRunningTime="2025-09-13 00:10:34.201715861 +0000 UTC m=+9.193825187" watchObservedRunningTime="2025-09-13 00:10:34.499488101 +0000 UTC m=+9.491597427" Sep 13 00:10:39.177740 sudo[1876]: pam_unix(sudo:session): session closed for user root Sep 13 00:10:39.337012 sshd[1862]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:39.340138 systemd[1]: sshd@6-65.21.60.153:22-147.75.109.163:47940.service: Deactivated successfully. Sep 13 00:10:39.345607 systemd-logind[1604]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:10:39.346098 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:10:39.350395 systemd-logind[1604]: Removed session 7. Sep 13 00:10:42.043951 kubelet[2755]: I0913 00:10:42.043916 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh7xh\" (UniqueName: \"kubernetes.io/projected/939f592a-3f8f-497e-8e2a-c3b0beeedeae-kube-api-access-lh7xh\") pod \"calico-typha-6d7c849999-nk5nt\" (UID: \"939f592a-3f8f-497e-8e2a-c3b0beeedeae\") " pod="calico-system/calico-typha-6d7c849999-nk5nt" Sep 13 00:10:42.043951 kubelet[2755]: I0913 00:10:42.043959 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/939f592a-3f8f-497e-8e2a-c3b0beeedeae-tigera-ca-bundle\") pod \"calico-typha-6d7c849999-nk5nt\" (UID: \"939f592a-3f8f-497e-8e2a-c3b0beeedeae\") " pod="calico-system/calico-typha-6d7c849999-nk5nt" Sep 13 00:10:42.044395 kubelet[2755]: I0913 00:10:42.043976 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/939f592a-3f8f-497e-8e2a-c3b0beeedeae-typha-certs\") pod \"calico-typha-6d7c849999-nk5nt\" (UID: \"939f592a-3f8f-497e-8e2a-c3b0beeedeae\") " pod="calico-system/calico-typha-6d7c849999-nk5nt" Sep 13 00:10:42.254958 containerd[1620]: time="2025-09-13T00:10:42.254219829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d7c849999-nk5nt,Uid:939f592a-3f8f-497e-8e2a-c3b0beeedeae,Namespace:calico-system,Attempt:0,}" Sep 13 00:10:42.283276 containerd[1620]: time="2025-09-13T00:10:42.283025879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:42.283276 containerd[1620]: time="2025-09-13T00:10:42.283086913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:42.283276 containerd[1620]: time="2025-09-13T00:10:42.283096822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:42.283276 containerd[1620]: time="2025-09-13T00:10:42.283163648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:42.345025 kubelet[2755]: I0913 00:10:42.344697 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2bb5ca7-7c5e-4a49-b2a3-7d5f506e0508-lib-modules\") pod \"calico-node-z5zd6\" (UID: \"d2bb5ca7-7c5e-4a49-b2a3-7d5f506e0508\") " pod="calico-system/calico-node-z5zd6" Sep 13 00:10:42.345025 kubelet[2755]: I0913 00:10:42.344723 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d2bb5ca7-7c5e-4a49-b2a3-7d5f506e0508-flexvol-driver-host\") pod \"calico-node-z5zd6\" (UID: \"d2bb5ca7-7c5e-4a49-b2a3-7d5f506e0508\") " pod="calico-system/calico-node-z5zd6" Sep 13 00:10:42.345025 kubelet[2755]: I0913 00:10:42.344740 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d2bb5ca7-7c5e-4a49-b2a3-7d5f506e0508-cni-bin-dir\") pod \"calico-node-z5zd6\" (UID: \"d2bb5ca7-7c5e-4a49-b2a3-7d5f506e0508\") " pod="calico-system/calico-node-z5zd6" Sep 13 00:10:42.345025 kubelet[2755]: I0913 00:10:42.344753 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d2bb5ca7-7c5e-4a49-b2a3-7d5f506e0508-cni-log-dir\") pod \"calico-node-z5zd6\" (UID: \"d2bb5ca7-7c5e-4a49-b2a3-7d5f506e0508\") " pod="calico-system/calico-node-z5zd6" Sep 13 00:10:42.345025 kubelet[2755]: I0913 00:10:42.344797 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2bb5ca7-7c5e-4a49-b2a3-7d5f506e0508-tigera-ca-bundle\") pod \"calico-node-z5zd6\" (UID: \"d2bb5ca7-7c5e-4a49-b2a3-7d5f506e0508\") " pod="calico-system/calico-node-z5zd6" Sep 13 00:10:42.345434 kubelet[2755]: I0913 00:10:42.344819 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d2bb5ca7-7c5e-4a49-b2a3-7d5f506e0508-var-run-calico\") pod \"calico-node-z5zd6\" (UID: \"d2bb5ca7-7c5e-4a49-b2a3-7d5f506e0508\") " pod="calico-system/calico-node-z5zd6" Sep 13 00:10:42.345434 kubelet[2755]: I0913 00:10:42.344832 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xksw8\" (UniqueName: \"kubernetes.io/projected/d2bb5ca7-7c5e-4a49-b2a3-7d5f506e0508-kube-api-access-xksw8\") pod \"calico-node-z5zd6\" (UID: \"d2bb5ca7-7c5e-4a49-b2a3-7d5f506e0508\") " pod="calico-system/calico-node-z5zd6" Sep 13 00:10:42.345434 kubelet[2755]: I0913 00:10:42.344844 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d2bb5ca7-7c5e-4a49-b2a3-7d5f506e0508-node-certs\") pod \"calico-node-z5zd6\" (UID: \"d2bb5ca7-7c5e-4a49-b2a3-7d5f506e0508\") " pod="calico-system/calico-node-z5zd6" Sep 13 00:10:42.345434 kubelet[2755]: I0913 00:10:42.344855 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2bb5ca7-7c5e-4a49-b2a3-7d5f506e0508-xtables-lock\") pod \"calico-node-z5zd6\" (UID: \"d2bb5ca7-7c5e-4a49-b2a3-7d5f506e0508\") " pod="calico-system/calico-node-z5zd6" Sep 13 
00:10:42.345434 kubelet[2755]: I0913 00:10:42.344867 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d2bb5ca7-7c5e-4a49-b2a3-7d5f506e0508-var-lib-calico\") pod \"calico-node-z5zd6\" (UID: \"d2bb5ca7-7c5e-4a49-b2a3-7d5f506e0508\") " pod="calico-system/calico-node-z5zd6" Sep 13 00:10:42.345532 kubelet[2755]: I0913 00:10:42.344879 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d2bb5ca7-7c5e-4a49-b2a3-7d5f506e0508-cni-net-dir\") pod \"calico-node-z5zd6\" (UID: \"d2bb5ca7-7c5e-4a49-b2a3-7d5f506e0508\") " pod="calico-system/calico-node-z5zd6" Sep 13 00:10:42.345532 kubelet[2755]: I0913 00:10:42.344890 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d2bb5ca7-7c5e-4a49-b2a3-7d5f506e0508-policysync\") pod \"calico-node-z5zd6\" (UID: \"d2bb5ca7-7c5e-4a49-b2a3-7d5f506e0508\") " pod="calico-system/calico-node-z5zd6" Sep 13 00:10:42.367090 containerd[1620]: time="2025-09-13T00:10:42.367045711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d7c849999-nk5nt,Uid:939f592a-3f8f-497e-8e2a-c3b0beeedeae,Namespace:calico-system,Attempt:0,} returns sandbox id \"6268e3a9308400a2ffce26305ede6751c60f9dda17270ccfd0a5d3cce3521d6d\"" Sep 13 00:10:42.369457 containerd[1620]: time="2025-09-13T00:10:42.369239947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 00:10:42.448664 kubelet[2755]: E0913 00:10:42.448387 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.448664 kubelet[2755]: W0913 00:10:42.448452 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.448664 kubelet[2755]: E0913 00:10:42.448554 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.450471 kubelet[2755]: E0913 00:10:42.450441 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.450566 kubelet[2755]: W0913 00:10:42.450479 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.450566 kubelet[2755]: E0913 00:10:42.450499 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:42.450710 kubelet[2755]: E0913 00:10:42.450697 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.450746 kubelet[2755]: W0913 00:10:42.450710 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.450816 kubelet[2755]: E0913 00:10:42.450779 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.450907 kubelet[2755]: E0913 00:10:42.450885 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.450907 kubelet[2755]: W0913 00:10:42.450902 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.451058 kubelet[2755]: E0913 00:10:42.451021 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.451623 kubelet[2755]: E0913 00:10:42.451611 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.451623 kubelet[2755]: W0913 00:10:42.451622 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.451724 kubelet[2755]: E0913 00:10:42.451638 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.452269 kubelet[2755]: E0913 00:10:42.452250 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.452269 kubelet[2755]: W0913 00:10:42.452263 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.452372 kubelet[2755]: E0913 00:10:42.452278 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.455029 kubelet[2755]: E0913 00:10:42.455008 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.455029 kubelet[2755]: W0913 00:10:42.455023 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.455130 kubelet[2755]: E0913 00:10:42.455115 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:42.456050 kubelet[2755]: E0913 00:10:42.455515 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.456050 kubelet[2755]: W0913 00:10:42.455526 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.456050 kubelet[2755]: E0913 00:10:42.455537 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.456830 kubelet[2755]: E0913 00:10:42.456623 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.456830 kubelet[2755]: W0913 00:10:42.456635 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.456830 kubelet[2755]: E0913 00:10:42.456647 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.458392 kubelet[2755]: E0913 00:10:42.458368 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.458392 kubelet[2755]: W0913 00:10:42.458385 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.458472 kubelet[2755]: E0913 00:10:42.458396 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.459724 kubelet[2755]: E0913 00:10:42.459700 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.459724 kubelet[2755]: W0913 00:10:42.459717 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.459724 kubelet[2755]: E0913 00:10:42.459727 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:42.538682 kubelet[2755]: E0913 00:10:42.538492 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2rrtz" podUID="f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2" Sep 13 00:10:42.541729 containerd[1620]: time="2025-09-13T00:10:42.541684115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z5zd6,Uid:d2bb5ca7-7c5e-4a49-b2a3-7d5f506e0508,Namespace:calico-system,Attempt:0,}" Sep 13 00:10:42.553050 kubelet[2755]: E0913 00:10:42.553018 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.554335 kubelet[2755]: W0913 00:10:42.553332 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.554335 kubelet[2755]: E0913 00:10:42.553360 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.555915 kubelet[2755]: E0913 00:10:42.554998 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.555915 kubelet[2755]: W0913 00:10:42.555026 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.555915 kubelet[2755]: E0913 00:10:42.555040 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.557398 kubelet[2755]: E0913 00:10:42.556443 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.557398 kubelet[2755]: W0913 00:10:42.556452 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.557398 kubelet[2755]: E0913 00:10:42.556465 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.557694 kubelet[2755]: E0913 00:10:42.557623 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.557694 kubelet[2755]: W0913 00:10:42.557638 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.557694 kubelet[2755]: E0913 00:10:42.557653 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:42.558153 kubelet[2755]: E0913 00:10:42.558067 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.558153 kubelet[2755]: W0913 00:10:42.558076 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.558153 kubelet[2755]: E0913 00:10:42.558086 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.558972 kubelet[2755]: E0913 00:10:42.558551 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.558972 kubelet[2755]: W0913 00:10:42.558561 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.558972 kubelet[2755]: E0913 00:10:42.558570 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.560339 kubelet[2755]: E0913 00:10:42.559403 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.560339 kubelet[2755]: W0913 00:10:42.559416 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.560339 kubelet[2755]: E0913 00:10:42.559428 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.560805 kubelet[2755]: E0913 00:10:42.560701 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.560805 kubelet[2755]: W0913 00:10:42.560711 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.560805 kubelet[2755]: E0913 00:10:42.560720 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.563546 kubelet[2755]: E0913 00:10:42.563247 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.563546 kubelet[2755]: W0913 00:10:42.563259 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.563546 kubelet[2755]: E0913 00:10:42.563270 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:42.565301 kubelet[2755]: E0913 00:10:42.565251 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.565301 kubelet[2755]: W0913 00:10:42.565262 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.565301 kubelet[2755]: E0913 00:10:42.565272 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.565884 kubelet[2755]: E0913 00:10:42.565612 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.565884 kubelet[2755]: W0913 00:10:42.565621 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.565884 kubelet[2755]: E0913 00:10:42.565629 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.566342 kubelet[2755]: E0913 00:10:42.566024 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.566342 kubelet[2755]: W0913 00:10:42.566034 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.566342 kubelet[2755]: E0913 00:10:42.566042 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.567478 kubelet[2755]: E0913 00:10:42.567460 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.567478 kubelet[2755]: W0913 00:10:42.567475 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.567615 kubelet[2755]: E0913 00:10:42.567484 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.567615 kubelet[2755]: E0913 00:10:42.567600 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.567615 kubelet[2755]: W0913 00:10:42.567606 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.567615 kubelet[2755]: E0913 00:10:42.567613 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:42.568187 kubelet[2755]: E0913 00:10:42.567761 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.568187 kubelet[2755]: W0913 00:10:42.567768 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.568187 kubelet[2755]: E0913 00:10:42.567774 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.568187 kubelet[2755]: E0913 00:10:42.567883 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.568187 kubelet[2755]: W0913 00:10:42.567890 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.568187 kubelet[2755]: E0913 00:10:42.567896 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.568187 kubelet[2755]: E0913 00:10:42.568018 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.568187 kubelet[2755]: W0913 00:10:42.568025 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.568187 kubelet[2755]: E0913 00:10:42.568031 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.568187 kubelet[2755]: E0913 00:10:42.568134 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.569110 kubelet[2755]: W0913 00:10:42.568140 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.569110 kubelet[2755]: E0913 00:10:42.568146 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.569110 kubelet[2755]: E0913 00:10:42.568249 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.569110 kubelet[2755]: W0913 00:10:42.568255 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.569110 kubelet[2755]: E0913 00:10:42.568261 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:42.569110 kubelet[2755]: E0913 00:10:42.568399 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.569110 kubelet[2755]: W0913 00:10:42.568405 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.569110 kubelet[2755]: E0913 00:10:42.568411 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.581585 containerd[1620]: time="2025-09-13T00:10:42.581346211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:42.581970 containerd[1620]: time="2025-09-13T00:10:42.581944124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:42.582283 containerd[1620]: time="2025-09-13T00:10:42.582186878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:42.582519 containerd[1620]: time="2025-09-13T00:10:42.582496299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:42.638084 containerd[1620]: time="2025-09-13T00:10:42.637981891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z5zd6,Uid:d2bb5ca7-7c5e-4a49-b2a3-7d5f506e0508,Namespace:calico-system,Attempt:0,} returns sandbox id \"e6f75dfb9d103e6421d477dbe944ed844620b9328e5203de3010f2d3c389ef8c\"" Sep 13 00:10:42.649787 kubelet[2755]: E0913 00:10:42.648956 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.649787 kubelet[2755]: W0913 00:10:42.648986 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.649787 kubelet[2755]: E0913 00:10:42.649009 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:42.649787 kubelet[2755]: I0913 00:10:42.649038 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxdxb\" (UniqueName: \"kubernetes.io/projected/f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2-kube-api-access-fxdxb\") pod \"csi-node-driver-2rrtz\" (UID: \"f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2\") " pod="calico-system/csi-node-driver-2rrtz" Sep 13 00:10:42.651731 kubelet[2755]: E0913 00:10:42.651037 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.651731 kubelet[2755]: W0913 00:10:42.651051 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.653518 kubelet[2755]: E0913 00:10:42.653104 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.653518 kubelet[2755]: W0913 00:10:42.653116 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.653518 kubelet[2755]: E0913 00:10:42.653130 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.655345 kubelet[2755]: E0913 00:10:42.654123 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.655345 kubelet[2755]: I0913 00:10:42.654153 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2-registration-dir\") pod \"csi-node-driver-2rrtz\" (UID: \"f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2\") " pod="calico-system/csi-node-driver-2rrtz" Sep 13 00:10:42.656149 kubelet[2755]: E0913 00:10:42.655985 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.656149 kubelet[2755]: W0913 00:10:42.655995 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.656149 kubelet[2755]: E0913 00:10:42.656017 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.659119 kubelet[2755]: E0913 00:10:42.658287 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.659119 kubelet[2755]: W0913 00:10:42.658323 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.660481 kubelet[2755]: E0913 00:10:42.660126 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:42.660481 kubelet[2755]: I0913 00:10:42.660158 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2-socket-dir\") pod \"csi-node-driver-2rrtz\" (UID: \"f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2\") " pod="calico-system/csi-node-driver-2rrtz" Sep 13 00:10:42.664492 kubelet[2755]: E0913 00:10:42.663491 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.664492 kubelet[2755]: W0913 00:10:42.663503 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.667880 kubelet[2755]: E0913 00:10:42.667777 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.668229 kubelet[2755]: W0913 00:10:42.668147 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.668984 kubelet[2755]: E0913 00:10:42.668517 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.669477 kubelet[2755]: E0913 00:10:42.669341 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.670414 kubelet[2755]: I0913 00:10:42.670209 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2-varrun\") pod \"csi-node-driver-2rrtz\" (UID: \"f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2\") " pod="calico-system/csi-node-driver-2rrtz" Sep 13 00:10:42.673752 kubelet[2755]: E0913 00:10:42.673385 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.673752 kubelet[2755]: W0913 00:10:42.673396 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.673752 kubelet[2755]: E0913 00:10:42.673408 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.676650 kubelet[2755]: E0913 00:10:42.676559 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.676650 kubelet[2755]: W0913 00:10:42.676570 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.676650 kubelet[2755]: E0913 00:10:42.676586 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:42.676878 kubelet[2755]: E0913 00:10:42.676850 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.676878 kubelet[2755]: W0913 00:10:42.676859 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.677106 kubelet[2755]: E0913 00:10:42.677022 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.677198 kubelet[2755]: E0913 00:10:42.677178 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.677198 kubelet[2755]: W0913 00:10:42.677186 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.677368 kubelet[2755]: E0913 00:10:42.677327 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.677571 kubelet[2755]: E0913 00:10:42.677484 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.677571 kubelet[2755]: W0913 00:10:42.677492 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.677571 kubelet[2755]: E0913 00:10:42.677513 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.677571 kubelet[2755]: I0913 00:10:42.677533 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2-kubelet-dir\") pod \"csi-node-driver-2rrtz\" (UID: \"f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2\") " pod="calico-system/csi-node-driver-2rrtz" Sep 13 00:10:42.677839 kubelet[2755]: E0913 00:10:42.677765 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.677839 kubelet[2755]: W0913 00:10:42.677773 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.677839 kubelet[2755]: E0913 00:10:42.677782 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:42.678084 kubelet[2755]: E0913 00:10:42.678022 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.678084 kubelet[2755]: W0913 00:10:42.678030 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.678084 kubelet[2755]: E0913 00:10:42.678037 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.678263 kubelet[2755]: E0913 00:10:42.678234 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.678263 kubelet[2755]: W0913 00:10:42.678243 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.678263 kubelet[2755]: E0913 00:10:42.678250 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.779143 kubelet[2755]: E0913 00:10:42.778990 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.779143 kubelet[2755]: W0913 00:10:42.779009 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.779143 kubelet[2755]: E0913 00:10:42.779029 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.779612 kubelet[2755]: E0913 00:10:42.779441 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.779612 kubelet[2755]: W0913 00:10:42.779454 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.779612 kubelet[2755]: E0913 00:10:42.779467 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.779872 kubelet[2755]: E0913 00:10:42.779752 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.779872 kubelet[2755]: W0913 00:10:42.779765 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.779872 kubelet[2755]: E0913 00:10:42.779776 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:42.780506 kubelet[2755]: E0913 00:10:42.780134 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.780506 kubelet[2755]: W0913 00:10:42.780144 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.780506 kubelet[2755]: E0913 00:10:42.780160 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.780506 kubelet[2755]: E0913 00:10:42.780404 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.780506 kubelet[2755]: W0913 00:10:42.780422 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.780506 kubelet[2755]: E0913 00:10:42.780440 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.780800 kubelet[2755]: E0913 00:10:42.780786 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.780800 kubelet[2755]: W0913 00:10:42.780798 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.780883 kubelet[2755]: E0913 00:10:42.780819 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.781046 kubelet[2755]: E0913 00:10:42.781029 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.781046 kubelet[2755]: W0913 00:10:42.781045 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.781190 kubelet[2755]: E0913 00:10:42.781136 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.781235 kubelet[2755]: E0913 00:10:42.781205 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.781235 kubelet[2755]: W0913 00:10:42.781212 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.781337 kubelet[2755]: E0913 00:10:42.781329 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:42.781491 kubelet[2755]: E0913 00:10:42.781477 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.781538 kubelet[2755]: W0913 00:10:42.781492 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.781641 kubelet[2755]: E0913 00:10:42.781589 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.781684 kubelet[2755]: E0913 00:10:42.781669 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.781684 kubelet[2755]: W0913 00:10:42.781677 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.781750 kubelet[2755]: E0913 00:10:42.781743 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.781893 kubelet[2755]: E0913 00:10:42.781875 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.781893 kubelet[2755]: W0913 00:10:42.781887 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.782023 kubelet[2755]: E0913 00:10:42.781971 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.782066 kubelet[2755]: E0913 00:10:42.782047 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.782066 kubelet[2755]: W0913 00:10:42.782055 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.782140 kubelet[2755]: E0913 00:10:42.782066 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.782358 kubelet[2755]: E0913 00:10:42.782343 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.782358 kubelet[2755]: W0913 00:10:42.782354 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.782515 kubelet[2755]: E0913 00:10:42.782363 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:42.782560 kubelet[2755]: E0913 00:10:42.782533 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.782560 kubelet[2755]: W0913 00:10:42.782542 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.782560 kubelet[2755]: E0913 00:10:42.782553 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.782767 kubelet[2755]: E0913 00:10:42.782753 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.782767 kubelet[2755]: W0913 00:10:42.782764 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.782922 kubelet[2755]: E0913 00:10:42.782831 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.782922 kubelet[2755]: E0913 00:10:42.782916 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.782989 kubelet[2755]: W0913 00:10:42.782923 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.783020 kubelet[2755]: E0913 00:10:42.782990 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.783155 kubelet[2755]: E0913 00:10:42.783144 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.783155 kubelet[2755]: W0913 00:10:42.783153 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.783255 kubelet[2755]: E0913 00:10:42.783241 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.783454 kubelet[2755]: E0913 00:10:42.783430 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.783454 kubelet[2755]: W0913 00:10:42.783444 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.783599 kubelet[2755]: E0913 00:10:42.783538 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:42.783646 kubelet[2755]: E0913 00:10:42.783638 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.783678 kubelet[2755]: W0913 00:10:42.783646 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.783678 kubelet[2755]: E0913 00:10:42.783657 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.784892 kubelet[2755]: E0913 00:10:42.784869 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.784892 kubelet[2755]: W0913 00:10:42.784885 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.784978 kubelet[2755]: E0913 00:10:42.784907 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.785217 kubelet[2755]: E0913 00:10:42.785142 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.785217 kubelet[2755]: W0913 00:10:42.785156 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.785492 kubelet[2755]: E0913 00:10:42.785462 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.785730 kubelet[2755]: E0913 00:10:42.785706 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.785730 kubelet[2755]: W0913 00:10:42.785721 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.785842 kubelet[2755]: E0913 00:10:42.785811 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.786289 kubelet[2755]: E0913 00:10:42.786271 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.786289 kubelet[2755]: W0913 00:10:42.786283 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.788461 kubelet[2755]: E0913 00:10:42.788395 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:42.788837 kubelet[2755]: E0913 00:10:42.788805 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.788837 kubelet[2755]: W0913 00:10:42.788816 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.788837 kubelet[2755]: E0913 00:10:42.788827 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.789971 kubelet[2755]: E0913 00:10:42.789940 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.789971 kubelet[2755]: W0913 00:10:42.789959 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.789971 kubelet[2755]: E0913 00:10:42.789969 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:42.793572 kubelet[2755]: E0913 00:10:42.793482 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:42.793572 kubelet[2755]: W0913 00:10:42.793513 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:42.793572 kubelet[2755]: E0913 00:10:42.793526 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:43.155826 systemd[1]: run-containerd-runc-k8s.io-6268e3a9308400a2ffce26305ede6751c60f9dda17270ccfd0a5d3cce3521d6d-runc.aBG8VS.mount: Deactivated successfully. Sep 13 00:10:44.127293 kubelet[2755]: E0913 00:10:44.127241 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2rrtz" podUID="f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2" Sep 13 00:10:44.203940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3037359637.mount: Deactivated successfully. 
Sep 13 00:10:45.312103 containerd[1620]: time="2025-09-13T00:10:45.312050821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:45.313965 containerd[1620]: time="2025-09-13T00:10:45.313920177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 13 00:10:45.318111 containerd[1620]: time="2025-09-13T00:10:45.318075352Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:45.320161 containerd[1620]: time="2025-09-13T00:10:45.320124215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:45.320595 containerd[1620]: time="2025-09-13T00:10:45.320562006Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.951277365s" Sep 13 00:10:45.320642 containerd[1620]: time="2025-09-13T00:10:45.320597733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 13 00:10:45.324167 containerd[1620]: time="2025-09-13T00:10:45.323127138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 13 00:10:45.334104 containerd[1620]: time="2025-09-13T00:10:45.334084702Z" level=info msg="CreateContainer within sandbox \"6268e3a9308400a2ffce26305ede6751c60f9dda17270ccfd0a5d3cce3521d6d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 00:10:45.348931 containerd[1620]: time="2025-09-13T00:10:45.348891877Z" level=info msg="CreateContainer within sandbox \"6268e3a9308400a2ffce26305ede6751c60f9dda17270ccfd0a5d3cce3521d6d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a4f44e2c44afc1880da0018f01bd01e791cb351444991059d0395ac5736f0adb\"" Sep 13 00:10:45.349826 containerd[1620]: time="2025-09-13T00:10:45.349743795Z" level=info msg="StartContainer for \"a4f44e2c44afc1880da0018f01bd01e791cb351444991059d0395ac5736f0adb\"" Sep 13 00:10:45.416405 containerd[1620]: time="2025-09-13T00:10:45.416251007Z" level=info msg="StartContainer for \"a4f44e2c44afc1880da0018f01bd01e791cb351444991059d0395ac5736f0adb\" returns successfully" Sep 13 00:10:46.126970 kubelet[2755]: E0913 00:10:46.126909 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2rrtz" podUID="f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2" Sep 13 00:10:46.298135 kubelet[2755]: E0913 00:10:46.298095 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.298135 kubelet[2755]: W0913 00:10:46.298124 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.298293 kubelet[2755]: E0913 00:10:46.298153 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:46.298476 kubelet[2755]: E0913 00:10:46.298451 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.298476 kubelet[2755]: W0913 00:10:46.298468 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.299133 kubelet[2755]: E0913 00:10:46.298482 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:46.299133 kubelet[2755]: E0913 00:10:46.298643 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.299133 kubelet[2755]: W0913 00:10:46.298651 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.299133 kubelet[2755]: E0913 00:10:46.298661 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:46.299133 kubelet[2755]: E0913 00:10:46.298824 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.299133 kubelet[2755]: W0913 00:10:46.298835 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.299133 kubelet[2755]: E0913 00:10:46.298848 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:46.299133 kubelet[2755]: E0913 00:10:46.299060 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.299133 kubelet[2755]: W0913 00:10:46.299069 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.299133 kubelet[2755]: E0913 00:10:46.299079 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:46.299375 kubelet[2755]: E0913 00:10:46.299334 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.299375 kubelet[2755]: W0913 00:10:46.299344 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.299375 kubelet[2755]: E0913 00:10:46.299356 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:46.299921 kubelet[2755]: E0913 00:10:46.299561 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.299921 kubelet[2755]: W0913 00:10:46.299572 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.299921 kubelet[2755]: E0913 00:10:46.299597 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:46.299921 kubelet[2755]: E0913 00:10:46.299814 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.299921 kubelet[2755]: W0913 00:10:46.299842 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.299921 kubelet[2755]: E0913 00:10:46.299851 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:46.300104 kubelet[2755]: E0913 00:10:46.300064 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.300104 kubelet[2755]: W0913 00:10:46.300073 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.300104 kubelet[2755]: E0913 00:10:46.300081 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:46.300334 kubelet[2755]: E0913 00:10:46.300296 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.300438 kubelet[2755]: W0913 00:10:46.300403 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.300438 kubelet[2755]: E0913 00:10:46.300423 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:46.300639 kubelet[2755]: E0913 00:10:46.300618 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.300639 kubelet[2755]: W0913 00:10:46.300632 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.300639 kubelet[2755]: E0913 00:10:46.300641 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:46.300827 kubelet[2755]: E0913 00:10:46.300804 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.300827 kubelet[2755]: W0913 00:10:46.300818 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.300827 kubelet[2755]: E0913 00:10:46.300827 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:46.301033 kubelet[2755]: E0913 00:10:46.301021 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.301033 kubelet[2755]: W0913 00:10:46.301032 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.301119 kubelet[2755]: E0913 00:10:46.301040 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:46.301266 kubelet[2755]: E0913 00:10:46.301236 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.301266 kubelet[2755]: W0913 00:10:46.301260 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.301385 kubelet[2755]: E0913 00:10:46.301272 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:46.301516 kubelet[2755]: E0913 00:10:46.301491 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.301516 kubelet[2755]: W0913 00:10:46.301505 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.301516 kubelet[2755]: E0913 00:10:46.301515 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:46.303812 kubelet[2755]: E0913 00:10:46.303788 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.303812 kubelet[2755]: W0913 00:10:46.303803 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.303901 kubelet[2755]: E0913 00:10:46.303813 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:46.304064 kubelet[2755]: E0913 00:10:46.304037 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.304064 kubelet[2755]: W0913 00:10:46.304053 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.304159 kubelet[2755]: E0913 00:10:46.304070 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:46.304271 kubelet[2755]: E0913 00:10:46.304253 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.304271 kubelet[2755]: W0913 00:10:46.304266 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.304412 kubelet[2755]: E0913 00:10:46.304280 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:46.304569 kubelet[2755]: E0913 00:10:46.304545 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.304569 kubelet[2755]: W0913 00:10:46.304561 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.304650 kubelet[2755]: E0913 00:10:46.304576 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:46.304796 kubelet[2755]: E0913 00:10:46.304774 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.304796 kubelet[2755]: W0913 00:10:46.304790 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.304912 kubelet[2755]: E0913 00:10:46.304805 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:46.304994 kubelet[2755]: E0913 00:10:46.304973 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.304994 kubelet[2755]: W0913 00:10:46.304986 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.305133 kubelet[2755]: E0913 00:10:46.305002 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:46.305213 kubelet[2755]: E0913 00:10:46.305190 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.305213 kubelet[2755]: W0913 00:10:46.305206 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.305335 kubelet[2755]: E0913 00:10:46.305220 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:46.305437 kubelet[2755]: E0913 00:10:46.305413 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.305437 kubelet[2755]: W0913 00:10:46.305426 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.305437 kubelet[2755]: E0913 00:10:46.305435 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:46.306050 kubelet[2755]: E0913 00:10:46.305606 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.306050 kubelet[2755]: W0913 00:10:46.305616 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.306050 kubelet[2755]: E0913 00:10:46.305632 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:46.306050 kubelet[2755]: E0913 00:10:46.305824 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.306050 kubelet[2755]: W0913 00:10:46.305832 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.306050 kubelet[2755]: E0913 00:10:46.305846 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:46.306262 kubelet[2755]: E0913 00:10:46.306171 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.306262 kubelet[2755]: W0913 00:10:46.306179 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.306262 kubelet[2755]: E0913 00:10:46.306200 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:46.306412 kubelet[2755]: E0913 00:10:46.306406 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.306445 kubelet[2755]: W0913 00:10:46.306414 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.306445 kubelet[2755]: E0913 00:10:46.306434 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:46.306648 kubelet[2755]: E0913 00:10:46.306626 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.306648 kubelet[2755]: W0913 00:10:46.306641 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.306758 kubelet[2755]: E0913 00:10:46.306652 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:46.307126 kubelet[2755]: E0913 00:10:46.307106 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.307126 kubelet[2755]: W0913 00:10:46.307120 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.307208 kubelet[2755]: E0913 00:10:46.307186 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:46.307366 kubelet[2755]: E0913 00:10:46.307345 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.307366 kubelet[2755]: W0913 00:10:46.307360 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.307443 kubelet[2755]: E0913 00:10:46.307371 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:46.307557 kubelet[2755]: E0913 00:10:46.307533 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.307557 kubelet[2755]: W0913 00:10:46.307549 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.307557 kubelet[2755]: E0913 00:10:46.307559 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:46.307757 kubelet[2755]: E0913 00:10:46.307736 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.307757 kubelet[2755]: W0913 00:10:46.307751 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.307829 kubelet[2755]: E0913 00:10:46.307762 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:46.308078 kubelet[2755]: E0913 00:10:46.308059 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:46.308078 kubelet[2755]: W0913 00:10:46.308073 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:46.308139 kubelet[2755]: E0913 00:10:46.308083 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:47.127693 containerd[1620]: time="2025-09-13T00:10:47.127629673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:47.128885 containerd[1620]: time="2025-09-13T00:10:47.128836256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 13 00:10:47.129220 containerd[1620]: time="2025-09-13T00:10:47.129200579Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:47.131121 containerd[1620]: time="2025-09-13T00:10:47.131071849Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:47.131701 containerd[1620]: time="2025-09-13T00:10:47.131665363Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.808494243s" Sep 13 00:10:47.131804 containerd[1620]: time="2025-09-13T00:10:47.131785628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 00:10:47.134901 containerd[1620]: time="2025-09-13T00:10:47.134866346Z" level=info msg="CreateContainer within sandbox \"e6f75dfb9d103e6421d477dbe944ed844620b9328e5203de3010f2d3c389ef8c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:10:47.157124 containerd[1620]: time="2025-09-13T00:10:47.157057800Z" level=info msg="CreateContainer within sandbox \"e6f75dfb9d103e6421d477dbe944ed844620b9328e5203de3010f2d3c389ef8c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"866cff49b1e5a29f7903a5f8b70ab7c4d7a9aa1e2535af96543f84c4a4b2978f\"" Sep 13 00:10:47.158375 containerd[1620]: time="2025-09-13T00:10:47.158296202Z" level=info msg="StartContainer for \"866cff49b1e5a29f7903a5f8b70ab7c4d7a9aa1e2535af96543f84c4a4b2978f\"" Sep 13 00:10:47.228377 containerd[1620]: time="2025-09-13T00:10:47.227225239Z" level=info msg="StartContainer for \"866cff49b1e5a29f7903a5f8b70ab7c4d7a9aa1e2535af96543f84c4a4b2978f\" returns successfully" Sep 13 00:10:47.242800 kubelet[2755]: I0913 00:10:47.242482 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:10:47.262999 kubelet[2755]: I0913 00:10:47.262792 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d7c849999-nk5nt" podStartSLOduration=3.309996046 podStartE2EDuration="6.26277614s" podCreationTimestamp="2025-09-13 00:10:41 +0000 UTC" firstStartedPulling="2025-09-13 00:10:42.368704543 +0000 UTC m=+17.360813859" lastFinishedPulling="2025-09-13 00:10:45.321484637 +0000 UTC m=+20.313593953" observedRunningTime="2025-09-13 00:10:46.248454421 +0000 UTC m=+21.240563747" watchObservedRunningTime="2025-09-13 00:10:47.26277614 +0000 UTC m=+22.254885466" Sep 13 00:10:47.273061 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-866cff49b1e5a29f7903a5f8b70ab7c4d7a9aa1e2535af96543f84c4a4b2978f-rootfs.mount: Deactivated successfully. Sep 13 00:10:47.289829 containerd[1620]: time="2025-09-13T00:10:47.284763230Z" level=info msg="shim disconnected" id=866cff49b1e5a29f7903a5f8b70ab7c4d7a9aa1e2535af96543f84c4a4b2978f namespace=k8s.io Sep 13 00:10:47.289829 containerd[1620]: time="2025-09-13T00:10:47.289764541Z" level=warning msg="cleaning up after shim disconnected" id=866cff49b1e5a29f7903a5f8b70ab7c4d7a9aa1e2535af96543f84c4a4b2978f namespace=k8s.io Sep 13 00:10:47.289829 containerd[1620]: time="2025-09-13T00:10:47.289783386Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:10:48.127059 kubelet[2755]: E0913 00:10:48.127008 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2rrtz" podUID="f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2" Sep 13 00:10:48.257169 containerd[1620]: time="2025-09-13T00:10:48.255520414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:10:50.126741 kubelet[2755]: E0913 00:10:50.126686 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2rrtz" podUID="f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2" Sep 13 00:10:52.126487 kubelet[2755]: E0913 00:10:52.126288 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2rrtz" podUID="f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2" Sep 13 00:10:52.344525 containerd[1620]: time="2025-09-13T00:10:52.344469438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:52.345769 containerd[1620]: time="2025-09-13T00:10:52.345623953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 13 00:10:52.346571 containerd[1620]: time="2025-09-13T00:10:52.346524582Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:52.348498 containerd[1620]: time="2025-09-13T00:10:52.348158586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:52.349060 containerd[1620]: time="2025-09-13T00:10:52.349029770Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.093463861s" Sep 13 00:10:52.349105 containerd[1620]: time="2025-09-13T00:10:52.349059306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference 
\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 13 00:10:52.352412 containerd[1620]: time="2025-09-13T00:10:52.352382257Z" level=info msg="CreateContainer within sandbox \"e6f75dfb9d103e6421d477dbe944ed844620b9328e5203de3010f2d3c389ef8c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:10:52.372553 containerd[1620]: time="2025-09-13T00:10:52.372496129Z" level=info msg="CreateContainer within sandbox \"e6f75dfb9d103e6421d477dbe944ed844620b9328e5203de3010f2d3c389ef8c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f6d4ea41194c8a89f85b81cb74f16fbce7ba11dd28a44852ebe6dd380d73acc4\"" Sep 13 00:10:52.373461 containerd[1620]: time="2025-09-13T00:10:52.373425261Z" level=info msg="StartContainer for \"f6d4ea41194c8a89f85b81cb74f16fbce7ba11dd28a44852ebe6dd380d73acc4\"" Sep 13 00:10:52.428102 containerd[1620]: time="2025-09-13T00:10:52.428001893Z" level=info msg="StartContainer for \"f6d4ea41194c8a89f85b81cb74f16fbce7ba11dd28a44852ebe6dd380d73acc4\" returns successfully" Sep 13 00:10:52.874652 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6d4ea41194c8a89f85b81cb74f16fbce7ba11dd28a44852ebe6dd380d73acc4-rootfs.mount: Deactivated successfully. Sep 13 00:10:52.879359 containerd[1620]: time="2025-09-13T00:10:52.879252682Z" level=info msg="shim disconnected" id=f6d4ea41194c8a89f85b81cb74f16fbce7ba11dd28a44852ebe6dd380d73acc4 namespace=k8s.io Sep 13 00:10:52.879483 containerd[1620]: time="2025-09-13T00:10:52.879356587Z" level=warning msg="cleaning up after shim disconnected" id=f6d4ea41194c8a89f85b81cb74f16fbce7ba11dd28a44852ebe6dd380d73acc4 namespace=k8s.io Sep 13 00:10:52.879483 containerd[1620]: time="2025-09-13T00:10:52.879373799Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:10:52.893614 containerd[1620]: time="2025-09-13T00:10:52.893579951Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:10:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 13 00:10:52.895567 kubelet[2755]: I0913 00:10:52.895511 2755 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:10:53.057078 kubelet[2755]: I0913 00:10:53.057011 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6fb491aa-6159-468a-adaa-a7eaa205158b-whisker-backend-key-pair\") pod \"whisker-7f8489888f-4s4v8\" (UID: \"6fb491aa-6159-468a-adaa-a7eaa205158b\") " pod="calico-system/whisker-7f8489888f-4s4v8" Sep 13 00:10:53.057078 kubelet[2755]: I0913 00:10:53.057059 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea30c4ae-16d9-4c82-be5e-0de5a9b6d8dc-config-volume\") pod \"coredns-7c65d6cfc9-2m6jh\" (UID: \"ea30c4ae-16d9-4c82-be5e-0de5a9b6d8dc\") " pod="kube-system/coredns-7c65d6cfc9-2m6jh" Sep 13 00:10:53.057078 kubelet[2755]: I0913 00:10:53.057080 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/33f7161a-ca41-4c6b-95d5-d5f552f3a553-goldmane-key-pair\") pod \"goldmane-7988f88666-q6rw4\" (UID: \"33f7161a-ca41-4c6b-95d5-d5f552f3a553\") " pod="calico-system/goldmane-7988f88666-q6rw4" Sep 13 00:10:53.057078 kubelet[2755]: I0913 00:10:53.057097 2755 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48c7b\" (UniqueName: \"kubernetes.io/projected/1b932375-cdcd-4a82-b528-b3a99b684eeb-kube-api-access-48c7b\") pod \"calico-apiserver-bcbcd6df9-cwbjt\" (UID: \"1b932375-cdcd-4a82-b528-b3a99b684eeb\") " pod="calico-apiserver/calico-apiserver-bcbcd6df9-cwbjt" Sep 13 00:10:53.058978 kubelet[2755]: I0913 00:10:53.057115 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnsct\" (UniqueName: \"kubernetes.io/projected/398f713a-e38d-4416-8b6a-bb19b2e75262-kube-api-access-cnsct\") pod \"coredns-7c65d6cfc9-9jlsq\" (UID: \"398f713a-e38d-4416-8b6a-bb19b2e75262\") " pod="kube-system/coredns-7c65d6cfc9-9jlsq" Sep 13 00:10:53.058978 kubelet[2755]: I0913 00:10:53.057131 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33f7161a-ca41-4c6b-95d5-d5f552f3a553-config\") pod \"goldmane-7988f88666-q6rw4\" (UID: \"33f7161a-ca41-4c6b-95d5-d5f552f3a553\") " pod="calico-system/goldmane-7988f88666-q6rw4" Sep 13 00:10:53.058978 kubelet[2755]: I0913 00:10:53.057148 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w9fb\" (UniqueName: \"kubernetes.io/projected/e1724762-f3a5-4a7f-9c75-353a81a041e5-kube-api-access-7w9fb\") pod \"calico-apiserver-bcbcd6df9-pltxz\" (UID: \"e1724762-f3a5-4a7f-9c75-353a81a041e5\") " pod="calico-apiserver/calico-apiserver-bcbcd6df9-pltxz" Sep 13 00:10:53.058978 kubelet[2755]: I0913 00:10:53.057169 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqtl4\" (UniqueName: \"kubernetes.io/projected/ea30c4ae-16d9-4c82-be5e-0de5a9b6d8dc-kube-api-access-kqtl4\") pod \"coredns-7c65d6cfc9-2m6jh\" (UID: \"ea30c4ae-16d9-4c82-be5e-0de5a9b6d8dc\") " pod="kube-system/coredns-7c65d6cfc9-2m6jh" Sep 13 00:10:53.058978 kubelet[2755]: I0913 00:10:53.057195 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e1724762-f3a5-4a7f-9c75-353a81a041e5-calico-apiserver-certs\") pod \"calico-apiserver-bcbcd6df9-pltxz\" (UID: \"e1724762-f3a5-4a7f-9c75-353a81a041e5\") " pod="calico-apiserver/calico-apiserver-bcbcd6df9-pltxz" Sep 13 00:10:53.059427 kubelet[2755]: I0913 00:10:53.057216 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31e963cb-458c-437d-b290-9bfcbbbfa753-tigera-ca-bundle\") pod \"calico-kube-controllers-6dc566c86b-xkhhq\" (UID: \"31e963cb-458c-437d-b290-9bfcbbbfa753\") " pod="calico-system/calico-kube-controllers-6dc566c86b-xkhhq" Sep 13 00:10:53.059427 kubelet[2755]: I0913 00:10:53.057229 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vb7b\" (UniqueName: \"kubernetes.io/projected/6fb491aa-6159-468a-adaa-a7eaa205158b-kube-api-access-7vb7b\") pod \"whisker-7f8489888f-4s4v8\" (UID: \"6fb491aa-6159-468a-adaa-a7eaa205158b\") " pod="calico-system/whisker-7f8489888f-4s4v8" Sep 13 00:10:53.059427 kubelet[2755]: I0913 00:10:53.057244 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/cc241816-fb81-4b70-a8db-a4aa35a35261-calico-apiserver-certs\") pod \"calico-apiserver-748c7ccd65-nl8pm\" (UID: \"cc241816-fb81-4b70-a8db-a4aa35a35261\") " pod="calico-apiserver/calico-apiserver-748c7ccd65-nl8pm" Sep 13 00:10:53.059427 kubelet[2755]: I0913 00:10:53.057259 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqmfv\" (UniqueName: \"kubernetes.io/projected/cc241816-fb81-4b70-a8db-a4aa35a35261-kube-api-access-jqmfv\") pod \"calico-apiserver-748c7ccd65-nl8pm\" (UID: \"cc241816-fb81-4b70-a8db-a4aa35a35261\") " pod="calico-apiserver/calico-apiserver-748c7ccd65-nl8pm" Sep 13 00:10:53.059427 kubelet[2755]: I0913 00:10:53.057357 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6lpm\" (UniqueName: \"kubernetes.io/projected/31e963cb-458c-437d-b290-9bfcbbbfa753-kube-api-access-r6lpm\") pod \"calico-kube-controllers-6dc566c86b-xkhhq\" (UID: \"31e963cb-458c-437d-b290-9bfcbbbfa753\") " pod="calico-system/calico-kube-controllers-6dc566c86b-xkhhq" Sep 13 00:10:53.059534 kubelet[2755]: I0913 00:10:53.057418 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fb491aa-6159-468a-adaa-a7eaa205158b-whisker-ca-bundle\") pod \"whisker-7f8489888f-4s4v8\" (UID: \"6fb491aa-6159-468a-adaa-a7eaa205158b\") " pod="calico-system/whisker-7f8489888f-4s4v8" Sep 13 00:10:53.059534 kubelet[2755]: I0913 00:10:53.057441 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33f7161a-ca41-4c6b-95d5-d5f552f3a553-goldmane-ca-bundle\") pod \"goldmane-7988f88666-q6rw4\" (UID: \"33f7161a-ca41-4c6b-95d5-d5f552f3a553\") " pod="calico-system/goldmane-7988f88666-q6rw4" Sep 13 00:10:53.059534 kubelet[2755]: I0913 00:10:53.057635 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66cqp\" (UniqueName: \"kubernetes.io/projected/33f7161a-ca41-4c6b-95d5-d5f552f3a553-kube-api-access-66cqp\") pod \"goldmane-7988f88666-q6rw4\" (UID: \"33f7161a-ca41-4c6b-95d5-d5f552f3a553\") " pod="calico-system/goldmane-7988f88666-q6rw4" Sep 13 00:10:53.059534 kubelet[2755]: I0913 00:10:53.057659 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1b932375-cdcd-4a82-b528-b3a99b684eeb-calico-apiserver-certs\") pod \"calico-apiserver-bcbcd6df9-cwbjt\" (UID: \"1b932375-cdcd-4a82-b528-b3a99b684eeb\") " pod="calico-apiserver/calico-apiserver-bcbcd6df9-cwbjt" Sep 13 00:10:53.059534 kubelet[2755]: I0913 00:10:53.057683 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/398f713a-e38d-4416-8b6a-bb19b2e75262-config-volume\") pod \"coredns-7c65d6cfc9-9jlsq\" (UID: \"398f713a-e38d-4416-8b6a-bb19b2e75262\") " pod="kube-system/coredns-7c65d6cfc9-9jlsq" Sep 13 00:10:53.235443 containerd[1620]: time="2025-09-13T00:10:53.235294056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9jlsq,Uid:398f713a-e38d-4416-8b6a-bb19b2e75262,Namespace:kube-system,Attempt:0,}" Sep 13 00:10:53.250751 containerd[1620]: time="2025-09-13T00:10:53.250686232Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7988f88666-q6rw4,Uid:33f7161a-ca41-4c6b-95d5-d5f552f3a553,Namespace:calico-system,Attempt:0,}" Sep 13 00:10:53.261766 containerd[1620]: time="2025-09-13T00:10:53.261552059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748c7ccd65-nl8pm,Uid:cc241816-fb81-4b70-a8db-a4aa35a35261,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:10:53.264487 containerd[1620]: time="2025-09-13T00:10:53.264463589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bcbcd6df9-pltxz,Uid:e1724762-f3a5-4a7f-9c75-353a81a041e5,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:10:53.267202 containerd[1620]: time="2025-09-13T00:10:53.267154144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bcbcd6df9-cwbjt,Uid:1b932375-cdcd-4a82-b528-b3a99b684eeb,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:10:53.293831 containerd[1620]: time="2025-09-13T00:10:53.293283355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2m6jh,Uid:ea30c4ae-16d9-4c82-be5e-0de5a9b6d8dc,Namespace:kube-system,Attempt:0,}" Sep 13 00:10:53.294775 containerd[1620]: time="2025-09-13T00:10:53.294598212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f8489888f-4s4v8,Uid:6fb491aa-6159-468a-adaa-a7eaa205158b,Namespace:calico-system,Attempt:0,}" Sep 13 00:10:53.294775 containerd[1620]: time="2025-09-13T00:10:53.294596878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dc566c86b-xkhhq,Uid:31e963cb-458c-437d-b290-9bfcbbbfa753,Namespace:calico-system,Attempt:0,}" Sep 13 00:10:53.340347 containerd[1620]: time="2025-09-13T00:10:53.339547326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:10:53.578867 containerd[1620]: time="2025-09-13T00:10:53.578808353Z" level=error msg="Failed to destroy network for sandbox \"2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.583916 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072-shm.mount: Deactivated successfully. 
Sep 13 00:10:53.588245 containerd[1620]: time="2025-09-13T00:10:53.588205575Z" level=error msg="encountered an error cleaning up failed sandbox \"2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.592390 containerd[1620]: time="2025-09-13T00:10:53.592358704Z" level=error msg="Failed to destroy network for sandbox \"30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.601487 containerd[1620]: time="2025-09-13T00:10:53.601440696Z" level=error msg="encountered an error cleaning up failed sandbox \"30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.601591 containerd[1620]: time="2025-09-13T00:10:53.601512891Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2m6jh,Uid:ea30c4ae-16d9-4c82-be5e-0de5a9b6d8dc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.605250 containerd[1620]: time="2025-09-13T00:10:53.605210375Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-q6rw4,Uid:33f7161a-ca41-4c6b-95d5-d5f552f3a553,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.605845 containerd[1620]: time="2025-09-13T00:10:53.605810842Z" level=error msg="Failed to destroy network for sandbox \"c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.606397 containerd[1620]: time="2025-09-13T00:10:53.606164394Z" level=error msg="encountered an error cleaning up failed sandbox \"c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.606397 containerd[1620]: time="2025-09-13T00:10:53.606191927Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9jlsq,Uid:398f713a-e38d-4416-8b6a-bb19b2e75262,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.606397 containerd[1620]: time="2025-09-13T00:10:53.606259103Z" level=error msg="Failed to destroy network for sandbox \"87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.606998 kubelet[2755]: E0913 00:10:53.606597 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.606998 kubelet[2755]: E0913 00:10:53.606612 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.606998 kubelet[2755]: E0913 00:10:53.606663 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-q6rw4" Sep 13 00:10:53.606998 kubelet[2755]: E0913 00:10:53.606675 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2m6jh" Sep 13 00:10:53.608755 kubelet[2755]: E0913 00:10:53.606683 2755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-q6rw4" Sep 13 00:10:53.608755 kubelet[2755]: E0913 00:10:53.606694 2755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2m6jh" Sep 13 00:10:53.608755 kubelet[2755]: E0913 00:10:53.606729 
2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-q6rw4_calico-system(33f7161a-ca41-4c6b-95d5-d5f552f3a553)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-q6rw4_calico-system(33f7161a-ca41-4c6b-95d5-d5f552f3a553)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-q6rw4" podUID="33f7161a-ca41-4c6b-95d5-d5f552f3a553" Sep 13 00:10:53.609385 kubelet[2755]: E0913 00:10:53.606734 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-2m6jh_kube-system(ea30c4ae-16d9-4c82-be5e-0de5a9b6d8dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-2m6jh_kube-system(ea30c4ae-16d9-4c82-be5e-0de5a9b6d8dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-2m6jh" podUID="ea30c4ae-16d9-4c82-be5e-0de5a9b6d8dc" Sep 13 00:10:53.610694 kubelet[2755]: E0913 00:10:53.610654 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.610789 kubelet[2755]: E0913 00:10:53.610706 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-9jlsq" Sep 13 00:10:53.610789 kubelet[2755]: E0913 00:10:53.610727 2755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-9jlsq" Sep 13 00:10:53.611234 kubelet[2755]: E0913 00:10:53.610832 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-9jlsq_kube-system(398f713a-e38d-4416-8b6a-bb19b2e75262)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-9jlsq_kube-system(398f713a-e38d-4416-8b6a-bb19b2e75262)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-9jlsq" podUID="398f713a-e38d-4416-8b6a-bb19b2e75262" Sep 13 00:10:53.611600 containerd[1620]: time="2025-09-13T00:10:53.611468361Z" level=error msg="encountered an error cleaning up failed sandbox \"87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.611713 containerd[1620]: time="2025-09-13T00:10:53.611690067Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bcbcd6df9-cwbjt,Uid:1b932375-cdcd-4a82-b528-b3a99b684eeb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.612209 kubelet[2755]: E0913 00:10:53.612036 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.612209 kubelet[2755]: E0913 00:10:53.612096 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bcbcd6df9-cwbjt" Sep 13 00:10:53.612209 kubelet[2755]: E0913 00:10:53.612113 2755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bcbcd6df9-cwbjt" Sep 13 00:10:53.612293 kubelet[2755]: E0913 00:10:53.612148 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bcbcd6df9-cwbjt_calico-apiserver(1b932375-cdcd-4a82-b528-b3a99b684eeb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bcbcd6df9-cwbjt_calico-apiserver(1b932375-cdcd-4a82-b528-b3a99b684eeb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bcbcd6df9-cwbjt" podUID="1b932375-cdcd-4a82-b528-b3a99b684eeb" Sep 13 00:10:53.613003 containerd[1620]: 
time="2025-09-13T00:10:53.612982621Z" level=error msg="Failed to destroy network for sandbox \"1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.613498 containerd[1620]: time="2025-09-13T00:10:53.613474453Z" level=error msg="encountered an error cleaning up failed sandbox \"1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.617103 containerd[1620]: time="2025-09-13T00:10:53.617076317Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748c7ccd65-nl8pm,Uid:cc241816-fb81-4b70-a8db-a4aa35a35261,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.617694 kubelet[2755]: E0913 00:10:53.617659 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.617753 kubelet[2755]: E0913 00:10:53.617700 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-748c7ccd65-nl8pm" Sep 13 00:10:53.617753 kubelet[2755]: E0913 00:10:53.617725 2755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-748c7ccd65-nl8pm" Sep 13 00:10:53.618158 kubelet[2755]: E0913 00:10:53.617760 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-748c7ccd65-nl8pm_calico-apiserver(cc241816-fb81-4b70-a8db-a4aa35a35261)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-748c7ccd65-nl8pm_calico-apiserver(cc241816-fb81-4b70-a8db-a4aa35a35261)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-748c7ccd65-nl8pm" podUID="cc241816-fb81-4b70-a8db-a4aa35a35261" Sep 13 00:10:53.653432 containerd[1620]: time="2025-09-13T00:10:53.653378787Z" level=error msg="Failed to destroy network for sandbox \"c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.654390 containerd[1620]: time="2025-09-13T00:10:53.654358455Z" level=error msg="encountered an error cleaning up failed sandbox \"c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.654467 containerd[1620]: time="2025-09-13T00:10:53.654426573Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f8489888f-4s4v8,Uid:6fb491aa-6159-468a-adaa-a7eaa205158b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.655489 kubelet[2755]: E0913 00:10:53.655452 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.655577 kubelet[2755]: E0913 00:10:53.655531 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f8489888f-4s4v8" Sep 13 00:10:53.655577 kubelet[2755]: E0913 00:10:53.655551 2755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f8489888f-4s4v8" Sep 13 00:10:53.656553 kubelet[2755]: E0913 00:10:53.656507 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7f8489888f-4s4v8_calico-system(6fb491aa-6159-468a-adaa-a7eaa205158b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7f8489888f-4s4v8_calico-system(6fb491aa-6159-468a-adaa-a7eaa205158b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f8489888f-4s4v8" podUID="6fb491aa-6159-468a-adaa-a7eaa205158b" Sep 13 00:10:53.659424 containerd[1620]: time="2025-09-13T00:10:53.659394509Z" level=error msg="Failed to destroy network for sandbox \"fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.659985 containerd[1620]: time="2025-09-13T00:10:53.659960039Z" level=error msg="encountered an error cleaning up failed sandbox \"fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.660112 containerd[1620]: time="2025-09-13T00:10:53.660069023Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bcbcd6df9-pltxz,Uid:e1724762-f3a5-4a7f-9c75-353a81a041e5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.660756 kubelet[2755]: E0913 00:10:53.660438 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.660756 kubelet[2755]: E0913 00:10:53.660490 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bcbcd6df9-pltxz" Sep 13 00:10:53.660756 kubelet[2755]: E0913 00:10:53.660507 2755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bcbcd6df9-pltxz" Sep 13 00:10:53.661426 kubelet[2755]: E0913 00:10:53.660567 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bcbcd6df9-pltxz_calico-apiserver(e1724762-f3a5-4a7f-9c75-353a81a041e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bcbcd6df9-pltxz_calico-apiserver(e1724762-f3a5-4a7f-9c75-353a81a041e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bcbcd6df9-pltxz" podUID="e1724762-f3a5-4a7f-9c75-353a81a041e5" Sep 13 00:10:53.663247 containerd[1620]: time="2025-09-13T00:10:53.663211657Z" level=error msg="Failed to destroy network for sandbox \"d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.663515 containerd[1620]: time="2025-09-13T00:10:53.663482877Z" level=error msg="encountered an error cleaning up failed sandbox \"d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.663590 containerd[1620]: time="2025-09-13T00:10:53.663524304Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dc566c86b-xkhhq,Uid:31e963cb-458c-437d-b290-9bfcbbbfa753,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.663678 kubelet[2755]: E0913 00:10:53.663658 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:53.663707 kubelet[2755]: E0913 00:10:53.663694 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dc566c86b-xkhhq" Sep 13 00:10:53.663734 kubelet[2755]: E0913 00:10:53.663710 2755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dc566c86b-xkhhq" Sep 13 00:10:53.663783 kubelet[2755]: E0913 00:10:53.663740 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6dc566c86b-xkhhq_calico-system(31e963cb-458c-437d-b290-9bfcbbbfa753)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-6dc566c86b-xkhhq_calico-system(31e963cb-458c-437d-b290-9bfcbbbfa753)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6dc566c86b-xkhhq" podUID="31e963cb-458c-437d-b290-9bfcbbbfa753" Sep 13 00:10:54.129683 containerd[1620]: time="2025-09-13T00:10:54.129075777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2rrtz,Uid:f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2,Namespace:calico-system,Attempt:0,}" Sep 13 00:10:54.187911 containerd[1620]: time="2025-09-13T00:10:54.187848884Z" level=error msg="Failed to destroy network for sandbox \"515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:54.188219 containerd[1620]: time="2025-09-13T00:10:54.188180025Z" level=error msg="encountered an error cleaning up failed sandbox \"515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:54.188295 containerd[1620]: time="2025-09-13T00:10:54.188232053Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2rrtz,Uid:f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:54.188735 kubelet[2755]: E0913 00:10:54.188499 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:54.188735 kubelet[2755]: E0913 00:10:54.188564 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2rrtz" Sep 13 00:10:54.188735 kubelet[2755]: E0913 00:10:54.188603 2755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-2rrtz" Sep 13 00:10:54.188849 kubelet[2755]: E0913 00:10:54.188659 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2rrtz_calico-system(f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2rrtz_calico-system(f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2rrtz" podUID="f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2" Sep 13 00:10:54.343361 kubelet[2755]: I0913 00:10:54.343280 2755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" Sep 13 00:10:54.345066 kubelet[2755]: I0913 00:10:54.345027 2755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" Sep 13 00:10:54.357856 kubelet[2755]: I0913 00:10:54.357534 2755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" Sep 13 00:10:54.359565 kubelet[2755]: I0913 00:10:54.359021 2755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" Sep 13 00:10:54.364365 kubelet[2755]: I0913 00:10:54.362377 2755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" Sep 13 00:10:54.367181 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db-shm.mount: Deactivated successfully. Sep 13 00:10:54.368219 kubelet[2755]: I0913 00:10:54.368187 2755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" Sep 13 00:10:54.368215 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059-shm.mount: Deactivated successfully. Sep 13 00:10:54.368560 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad-shm.mount: Deactivated successfully. Sep 13 00:10:54.368749 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0-shm.mount: Deactivated successfully. Sep 13 00:10:54.368923 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c-shm.mount: Deactivated successfully. Sep 13 00:10:54.369084 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2-shm.mount: Deactivated successfully. Sep 13 00:10:54.369235 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3-shm.mount: Deactivated successfully. 
Sep 13 00:10:54.375796 kubelet[2755]: I0913 00:10:54.375168 2755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" Sep 13 00:10:54.377478 kubelet[2755]: I0913 00:10:54.377451 2755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" Sep 13 00:10:54.379002 kubelet[2755]: I0913 00:10:54.378708 2755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" Sep 13 00:10:54.399709 containerd[1620]: time="2025-09-13T00:10:54.399616391Z" level=info msg="StopPodSandbox for \"fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad\"" Sep 13 00:10:54.401343 containerd[1620]: time="2025-09-13T00:10:54.400983465Z" level=info msg="StopPodSandbox for \"2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072\"" Sep 13 00:10:54.401343 containerd[1620]: time="2025-09-13T00:10:54.401241660Z" level=info msg="Ensure that sandbox 2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072 in task-service has been cleanup successfully" Sep 13 00:10:54.401542 containerd[1620]: time="2025-09-13T00:10:54.401462764Z" level=info msg="Ensure that sandbox fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad in task-service has been cleanup successfully" Sep 13 00:10:54.402220 containerd[1620]: time="2025-09-13T00:10:54.402102995Z" level=info msg="StopPodSandbox for \"87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2\"" Sep 13 00:10:54.402387 containerd[1620]: time="2025-09-13T00:10:54.402369675Z" level=info msg="Ensure that sandbox 87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2 in task-service has been cleanup successfully" Sep 13 00:10:54.402876 containerd[1620]: time="2025-09-13T00:10:54.402597382Z" level=info msg="StopPodSandbox for \"1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c\"" Sep 13 00:10:54.402876 containerd[1620]: time="2025-09-13T00:10:54.402719330Z" level=info msg="Ensure that sandbox 1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c in task-service has been cleanup successfully" Sep 13 00:10:54.403199 containerd[1620]: time="2025-09-13T00:10:54.403181617Z" level=info msg="StopPodSandbox for \"30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0\"" Sep 13 00:10:54.403406 containerd[1620]: time="2025-09-13T00:10:54.403388545Z" level=info msg="Ensure that sandbox 30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0 in task-service has been cleanup successfully" Sep 13 00:10:54.405161 containerd[1620]: time="2025-09-13T00:10:54.404976443Z" level=info msg="StopPodSandbox for \"515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7\"" Sep 13 00:10:54.405384 containerd[1620]: time="2025-09-13T00:10:54.405108490Z" level=info msg="StopPodSandbox for \"c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3\"" Sep 13 00:10:54.405671 containerd[1620]: time="2025-09-13T00:10:54.405652662Z" level=info msg="StopPodSandbox for \"c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059\"" Sep 13 00:10:54.406619 containerd[1620]: time="2025-09-13T00:10:54.405765784Z" level=info msg="Ensure that sandbox c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3 in task-service has been cleanup successfully" Sep 13 00:10:54.406883 containerd[1620]: 
time="2025-09-13T00:10:54.405270905Z" level=info msg="StopPodSandbox for \"d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db\"" Sep 13 00:10:54.407083 containerd[1620]: time="2025-09-13T00:10:54.407036346Z" level=info msg="Ensure that sandbox d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db in task-service has been cleanup successfully" Sep 13 00:10:54.407707 containerd[1620]: time="2025-09-13T00:10:54.407434813Z" level=info msg="Ensure that sandbox c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059 in task-service has been cleanup successfully" Sep 13 00:10:54.409174 containerd[1620]: time="2025-09-13T00:10:54.405113440Z" level=info msg="Ensure that sandbox 515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7 in task-service has been cleanup successfully" Sep 13 00:10:54.490452 containerd[1620]: time="2025-09-13T00:10:54.490280985Z" level=error msg="StopPodSandbox for \"1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c\" failed" error="failed to destroy network for sandbox \"1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:54.490930 kubelet[2755]: E0913 00:10:54.490827 2755 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" Sep 13 00:10:54.501142 kubelet[2755]: E0913 00:10:54.490878 2755 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c"} Sep 13 00:10:54.501225 kubelet[2755]: E0913 00:10:54.501169 2755 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cc241816-fb81-4b70-a8db-a4aa35a35261\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:10:54.501225 kubelet[2755]: E0913 00:10:54.501203 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cc241816-fb81-4b70-a8db-a4aa35a35261\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-748c7ccd65-nl8pm" podUID="cc241816-fb81-4b70-a8db-a4aa35a35261" Sep 13 00:10:54.507624 containerd[1620]: time="2025-09-13T00:10:54.505821369Z" level=error msg="StopPodSandbox for \"87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2\" failed" error="failed to destroy network for sandbox 
\"87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:54.507804 kubelet[2755]: E0913 00:10:54.507692 2755 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" Sep 13 00:10:54.507804 kubelet[2755]: E0913 00:10:54.507765 2755 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2"} Sep 13 00:10:54.507804 kubelet[2755]: E0913 00:10:54.507795 2755 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1b932375-cdcd-4a82-b528-b3a99b684eeb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:10:54.508029 kubelet[2755]: E0913 00:10:54.507817 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1b932375-cdcd-4a82-b528-b3a99b684eeb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bcbcd6df9-cwbjt" podUID="1b932375-cdcd-4a82-b528-b3a99b684eeb" Sep 13 00:10:54.509793 containerd[1620]: time="2025-09-13T00:10:54.509770053Z" level=error msg="StopPodSandbox for \"30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0\" failed" error="failed to destroy network for sandbox \"30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:54.510462 kubelet[2755]: E0913 00:10:54.510374 2755 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" Sep 13 00:10:54.510462 kubelet[2755]: E0913 00:10:54.510424 2755 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0"} Sep 13 00:10:54.510462 kubelet[2755]: E0913 00:10:54.510450 2755 
kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ea30c4ae-16d9-4c82-be5e-0de5a9b6d8dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:10:54.510583 kubelet[2755]: E0913 00:10:54.510468 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ea30c4ae-16d9-4c82-be5e-0de5a9b6d8dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-2m6jh" podUID="ea30c4ae-16d9-4c82-be5e-0de5a9b6d8dc" Sep 13 00:10:54.515632 containerd[1620]: time="2025-09-13T00:10:54.515421893Z" level=error msg="StopPodSandbox for \"c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3\" failed" error="failed to destroy network for sandbox \"c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:54.515632 containerd[1620]: time="2025-09-13T00:10:54.515504287Z" level=error msg="StopPodSandbox for \"c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059\" failed" error="failed to destroy network for sandbox \"c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:54.515632 containerd[1620]: time="2025-09-13T00:10:54.515547508Z" level=error msg="StopPodSandbox for \"fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad\" failed" error="failed to destroy network for sandbox \"fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:54.515632 containerd[1620]: time="2025-09-13T00:10:54.515584638Z" level=error msg="StopPodSandbox for \"d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db\" failed" error="failed to destroy network for sandbox \"d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:54.515845 containerd[1620]: time="2025-09-13T00:10:54.515806874Z" level=error msg="StopPodSandbox for \"2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072\" failed" error="failed to destroy network for sandbox \"2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Sep 13 00:10:54.516000 kubelet[2755]: E0913 00:10:54.515968 2755 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" Sep 13 00:10:54.516036 kubelet[2755]: E0913 00:10:54.516003 2755 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db"} Sep 13 00:10:54.516036 kubelet[2755]: E0913 00:10:54.516028 2755 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"31e963cb-458c-437d-b290-9bfcbbbfa753\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:10:54.516101 kubelet[2755]: E0913 00:10:54.516045 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"31e963cb-458c-437d-b290-9bfcbbbfa753\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6dc566c86b-xkhhq" podUID="31e963cb-458c-437d-b290-9bfcbbbfa753" Sep 13 00:10:54.516101 kubelet[2755]: E0913 00:10:54.516067 2755 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" Sep 13 00:10:54.516101 kubelet[2755]: E0913 00:10:54.516079 2755 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3"} Sep 13 00:10:54.516101 kubelet[2755]: E0913 00:10:54.516093 2755 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"398f713a-e38d-4416-8b6a-bb19b2e75262\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:10:54.516287 kubelet[2755]: E0913 00:10:54.516106 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"398f713a-e38d-4416-8b6a-bb19b2e75262\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-9jlsq" podUID="398f713a-e38d-4416-8b6a-bb19b2e75262" Sep 13 00:10:54.516287 kubelet[2755]: E0913 00:10:54.516127 2755 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" Sep 13 00:10:54.516287 kubelet[2755]: E0913 00:10:54.516137 2755 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059"} Sep 13 00:10:54.516287 kubelet[2755]: E0913 00:10:54.516150 2755 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6fb491aa-6159-468a-adaa-a7eaa205158b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:10:54.516508 kubelet[2755]: E0913 00:10:54.516184 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6fb491aa-6159-468a-adaa-a7eaa205158b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f8489888f-4s4v8" podUID="6fb491aa-6159-468a-adaa-a7eaa205158b" Sep 13 00:10:54.516508 kubelet[2755]: E0913 00:10:54.516213 2755 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" Sep 13 00:10:54.516508 kubelet[2755]: E0913 00:10:54.516225 2755 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad"} Sep 13 00:10:54.516508 kubelet[2755]: E0913 00:10:54.516238 2755 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e1724762-f3a5-4a7f-9c75-353a81a041e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:10:54.516616 kubelet[2755]: E0913 00:10:54.516251 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e1724762-f3a5-4a7f-9c75-353a81a041e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bcbcd6df9-pltxz" podUID="e1724762-f3a5-4a7f-9c75-353a81a041e5" Sep 13 00:10:54.517058 kubelet[2755]: E0913 00:10:54.517036 2755 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" Sep 13 00:10:54.517094 kubelet[2755]: E0913 00:10:54.517060 2755 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072"} Sep 13 00:10:54.517114 kubelet[2755]: E0913 00:10:54.517097 2755 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"33f7161a-ca41-4c6b-95d5-d5f552f3a553\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:10:54.517197 kubelet[2755]: E0913 00:10:54.517112 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"33f7161a-ca41-4c6b-95d5-d5f552f3a553\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-q6rw4" podUID="33f7161a-ca41-4c6b-95d5-d5f552f3a553" Sep 13 00:10:54.517335 containerd[1620]: time="2025-09-13T00:10:54.517279706Z" level=error msg="StopPodSandbox for \"515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7\" failed" error="failed to destroy network for sandbox \"515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:10:54.517499 kubelet[2755]: E0913 00:10:54.517473 2755 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" Sep 13 00:10:54.517533 kubelet[2755]: E0913 00:10:54.517500 2755 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7"} Sep 13 00:10:54.517533 kubelet[2755]: E0913 00:10:54.517518 2755 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:10:54.517613 kubelet[2755]: E0913 00:10:54.517534 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2rrtz" podUID="f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2" Sep 13 00:10:59.392611 kubelet[2755]: I0913 00:10:59.392198 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:11:01.383724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2740544669.mount: Deactivated successfully. 
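Every StopPodSandbox failure in the burst above has the same root cause: the Calico CNI plugin's delete hook stats /var/lib/calico/nodename, a file the calico/node container writes to a host mount once it is up. Until calico-node starts (its image pull completes just below), every sandbox teardown aborts and kubelet requeues the pod with a KillPodSandboxError, which is why the identical message repeats for the apiserver, coredns, whisker, goldmane, kube-controllers, and csi-node-driver pods. A minimal sketch of that gate; nothing here is Calico's actual source, only the behavior implied by the error text:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"strings"
)

// nodenameFile is written by the calico/node container; the CNI plugin
// reads it through a /var/lib/calico/ host mount.
const nodenameFile = "/var/lib/calico/nodename"

// determineNodename reproduces the failure mode in the log: while
// calico/node has not yet started, the stat fails and the delete aborts.
func determineNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if errors.Is(err, fs.ErrNotExist) {
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	if _, err := determineNodename(); err != nil {
		// kubelet wraps this as "plugin type=\"calico\" failed (delete)"
		// and retries the pod on its next sync.
		fmt.Println(err)
		os.Exit(1)
	}
}
```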
Sep 13 00:11:01.463694 containerd[1620]: time="2025-09-13T00:11:01.447990418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 13 00:11:01.465714 containerd[1620]: time="2025-09-13T00:11:01.465643479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:01.475365 containerd[1620]: time="2025-09-13T00:11:01.474371055Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 8.134771781s" Sep 13 00:11:01.475365 containerd[1620]: time="2025-09-13T00:11:01.474422051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:11:01.490616 containerd[1620]: time="2025-09-13T00:11:01.490519824Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:01.499161 containerd[1620]: time="2025-09-13T00:11:01.498971011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:01.544348 containerd[1620]: time="2025-09-13T00:11:01.544273718Z" level=info msg="CreateContainer within sandbox \"e6f75dfb9d103e6421d477dbe944ed844620b9328e5203de3010f2d3c389ef8c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:11:01.606363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1781788087.mount: Deactivated successfully. Sep 13 00:11:01.615324 containerd[1620]: time="2025-09-13T00:11:01.615267356Z" level=info msg="CreateContainer within sandbox \"e6f75dfb9d103e6421d477dbe944ed844620b9328e5203de3010f2d3c389ef8c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7bf40bee7b6bfbc7575e9d039a7914541a59826838e2e70e0f3dbad4e2bf4bb6\"" Sep 13 00:11:01.618483 containerd[1620]: time="2025-09-13T00:11:01.618166732Z" level=info msg="StartContainer for \"7bf40bee7b6bfbc7575e9d039a7914541a59826838e2e70e0f3dbad4e2bf4bb6\"" Sep 13 00:11:01.817374 containerd[1620]: time="2025-09-13T00:11:01.817295044Z" level=info msg="StartContainer for \"7bf40bee7b6bfbc7575e9d039a7914541a59826838e2e70e0f3dbad4e2bf4bb6\" returns successfully" Sep 13 00:11:01.896822 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:11:01.899127 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 13 00:11:02.057769 containerd[1620]: time="2025-09-13T00:11:02.056130870Z" level=info msg="StopPodSandbox for \"c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059\"" Sep 13 00:11:02.348087 containerd[1620]: 2025-09-13 00:11:02.153 [INFO][4020] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" Sep 13 00:11:02.348087 containerd[1620]: 2025-09-13 00:11:02.154 [INFO][4020] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" iface="eth0" netns="/var/run/netns/cni-92efd06a-09f1-8592-9660-961441c12076" Sep 13 00:11:02.348087 containerd[1620]: 2025-09-13 00:11:02.155 [INFO][4020] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" iface="eth0" netns="/var/run/netns/cni-92efd06a-09f1-8592-9660-961441c12076" Sep 13 00:11:02.348087 containerd[1620]: 2025-09-13 00:11:02.156 [INFO][4020] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" iface="eth0" netns="/var/run/netns/cni-92efd06a-09f1-8592-9660-961441c12076" Sep 13 00:11:02.348087 containerd[1620]: 2025-09-13 00:11:02.156 [INFO][4020] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" Sep 13 00:11:02.348087 containerd[1620]: 2025-09-13 00:11:02.156 [INFO][4020] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" Sep 13 00:11:02.348087 containerd[1620]: 2025-09-13 00:11:02.327 [INFO][4027] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" HandleID="k8s-pod-network.c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" Workload="ci--4081--3--5--n--662926fb9e-k8s-whisker--7f8489888f--4s4v8-eth0" Sep 13 00:11:02.348087 containerd[1620]: 2025-09-13 00:11:02.329 [INFO][4027] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:02.348087 containerd[1620]: 2025-09-13 00:11:02.329 [INFO][4027] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:02.348087 containerd[1620]: 2025-09-13 00:11:02.341 [WARNING][4027] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" HandleID="k8s-pod-network.c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" Workload="ci--4081--3--5--n--662926fb9e-k8s-whisker--7f8489888f--4s4v8-eth0" Sep 13 00:11:02.348087 containerd[1620]: 2025-09-13 00:11:02.341 [INFO][4027] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" HandleID="k8s-pod-network.c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" Workload="ci--4081--3--5--n--662926fb9e-k8s-whisker--7f8489888f--4s4v8-eth0" Sep 13 00:11:02.348087 containerd[1620]: 2025-09-13 00:11:02.343 [INFO][4027] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:02.348087 containerd[1620]: 2025-09-13 00:11:02.345 [INFO][4020] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" Sep 13 00:11:02.348912 containerd[1620]: time="2025-09-13T00:11:02.348236034Z" level=info msg="TearDown network for sandbox \"c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059\" successfully" Sep 13 00:11:02.348912 containerd[1620]: time="2025-09-13T00:11:02.348262152Z" level=info msg="StopPodSandbox for \"c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059\" returns successfully" Sep 13 00:11:02.384065 systemd[1]: run-netns-cni\x2d92efd06a\x2d09f1\x2d8592\x2d9660\x2d961441c12076.mount: Deactivated successfully. 
Sep 13 00:11:02.442182 kubelet[2755]: I0913 00:11:02.442034 2755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6fb491aa-6159-468a-adaa-a7eaa205158b-whisker-backend-key-pair\") pod \"6fb491aa-6159-468a-adaa-a7eaa205158b\" (UID: \"6fb491aa-6159-468a-adaa-a7eaa205158b\") " Sep 13 00:11:02.466654 kubelet[2755]: I0913 00:11:02.466032 2755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fb491aa-6159-468a-adaa-a7eaa205158b-whisker-ca-bundle\") pod \"6fb491aa-6159-468a-adaa-a7eaa205158b\" (UID: \"6fb491aa-6159-468a-adaa-a7eaa205158b\") " Sep 13 00:11:02.466938 kubelet[2755]: I0913 00:11:02.466896 2755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vb7b\" (UniqueName: \"kubernetes.io/projected/6fb491aa-6159-468a-adaa-a7eaa205158b-kube-api-access-7vb7b\") pod \"6fb491aa-6159-468a-adaa-a7eaa205158b\" (UID: \"6fb491aa-6159-468a-adaa-a7eaa205158b\") " Sep 13 00:11:02.484238 systemd[1]: var-lib-kubelet-pods-6fb491aa\x2d6159\x2d468a\x2dadaa\x2da7eaa205158b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 13 00:11:02.488450 kubelet[2755]: I0913 00:11:02.486646 2755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fb491aa-6159-468a-adaa-a7eaa205158b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6fb491aa-6159-468a-adaa-a7eaa205158b" (UID: "6fb491aa-6159-468a-adaa-a7eaa205158b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:11:02.488450 kubelet[2755]: I0913 00:11:02.486995 2755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb491aa-6159-468a-adaa-a7eaa205158b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6fb491aa-6159-468a-adaa-a7eaa205158b" (UID: "6fb491aa-6159-468a-adaa-a7eaa205158b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:11:02.492535 kubelet[2755]: I0913 00:11:02.492500 2755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fb491aa-6159-468a-adaa-a7eaa205158b-kube-api-access-7vb7b" (OuterVolumeSpecName: "kube-api-access-7vb7b") pod "6fb491aa-6159-468a-adaa-a7eaa205158b" (UID: "6fb491aa-6159-468a-adaa-a7eaa205158b"). InnerVolumeSpecName "kube-api-access-7vb7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:11:02.492591 systemd[1]: var-lib-kubelet-pods-6fb491aa\x2d6159\x2d468a\x2dadaa\x2da7eaa205158b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7vb7b.mount: Deactivated successfully. 
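With the whisker sandbox gone, kubelet's volume manager reconciles: the three volumes still mounted for pod UID 6fb491aa-6159-468a-adaa-a7eaa205158b (a secret, a ConfigMap, and a projected service-account token) are no longer in the desired state, so each gets an UnmountVolume.TearDown and systemd drops the per-volume mount units. At its core the reconciler is a set difference between desired and actual mounts; a toy version under that assumption, with an invented helper (volumesToUnmount):

```go
package main

import "fmt"

// volumesToUnmount mirrors the reconciler_common diff above: anything still
// mounted that no desired pod references gets unmounted.
func volumesToUnmount(desired, mounted map[string]bool) []string {
	var out []string
	for vol := range mounted {
		if !desired[vol] {
			out = append(out, vol)
		}
	}
	return out
}

func main() {
	mounted := map[string]bool{
		"whisker-backend-key-pair": true, // kubernetes.io/secret
		"whisker-ca-bundle":        true, // kubernetes.io/configmap
		"kube-api-access-7vb7b":    true, // kubernetes.io/projected
	}
	desired := map[string]bool{} // the pod was deleted, so nothing is desired
	fmt.Println(volumesToUnmount(desired, mounted))
}
```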
Sep 13 00:11:02.568196 kubelet[2755]: I0913 00:11:02.568137 2755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vb7b\" (UniqueName: \"kubernetes.io/projected/6fb491aa-6159-468a-adaa-a7eaa205158b-kube-api-access-7vb7b\") on node \"ci-4081-3-5-n-662926fb9e\" DevicePath \"\"" Sep 13 00:11:02.568196 kubelet[2755]: I0913 00:11:02.568177 2755 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6fb491aa-6159-468a-adaa-a7eaa205158b-whisker-backend-key-pair\") on node \"ci-4081-3-5-n-662926fb9e\" DevicePath \"\"" Sep 13 00:11:02.568196 kubelet[2755]: I0913 00:11:02.568189 2755 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fb491aa-6159-468a-adaa-a7eaa205158b-whisker-ca-bundle\") on node \"ci-4081-3-5-n-662926fb9e\" DevicePath \"\"" Sep 13 00:11:02.734691 kubelet[2755]: I0913 00:11:02.724476 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-z5zd6" podStartSLOduration=1.895405547 podStartE2EDuration="20.714327486s" podCreationTimestamp="2025-09-13 00:10:42 +0000 UTC" firstStartedPulling="2025-09-13 00:10:42.67222959 +0000 UTC m=+17.664338906" lastFinishedPulling="2025-09-13 00:11:01.491151489 +0000 UTC m=+36.483260845" observedRunningTime="2025-09-13 00:11:02.467817364 +0000 UTC m=+37.459926761" watchObservedRunningTime="2025-09-13 00:11:02.714327486 +0000 UTC m=+37.706436802" Sep 13 00:11:02.871692 kubelet[2755]: I0913 00:11:02.871625 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2dj7\" (UniqueName: \"kubernetes.io/projected/96e73c55-88cf-42c0-968d-34ae568c3919-kube-api-access-m2dj7\") pod \"whisker-5b7665d6d8-8bbnp\" (UID: \"96e73c55-88cf-42c0-968d-34ae568c3919\") " pod="calico-system/whisker-5b7665d6d8-8bbnp" Sep 13 00:11:02.871692 kubelet[2755]: I0913 00:11:02.871685 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96e73c55-88cf-42c0-968d-34ae568c3919-whisker-ca-bundle\") pod \"whisker-5b7665d6d8-8bbnp\" (UID: \"96e73c55-88cf-42c0-968d-34ae568c3919\") " pod="calico-system/whisker-5b7665d6d8-8bbnp" Sep 13 00:11:02.871863 kubelet[2755]: I0913 00:11:02.871712 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/96e73c55-88cf-42c0-968d-34ae568c3919-whisker-backend-key-pair\") pod \"whisker-5b7665d6d8-8bbnp\" (UID: \"96e73c55-88cf-42c0-968d-34ae568c3919\") " pod="calico-system/whisker-5b7665d6d8-8bbnp" Sep 13 00:11:02.895457 systemd-resolved[1510]: Under memory pressure, flushing caches. Sep 13 00:11:02.900153 systemd-journald[1174]: Under memory pressure, flushing caches. Sep 13 00:11:02.895542 systemd-resolved[1510]: Flushed all caches. 
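The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (00:11:02.714327486 minus 00:10:42 = 20.714327486s), and podStartSLOduration subtracts the image-pull window from that, 20.714327486s - (36.483260845s - 17.664338906s) = 1.895405547s, matching the logged value exactly. The same arithmetic, using the monotonic m=+ offsets from the line:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures copied from the tracker line above.
	e2e := 20714327486 * time.Nanosecond       // watchObservedRunningTime - podCreationTimestamp
	firstPull := 17664338906 * time.Nanosecond // firstStartedPulling, m=+ offset
	lastPull := 36483260845 * time.Nanosecond  // lastFinishedPulling, m=+ offset

	slo := e2e - (lastPull - firstPull) // image pull time does not count against the SLO
	fmt.Println(slo)                    // 1.895405547s
}
```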
Sep 13 00:11:03.113517 containerd[1620]: time="2025-09-13T00:11:03.113454608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b7665d6d8-8bbnp,Uid:96e73c55-88cf-42c0-968d-34ae568c3919,Namespace:calico-system,Attempt:0,}" Sep 13 00:11:03.144168 kubelet[2755]: I0913 00:11:03.144137 2755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fb491aa-6159-468a-adaa-a7eaa205158b" path="/var/lib/kubelet/pods/6fb491aa-6159-468a-adaa-a7eaa205158b/volumes" Sep 13 00:11:03.286505 systemd-networkd[1257]: califd2e18ebff8: Link UP Sep 13 00:11:03.291413 systemd-networkd[1257]: califd2e18ebff8: Gained carrier Sep 13 00:11:03.306808 containerd[1620]: 2025-09-13 00:11:03.167 [INFO][4052] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:11:03.306808 containerd[1620]: 2025-09-13 00:11:03.181 [INFO][4052] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--662926fb9e-k8s-whisker--5b7665d6d8--8bbnp-eth0 whisker-5b7665d6d8- calico-system 96e73c55-88cf-42c0-968d-34ae568c3919 930 0 2025-09-13 00:11:02 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5b7665d6d8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-5-n-662926fb9e whisker-5b7665d6d8-8bbnp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] califd2e18ebff8 [] [] }} ContainerID="eba543ee09d77df4d38f36855db877291aab7ebc9dbc5309301b78879f03298f" Namespace="calico-system" Pod="whisker-5b7665d6d8-8bbnp" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-whisker--5b7665d6d8--8bbnp-" Sep 13 00:11:03.306808 containerd[1620]: 2025-09-13 00:11:03.181 [INFO][4052] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eba543ee09d77df4d38f36855db877291aab7ebc9dbc5309301b78879f03298f" Namespace="calico-system" Pod="whisker-5b7665d6d8-8bbnp" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-whisker--5b7665d6d8--8bbnp-eth0" Sep 13 00:11:03.306808 containerd[1620]: 2025-09-13 00:11:03.207 [INFO][4061] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eba543ee09d77df4d38f36855db877291aab7ebc9dbc5309301b78879f03298f" HandleID="k8s-pod-network.eba543ee09d77df4d38f36855db877291aab7ebc9dbc5309301b78879f03298f" Workload="ci--4081--3--5--n--662926fb9e-k8s-whisker--5b7665d6d8--8bbnp-eth0" Sep 13 00:11:03.306808 containerd[1620]: 2025-09-13 00:11:03.208 [INFO][4061] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eba543ee09d77df4d38f36855db877291aab7ebc9dbc5309301b78879f03298f" HandleID="k8s-pod-network.eba543ee09d77df4d38f36855db877291aab7ebc9dbc5309301b78879f03298f" Workload="ci--4081--3--5--n--662926fb9e-k8s-whisker--5b7665d6d8--8bbnp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f000), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-662926fb9e", "pod":"whisker-5b7665d6d8-8bbnp", "timestamp":"2025-09-13 00:11:03.207710401 +0000 UTC"}, Hostname:"ci-4081-3-5-n-662926fb9e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:11:03.306808 containerd[1620]: 2025-09-13 00:11:03.208 [INFO][4061] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:11:03.306808 containerd[1620]: 2025-09-13 00:11:03.208 [INFO][4061] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:03.306808 containerd[1620]: 2025-09-13 00:11:03.208 [INFO][4061] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-662926fb9e' Sep 13 00:11:03.306808 containerd[1620]: 2025-09-13 00:11:03.216 [INFO][4061] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eba543ee09d77df4d38f36855db877291aab7ebc9dbc5309301b78879f03298f" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:03.306808 containerd[1620]: 2025-09-13 00:11:03.226 [INFO][4061] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:03.306808 containerd[1620]: 2025-09-13 00:11:03.231 [INFO][4061] ipam/ipam.go 511: Trying affinity for 192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:03.306808 containerd[1620]: 2025-09-13 00:11:03.233 [INFO][4061] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:03.306808 containerd[1620]: 2025-09-13 00:11:03.238 [INFO][4061] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:03.306808 containerd[1620]: 2025-09-13 00:11:03.238 [INFO][4061] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.192/26 handle="k8s-pod-network.eba543ee09d77df4d38f36855db877291aab7ebc9dbc5309301b78879f03298f" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:03.306808 containerd[1620]: 2025-09-13 00:11:03.241 [INFO][4061] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.eba543ee09d77df4d38f36855db877291aab7ebc9dbc5309301b78879f03298f Sep 13 00:11:03.306808 containerd[1620]: 2025-09-13 00:11:03.247 [INFO][4061] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.192/26 handle="k8s-pod-network.eba543ee09d77df4d38f36855db877291aab7ebc9dbc5309301b78879f03298f" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:03.306808 containerd[1620]: 2025-09-13 00:11:03.252 [INFO][4061] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.193/26] block=192.168.28.192/26 handle="k8s-pod-network.eba543ee09d77df4d38f36855db877291aab7ebc9dbc5309301b78879f03298f" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:03.306808 containerd[1620]: 2025-09-13 00:11:03.252 [INFO][4061] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.193/26] handle="k8s-pod-network.eba543ee09d77df4d38f36855db877291aab7ebc9dbc5309301b78879f03298f" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:03.306808 containerd[1620]: 2025-09-13 00:11:03.253 [INFO][4061] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
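The IPAM walk above is Calico's block-affinity scheme in action: this node holds an affinity for the /26 block 192.168.28.192/26, confirms and loads it, then claims the first free address, 192.168.28.193 (the next sandbox, further down, gets .194 from the same block). A sketch of sequential assignment inside an affine block; nextFree is an illustrative helper, not Calico's API:

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks the affine block and returns the first unassigned address,
// starting just past the network address itself.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false // block exhausted; a real IPAM would claim another block
}

func main() {
	block := netip.MustParsePrefix("192.168.28.192/26")
	used := map[netip.Addr]bool{}
	for i := 0; i < 2; i++ {
		a, _ := nextFree(block, used)
		used[a] = true
		fmt.Println(a) // 192.168.28.193, then 192.168.28.194
	}
}
```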
Sep 13 00:11:03.306808 containerd[1620]: 2025-09-13 00:11:03.253 [INFO][4061] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.193/26] IPv6=[] ContainerID="eba543ee09d77df4d38f36855db877291aab7ebc9dbc5309301b78879f03298f" HandleID="k8s-pod-network.eba543ee09d77df4d38f36855db877291aab7ebc9dbc5309301b78879f03298f" Workload="ci--4081--3--5--n--662926fb9e-k8s-whisker--5b7665d6d8--8bbnp-eth0" Sep 13 00:11:03.313118 containerd[1620]: 2025-09-13 00:11:03.259 [INFO][4052] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eba543ee09d77df4d38f36855db877291aab7ebc9dbc5309301b78879f03298f" Namespace="calico-system" Pod="whisker-5b7665d6d8-8bbnp" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-whisker--5b7665d6d8--8bbnp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-whisker--5b7665d6d8--8bbnp-eth0", GenerateName:"whisker-5b7665d6d8-", Namespace:"calico-system", SelfLink:"", UID:"96e73c55-88cf-42c0-968d-34ae568c3919", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b7665d6d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"", Pod:"whisker-5b7665d6d8-8bbnp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.28.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califd2e18ebff8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:03.313118 containerd[1620]: 2025-09-13 00:11:03.259 [INFO][4052] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.193/32] ContainerID="eba543ee09d77df4d38f36855db877291aab7ebc9dbc5309301b78879f03298f" Namespace="calico-system" Pod="whisker-5b7665d6d8-8bbnp" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-whisker--5b7665d6d8--8bbnp-eth0" Sep 13 00:11:03.313118 containerd[1620]: 2025-09-13 00:11:03.259 [INFO][4052] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd2e18ebff8 ContainerID="eba543ee09d77df4d38f36855db877291aab7ebc9dbc5309301b78879f03298f" Namespace="calico-system" Pod="whisker-5b7665d6d8-8bbnp" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-whisker--5b7665d6d8--8bbnp-eth0" Sep 13 00:11:03.313118 containerd[1620]: 2025-09-13 00:11:03.280 [INFO][4052] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eba543ee09d77df4d38f36855db877291aab7ebc9dbc5309301b78879f03298f" Namespace="calico-system" Pod="whisker-5b7665d6d8-8bbnp" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-whisker--5b7665d6d8--8bbnp-eth0" Sep 13 00:11:03.313118 containerd[1620]: 2025-09-13 00:11:03.283 [INFO][4052] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eba543ee09d77df4d38f36855db877291aab7ebc9dbc5309301b78879f03298f" 
Namespace="calico-system" Pod="whisker-5b7665d6d8-8bbnp" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-whisker--5b7665d6d8--8bbnp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-whisker--5b7665d6d8--8bbnp-eth0", GenerateName:"whisker-5b7665d6d8-", Namespace:"calico-system", SelfLink:"", UID:"96e73c55-88cf-42c0-968d-34ae568c3919", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b7665d6d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"eba543ee09d77df4d38f36855db877291aab7ebc9dbc5309301b78879f03298f", Pod:"whisker-5b7665d6d8-8bbnp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.28.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califd2e18ebff8", MAC:"ee:aa:93:b4:9e:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:03.313118 containerd[1620]: 2025-09-13 00:11:03.298 [INFO][4052] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eba543ee09d77df4d38f36855db877291aab7ebc9dbc5309301b78879f03298f" Namespace="calico-system" Pod="whisker-5b7665d6d8-8bbnp" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-whisker--5b7665d6d8--8bbnp-eth0" Sep 13 00:11:03.423302 kubelet[2755]: I0913 00:11:03.423080 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:11:03.440032 containerd[1620]: time="2025-09-13T00:11:03.436371058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:03.440032 containerd[1620]: time="2025-09-13T00:11:03.436435589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:03.440032 containerd[1620]: time="2025-09-13T00:11:03.436448553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:03.440032 containerd[1620]: time="2025-09-13T00:11:03.436521480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:03.558171 containerd[1620]: time="2025-09-13T00:11:03.558067733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b7665d6d8-8bbnp,Uid:96e73c55-88cf-42c0-968d-34ae568c3919,Namespace:calico-system,Attempt:0,} returns sandbox id \"eba543ee09d77df4d38f36855db877291aab7ebc9dbc5309301b78879f03298f\"" Sep 13 00:11:03.570578 containerd[1620]: time="2025-09-13T00:11:03.570539508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:11:03.706473 kernel: bpftool[4236]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:11:03.924926 systemd-networkd[1257]: vxlan.calico: Link UP Sep 13 00:11:03.924935 systemd-networkd[1257]: vxlan.calico: Gained carrier Sep 13 00:11:04.495556 systemd-networkd[1257]: califd2e18ebff8: Gained IPv6LL Sep 13 00:11:04.680524 kubelet[2755]: I0913 00:11:04.680470 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:11:05.129453 containerd[1620]: time="2025-09-13T00:11:05.129411328Z" level=info msg="StopPodSandbox for \"d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db\"" Sep 13 00:11:05.216297 containerd[1620]: 2025-09-13 00:11:05.176 [INFO][4362] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" Sep 13 00:11:05.216297 containerd[1620]: 2025-09-13 00:11:05.177 [INFO][4362] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" iface="eth0" netns="/var/run/netns/cni-0bdbfb30-c70e-182a-e2d7-ae9fdf8722d2" Sep 13 00:11:05.216297 containerd[1620]: 2025-09-13 00:11:05.177 [INFO][4362] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" iface="eth0" netns="/var/run/netns/cni-0bdbfb30-c70e-182a-e2d7-ae9fdf8722d2" Sep 13 00:11:05.216297 containerd[1620]: 2025-09-13 00:11:05.178 [INFO][4362] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" iface="eth0" netns="/var/run/netns/cni-0bdbfb30-c70e-182a-e2d7-ae9fdf8722d2" Sep 13 00:11:05.216297 containerd[1620]: 2025-09-13 00:11:05.178 [INFO][4362] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" Sep 13 00:11:05.216297 containerd[1620]: 2025-09-13 00:11:05.178 [INFO][4362] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" Sep 13 00:11:05.216297 containerd[1620]: 2025-09-13 00:11:05.198 [INFO][4369] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" HandleID="k8s-pod-network.d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--kube--controllers--6dc566c86b--xkhhq-eth0" Sep 13 00:11:05.216297 containerd[1620]: 2025-09-13 00:11:05.198 [INFO][4369] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:05.216297 containerd[1620]: 2025-09-13 00:11:05.198 [INFO][4369] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:11:05.216297 containerd[1620]: 2025-09-13 00:11:05.205 [WARNING][4369] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" HandleID="k8s-pod-network.d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--kube--controllers--6dc566c86b--xkhhq-eth0" Sep 13 00:11:05.216297 containerd[1620]: 2025-09-13 00:11:05.205 [INFO][4369] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" HandleID="k8s-pod-network.d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--kube--controllers--6dc566c86b--xkhhq-eth0" Sep 13 00:11:05.216297 containerd[1620]: 2025-09-13 00:11:05.207 [INFO][4369] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:05.216297 containerd[1620]: 2025-09-13 00:11:05.213 [INFO][4362] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" Sep 13 00:11:05.217625 containerd[1620]: time="2025-09-13T00:11:05.216475443Z" level=info msg="TearDown network for sandbox \"d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db\" successfully" Sep 13 00:11:05.217625 containerd[1620]: time="2025-09-13T00:11:05.216500440Z" level=info msg="StopPodSandbox for \"d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db\" returns successfully" Sep 13 00:11:05.217625 containerd[1620]: time="2025-09-13T00:11:05.216997763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dc566c86b-xkhhq,Uid:31e963cb-458c-437d-b290-9bfcbbbfa753,Namespace:calico-system,Attempt:1,}" Sep 13 00:11:05.220563 systemd[1]: run-netns-cni\x2d0bdbfb30\x2dc70e\x2d182a\x2de2d7\x2dae9fdf8722d2.mount: Deactivated successfully. 
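By this point the node's Calico dataplane is assembled: the wireguard module loaded at 00:11:01, vxlan.calico came up at 00:11:03, the whisker pod got host-side veth califd2e18ebff8, and sandbox teardowns now succeed, so kubelet immediately re-creates the calico-kube-controllers sandbox as Attempt:1 (its veth, cali51906b91a83, appears just below). A quick way to enumerate the Calico-managed interfaces such a node accumulates, offered only as a sketch:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		// Host-side pod veths are named cali*, the overlay device vxlan.calico.
		if strings.HasPrefix(ifc.Name, "cali") || strings.HasPrefix(ifc.Name, "vxlan.") {
			fmt.Println(ifc.Name, ifc.HardwareAddr)
		}
	}
}
```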
Sep 13 00:11:05.319403 systemd-networkd[1257]: cali51906b91a83: Link UP Sep 13 00:11:05.319614 systemd-networkd[1257]: cali51906b91a83: Gained carrier Sep 13 00:11:05.334726 containerd[1620]: 2025-09-13 00:11:05.257 [INFO][4376] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--662926fb9e-k8s-calico--kube--controllers--6dc566c86b--xkhhq-eth0 calico-kube-controllers-6dc566c86b- calico-system 31e963cb-458c-437d-b290-9bfcbbbfa753 944 0 2025-09-13 00:10:42 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6dc566c86b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-5-n-662926fb9e calico-kube-controllers-6dc566c86b-xkhhq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali51906b91a83 [] [] }} ContainerID="3b8d05fde61e57823330c29acbd280cb4f02f95dbced59aa915b2befdcfa609d" Namespace="calico-system" Pod="calico-kube-controllers-6dc566c86b-xkhhq" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--kube--controllers--6dc566c86b--xkhhq-" Sep 13 00:11:05.334726 containerd[1620]: 2025-09-13 00:11:05.257 [INFO][4376] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3b8d05fde61e57823330c29acbd280cb4f02f95dbced59aa915b2befdcfa609d" Namespace="calico-system" Pod="calico-kube-controllers-6dc566c86b-xkhhq" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--kube--controllers--6dc566c86b--xkhhq-eth0" Sep 13 00:11:05.334726 containerd[1620]: 2025-09-13 00:11:05.276 [INFO][4388] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b8d05fde61e57823330c29acbd280cb4f02f95dbced59aa915b2befdcfa609d" HandleID="k8s-pod-network.3b8d05fde61e57823330c29acbd280cb4f02f95dbced59aa915b2befdcfa609d" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--kube--controllers--6dc566c86b--xkhhq-eth0" Sep 13 00:11:05.334726 containerd[1620]: 2025-09-13 00:11:05.276 [INFO][4388] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3b8d05fde61e57823330c29acbd280cb4f02f95dbced59aa915b2befdcfa609d" HandleID="k8s-pod-network.3b8d05fde61e57823330c29acbd280cb4f02f95dbced59aa915b2befdcfa609d" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--kube--controllers--6dc566c86b--xkhhq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd740), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-662926fb9e", "pod":"calico-kube-controllers-6dc566c86b-xkhhq", "timestamp":"2025-09-13 00:11:05.276502523 +0000 UTC"}, Hostname:"ci-4081-3-5-n-662926fb9e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:11:05.334726 containerd[1620]: 2025-09-13 00:11:05.276 [INFO][4388] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:05.334726 containerd[1620]: 2025-09-13 00:11:05.276 [INFO][4388] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:11:05.334726 containerd[1620]: 2025-09-13 00:11:05.276 [INFO][4388] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-662926fb9e' Sep 13 00:11:05.334726 containerd[1620]: 2025-09-13 00:11:05.283 [INFO][4388] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3b8d05fde61e57823330c29acbd280cb4f02f95dbced59aa915b2befdcfa609d" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:05.334726 containerd[1620]: 2025-09-13 00:11:05.288 [INFO][4388] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:05.334726 containerd[1620]: 2025-09-13 00:11:05.292 [INFO][4388] ipam/ipam.go 511: Trying affinity for 192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:05.334726 containerd[1620]: 2025-09-13 00:11:05.294 [INFO][4388] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:05.334726 containerd[1620]: 2025-09-13 00:11:05.296 [INFO][4388] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:05.334726 containerd[1620]: 2025-09-13 00:11:05.296 [INFO][4388] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.192/26 handle="k8s-pod-network.3b8d05fde61e57823330c29acbd280cb4f02f95dbced59aa915b2befdcfa609d" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:05.334726 containerd[1620]: 2025-09-13 00:11:05.297 [INFO][4388] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3b8d05fde61e57823330c29acbd280cb4f02f95dbced59aa915b2befdcfa609d Sep 13 00:11:05.334726 containerd[1620]: 2025-09-13 00:11:05.302 [INFO][4388] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.192/26 handle="k8s-pod-network.3b8d05fde61e57823330c29acbd280cb4f02f95dbced59aa915b2befdcfa609d" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:05.334726 containerd[1620]: 2025-09-13 00:11:05.307 [INFO][4388] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.194/26] block=192.168.28.192/26 handle="k8s-pod-network.3b8d05fde61e57823330c29acbd280cb4f02f95dbced59aa915b2befdcfa609d" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:05.334726 containerd[1620]: 2025-09-13 00:11:05.307 [INFO][4388] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.194/26] handle="k8s-pod-network.3b8d05fde61e57823330c29acbd280cb4f02f95dbced59aa915b2befdcfa609d" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:05.334726 containerd[1620]: 2025-09-13 00:11:05.307 [INFO][4388] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:11:05.334726 containerd[1620]: 2025-09-13 00:11:05.307 [INFO][4388] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.194/26] IPv6=[] ContainerID="3b8d05fde61e57823330c29acbd280cb4f02f95dbced59aa915b2befdcfa609d" HandleID="k8s-pod-network.3b8d05fde61e57823330c29acbd280cb4f02f95dbced59aa915b2befdcfa609d" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--kube--controllers--6dc566c86b--xkhhq-eth0" Sep 13 00:11:05.336554 containerd[1620]: 2025-09-13 00:11:05.312 [INFO][4376] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3b8d05fde61e57823330c29acbd280cb4f02f95dbced59aa915b2befdcfa609d" Namespace="calico-system" Pod="calico-kube-controllers-6dc566c86b-xkhhq" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--kube--controllers--6dc566c86b--xkhhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-calico--kube--controllers--6dc566c86b--xkhhq-eth0", GenerateName:"calico-kube-controllers-6dc566c86b-", Namespace:"calico-system", SelfLink:"", UID:"31e963cb-458c-437d-b290-9bfcbbbfa753", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dc566c86b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"", Pod:"calico-kube-controllers-6dc566c86b-xkhhq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.28.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali51906b91a83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:05.336554 containerd[1620]: 2025-09-13 00:11:05.312 [INFO][4376] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.194/32] ContainerID="3b8d05fde61e57823330c29acbd280cb4f02f95dbced59aa915b2befdcfa609d" Namespace="calico-system" Pod="calico-kube-controllers-6dc566c86b-xkhhq" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--kube--controllers--6dc566c86b--xkhhq-eth0" Sep 13 00:11:05.336554 containerd[1620]: 2025-09-13 00:11:05.312 [INFO][4376] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali51906b91a83 ContainerID="3b8d05fde61e57823330c29acbd280cb4f02f95dbced59aa915b2befdcfa609d" Namespace="calico-system" Pod="calico-kube-controllers-6dc566c86b-xkhhq" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--kube--controllers--6dc566c86b--xkhhq-eth0" Sep 13 00:11:05.336554 containerd[1620]: 2025-09-13 00:11:05.319 [INFO][4376] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b8d05fde61e57823330c29acbd280cb4f02f95dbced59aa915b2befdcfa609d" Namespace="calico-system" Pod="calico-kube-controllers-6dc566c86b-xkhhq" 
WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--kube--controllers--6dc566c86b--xkhhq-eth0" Sep 13 00:11:05.336554 containerd[1620]: 2025-09-13 00:11:05.320 [INFO][4376] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3b8d05fde61e57823330c29acbd280cb4f02f95dbced59aa915b2befdcfa609d" Namespace="calico-system" Pod="calico-kube-controllers-6dc566c86b-xkhhq" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--kube--controllers--6dc566c86b--xkhhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-calico--kube--controllers--6dc566c86b--xkhhq-eth0", GenerateName:"calico-kube-controllers-6dc566c86b-", Namespace:"calico-system", SelfLink:"", UID:"31e963cb-458c-437d-b290-9bfcbbbfa753", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dc566c86b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"3b8d05fde61e57823330c29acbd280cb4f02f95dbced59aa915b2befdcfa609d", Pod:"calico-kube-controllers-6dc566c86b-xkhhq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.28.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali51906b91a83", MAC:"36:d2:9b:61:49:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:05.336554 containerd[1620]: 2025-09-13 00:11:05.329 [INFO][4376] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3b8d05fde61e57823330c29acbd280cb4f02f95dbced59aa915b2befdcfa609d" Namespace="calico-system" Pod="calico-kube-controllers-6dc566c86b-xkhhq" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--kube--controllers--6dc566c86b--xkhhq-eth0" Sep 13 00:11:05.367508 containerd[1620]: time="2025-09-13T00:11:05.367437726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:05.367508 containerd[1620]: time="2025-09-13T00:11:05.367485025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:05.367847 containerd[1620]: time="2025-09-13T00:11:05.367494492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:05.367847 containerd[1620]: time="2025-09-13T00:11:05.367806327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:05.417490 containerd[1620]: time="2025-09-13T00:11:05.417420153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dc566c86b-xkhhq,Uid:31e963cb-458c-437d-b290-9bfcbbbfa753,Namespace:calico-system,Attempt:1,} returns sandbox id \"3b8d05fde61e57823330c29acbd280cb4f02f95dbced59aa915b2befdcfa609d\"" Sep 13 00:11:05.485005 containerd[1620]: time="2025-09-13T00:11:05.484970188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:05.486395 containerd[1620]: time="2025-09-13T00:11:05.486338575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 13 00:11:05.487335 containerd[1620]: time="2025-09-13T00:11:05.487032616Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:05.488779 containerd[1620]: time="2025-09-13T00:11:05.488757721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:05.489204 containerd[1620]: time="2025-09-13T00:11:05.489180834Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.918606111s" Sep 13 00:11:05.489254 containerd[1620]: time="2025-09-13T00:11:05.489207694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 00:11:05.490922 containerd[1620]: time="2025-09-13T00:11:05.490907141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:11:05.492207 containerd[1620]: time="2025-09-13T00:11:05.492180759Z" level=info msg="CreateContainer within sandbox \"eba543ee09d77df4d38f36855db877291aab7ebc9dbc5309301b78879f03298f\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:11:05.499208 containerd[1620]: time="2025-09-13T00:11:05.499174162Z" level=info msg="CreateContainer within sandbox \"eba543ee09d77df4d38f36855db877291aab7ebc9dbc5309301b78879f03298f\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"1dcb5c416be4e8f336a5a6d67a823c6f7cea48353faa274a21409107debfd178\"" Sep 13 00:11:05.499702 containerd[1620]: time="2025-09-13T00:11:05.499663649Z" level=info msg="StartContainer for \"1dcb5c416be4e8f336a5a6d67a823c6f7cea48353faa274a21409107debfd178\"" Sep 13 00:11:05.552881 containerd[1620]: time="2025-09-13T00:11:05.552819757Z" level=info msg="StartContainer for \"1dcb5c416be4e8f336a5a6d67a823c6f7cea48353faa274a21409107debfd178\" returns successfully" Sep 13 00:11:05.903849 systemd-networkd[1257]: vxlan.calico: Gained IPv6LL Sep 13 00:11:06.127833 containerd[1620]: time="2025-09-13T00:11:06.127720369Z" level=info msg="StopPodSandbox for \"30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0\"" Sep 13 00:11:06.128781 containerd[1620]: time="2025-09-13T00:11:06.128002338Z" level=info 
msg="StopPodSandbox for \"1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c\"" Sep 13 00:11:06.228234 containerd[1620]: 2025-09-13 00:11:06.186 [INFO][4505] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" Sep 13 00:11:06.228234 containerd[1620]: 2025-09-13 00:11:06.187 [INFO][4505] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" iface="eth0" netns="/var/run/netns/cni-e04cd4d4-b9ee-b65a-12c7-03bb5c104bc1" Sep 13 00:11:06.228234 containerd[1620]: 2025-09-13 00:11:06.187 [INFO][4505] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" iface="eth0" netns="/var/run/netns/cni-e04cd4d4-b9ee-b65a-12c7-03bb5c104bc1" Sep 13 00:11:06.228234 containerd[1620]: 2025-09-13 00:11:06.187 [INFO][4505] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" iface="eth0" netns="/var/run/netns/cni-e04cd4d4-b9ee-b65a-12c7-03bb5c104bc1" Sep 13 00:11:06.228234 containerd[1620]: 2025-09-13 00:11:06.187 [INFO][4505] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" Sep 13 00:11:06.228234 containerd[1620]: 2025-09-13 00:11:06.187 [INFO][4505] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" Sep 13 00:11:06.228234 containerd[1620]: 2025-09-13 00:11:06.218 [INFO][4517] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" HandleID="k8s-pod-network.30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" Workload="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--2m6jh-eth0" Sep 13 00:11:06.228234 containerd[1620]: 2025-09-13 00:11:06.218 [INFO][4517] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:06.228234 containerd[1620]: 2025-09-13 00:11:06.218 [INFO][4517] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:06.228234 containerd[1620]: 2025-09-13 00:11:06.223 [WARNING][4517] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" HandleID="k8s-pod-network.30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" Workload="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--2m6jh-eth0" Sep 13 00:11:06.228234 containerd[1620]: 2025-09-13 00:11:06.223 [INFO][4517] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" HandleID="k8s-pod-network.30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" Workload="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--2m6jh-eth0" Sep 13 00:11:06.228234 containerd[1620]: 2025-09-13 00:11:06.225 [INFO][4517] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:06.228234 containerd[1620]: 2025-09-13 00:11:06.226 [INFO][4505] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" Sep 13 00:11:06.230121 containerd[1620]: time="2025-09-13T00:11:06.228523843Z" level=info msg="TearDown network for sandbox \"30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0\" successfully" Sep 13 00:11:06.230121 containerd[1620]: time="2025-09-13T00:11:06.229374747Z" level=info msg="StopPodSandbox for \"30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0\" returns successfully" Sep 13 00:11:06.235327 containerd[1620]: time="2025-09-13T00:11:06.233143504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2m6jh,Uid:ea30c4ae-16d9-4c82-be5e-0de5a9b6d8dc,Namespace:kube-system,Attempt:1,}" Sep 13 00:11:06.233777 systemd[1]: run-netns-cni\x2de04cd4d4\x2db9ee\x2db65a\x2d12c7\x2d03bb5c104bc1.mount: Deactivated successfully. Sep 13 00:11:06.264292 containerd[1620]: 2025-09-13 00:11:06.200 [INFO][4504] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" Sep 13 00:11:06.264292 containerd[1620]: 2025-09-13 00:11:06.200 [INFO][4504] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" iface="eth0" netns="/var/run/netns/cni-ec62f0c8-a257-da07-f388-721b8ade7aa9" Sep 13 00:11:06.264292 containerd[1620]: 2025-09-13 00:11:06.200 [INFO][4504] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" iface="eth0" netns="/var/run/netns/cni-ec62f0c8-a257-da07-f388-721b8ade7aa9" Sep 13 00:11:06.264292 containerd[1620]: 2025-09-13 00:11:06.200 [INFO][4504] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" iface="eth0" netns="/var/run/netns/cni-ec62f0c8-a257-da07-f388-721b8ade7aa9" Sep 13 00:11:06.264292 containerd[1620]: 2025-09-13 00:11:06.200 [INFO][4504] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" Sep 13 00:11:06.264292 containerd[1620]: 2025-09-13 00:11:06.201 [INFO][4504] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" Sep 13 00:11:06.264292 containerd[1620]: 2025-09-13 00:11:06.245 [INFO][4522] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" HandleID="k8s-pod-network.1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nl8pm-eth0" Sep 13 00:11:06.264292 containerd[1620]: 2025-09-13 00:11:06.245 [INFO][4522] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:06.264292 containerd[1620]: 2025-09-13 00:11:06.246 [INFO][4522] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:06.264292 containerd[1620]: 2025-09-13 00:11:06.252 [WARNING][4522] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" HandleID="k8s-pod-network.1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nl8pm-eth0" Sep 13 00:11:06.264292 containerd[1620]: 2025-09-13 00:11:06.253 [INFO][4522] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" HandleID="k8s-pod-network.1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nl8pm-eth0" Sep 13 00:11:06.264292 containerd[1620]: 2025-09-13 00:11:06.258 [INFO][4522] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:06.264292 containerd[1620]: 2025-09-13 00:11:06.261 [INFO][4504] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" Sep 13 00:11:06.267392 containerd[1620]: time="2025-09-13T00:11:06.264477254Z" level=info msg="TearDown network for sandbox \"1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c\" successfully" Sep 13 00:11:06.267392 containerd[1620]: time="2025-09-13T00:11:06.264527589Z" level=info msg="StopPodSandbox for \"1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c\" returns successfully" Sep 13 00:11:06.267023 systemd[1]: run-netns-cni\x2dec62f0c8\x2da257\x2dda07\x2df388\x2d721b8ade7aa9.mount: Deactivated successfully. Sep 13 00:11:06.267696 containerd[1620]: time="2025-09-13T00:11:06.267677204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748c7ccd65-nl8pm,Uid:cc241816-fb81-4b70-a8db-a4aa35a35261,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:11:06.369944 systemd-networkd[1257]: cali75f4e169236: Link UP Sep 13 00:11:06.370115 systemd-networkd[1257]: cali75f4e169236: Gained carrier Sep 13 00:11:06.386103 containerd[1620]: 2025-09-13 00:11:06.299 [INFO][4531] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--2m6jh-eth0 coredns-7c65d6cfc9- kube-system ea30c4ae-16d9-4c82-be5e-0de5a9b6d8dc 956 0 2025-09-13 00:10:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-n-662926fb9e coredns-7c65d6cfc9-2m6jh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali75f4e169236 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="546201fc5932b8616af6d09045f84d3e86b990ea819d1f35dfd3856635dca719" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2m6jh" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--2m6jh-" Sep 13 00:11:06.386103 containerd[1620]: 2025-09-13 00:11:06.300 [INFO][4531] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="546201fc5932b8616af6d09045f84d3e86b990ea819d1f35dfd3856635dca719" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2m6jh" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--2m6jh-eth0" Sep 13 00:11:06.386103 containerd[1620]: 2025-09-13 00:11:06.330 [INFO][4554] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="546201fc5932b8616af6d09045f84d3e86b990ea819d1f35dfd3856635dca719" 
HandleID="k8s-pod-network.546201fc5932b8616af6d09045f84d3e86b990ea819d1f35dfd3856635dca719" Workload="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--2m6jh-eth0" Sep 13 00:11:06.386103 containerd[1620]: 2025-09-13 00:11:06.330 [INFO][4554] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="546201fc5932b8616af6d09045f84d3e86b990ea819d1f35dfd3856635dca719" HandleID="k8s-pod-network.546201fc5932b8616af6d09045f84d3e86b990ea819d1f35dfd3856635dca719" Workload="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--2m6jh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd740), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-n-662926fb9e", "pod":"coredns-7c65d6cfc9-2m6jh", "timestamp":"2025-09-13 00:11:06.330800578 +0000 UTC"}, Hostname:"ci-4081-3-5-n-662926fb9e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:11:06.386103 containerd[1620]: 2025-09-13 00:11:06.330 [INFO][4554] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:06.386103 containerd[1620]: 2025-09-13 00:11:06.331 [INFO][4554] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:06.386103 containerd[1620]: 2025-09-13 00:11:06.331 [INFO][4554] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-662926fb9e' Sep 13 00:11:06.386103 containerd[1620]: 2025-09-13 00:11:06.336 [INFO][4554] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.546201fc5932b8616af6d09045f84d3e86b990ea819d1f35dfd3856635dca719" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:06.386103 containerd[1620]: 2025-09-13 00:11:06.340 [INFO][4554] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:06.386103 containerd[1620]: 2025-09-13 00:11:06.344 [INFO][4554] ipam/ipam.go 511: Trying affinity for 192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:06.386103 containerd[1620]: 2025-09-13 00:11:06.345 [INFO][4554] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:06.386103 containerd[1620]: 2025-09-13 00:11:06.347 [INFO][4554] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:06.386103 containerd[1620]: 2025-09-13 00:11:06.347 [INFO][4554] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.192/26 handle="k8s-pod-network.546201fc5932b8616af6d09045f84d3e86b990ea819d1f35dfd3856635dca719" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:06.386103 containerd[1620]: 2025-09-13 00:11:06.348 [INFO][4554] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.546201fc5932b8616af6d09045f84d3e86b990ea819d1f35dfd3856635dca719 Sep 13 00:11:06.386103 containerd[1620]: 2025-09-13 00:11:06.352 [INFO][4554] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.192/26 handle="k8s-pod-network.546201fc5932b8616af6d09045f84d3e86b990ea819d1f35dfd3856635dca719" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:06.386103 containerd[1620]: 2025-09-13 00:11:06.362 [INFO][4554] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.195/26] block=192.168.28.192/26 handle="k8s-pod-network.546201fc5932b8616af6d09045f84d3e86b990ea819d1f35dfd3856635dca719" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:06.386103 containerd[1620]: 2025-09-13 
00:11:06.362 [INFO][4554] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.195/26] handle="k8s-pod-network.546201fc5932b8616af6d09045f84d3e86b990ea819d1f35dfd3856635dca719" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:06.386103 containerd[1620]: 2025-09-13 00:11:06.362 [INFO][4554] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:06.386103 containerd[1620]: 2025-09-13 00:11:06.362 [INFO][4554] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.195/26] IPv6=[] ContainerID="546201fc5932b8616af6d09045f84d3e86b990ea819d1f35dfd3856635dca719" HandleID="k8s-pod-network.546201fc5932b8616af6d09045f84d3e86b990ea819d1f35dfd3856635dca719" Workload="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--2m6jh-eth0" Sep 13 00:11:06.387272 containerd[1620]: 2025-09-13 00:11:06.365 [INFO][4531] cni-plugin/k8s.go 418: Populated endpoint ContainerID="546201fc5932b8616af6d09045f84d3e86b990ea819d1f35dfd3856635dca719" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2m6jh" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--2m6jh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--2m6jh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ea30c4ae-16d9-4c82-be5e-0de5a9b6d8dc", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"", Pod:"coredns-7c65d6cfc9-2m6jh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali75f4e169236", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:06.387272 containerd[1620]: 2025-09-13 00:11:06.365 [INFO][4531] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.195/32] ContainerID="546201fc5932b8616af6d09045f84d3e86b990ea819d1f35dfd3856635dca719" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2m6jh" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--2m6jh-eth0" Sep 13 00:11:06.387272 containerd[1620]: 2025-09-13 00:11:06.365 [INFO][4531] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali75f4e169236 ContainerID="546201fc5932b8616af6d09045f84d3e86b990ea819d1f35dfd3856635dca719" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-2m6jh" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--2m6jh-eth0" Sep 13 00:11:06.387272 containerd[1620]: 2025-09-13 00:11:06.369 [INFO][4531] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="546201fc5932b8616af6d09045f84d3e86b990ea819d1f35dfd3856635dca719" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2m6jh" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--2m6jh-eth0" Sep 13 00:11:06.387272 containerd[1620]: 2025-09-13 00:11:06.370 [INFO][4531] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="546201fc5932b8616af6d09045f84d3e86b990ea819d1f35dfd3856635dca719" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2m6jh" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--2m6jh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--2m6jh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ea30c4ae-16d9-4c82-be5e-0de5a9b6d8dc", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"546201fc5932b8616af6d09045f84d3e86b990ea819d1f35dfd3856635dca719", Pod:"coredns-7c65d6cfc9-2m6jh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali75f4e169236", MAC:"f2:ff:9e:ff:50:29", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:06.387272 containerd[1620]: 2025-09-13 00:11:06.381 [INFO][4531] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="546201fc5932b8616af6d09045f84d3e86b990ea819d1f35dfd3856635dca719" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2m6jh" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--2m6jh-eth0" Sep 13 00:11:06.403132 containerd[1620]: time="2025-09-13T00:11:06.402893811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:06.403132 containerd[1620]: time="2025-09-13T00:11:06.403000993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:06.403132 containerd[1620]: time="2025-09-13T00:11:06.403015450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:06.403132 containerd[1620]: time="2025-09-13T00:11:06.403092514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:06.461159 containerd[1620]: time="2025-09-13T00:11:06.461042980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2m6jh,Uid:ea30c4ae-16d9-4c82-be5e-0de5a9b6d8dc,Namespace:kube-system,Attempt:1,} returns sandbox id \"546201fc5932b8616af6d09045f84d3e86b990ea819d1f35dfd3856635dca719\"" Sep 13 00:11:06.465252 containerd[1620]: time="2025-09-13T00:11:06.465216595Z" level=info msg="CreateContainer within sandbox \"546201fc5932b8616af6d09045f84d3e86b990ea819d1f35dfd3856635dca719\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:11:06.479553 systemd-networkd[1257]: cali51906b91a83: Gained IPv6LL Sep 13 00:11:06.494895 systemd-networkd[1257]: cali16608ba41f7: Link UP Sep 13 00:11:06.495339 systemd-networkd[1257]: cali16608ba41f7: Gained carrier Sep 13 00:11:06.505857 containerd[1620]: 2025-09-13 00:11:06.315 [INFO][4540] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nl8pm-eth0 calico-apiserver-748c7ccd65- calico-apiserver cc241816-fb81-4b70-a8db-a4aa35a35261 958 0 2025-09-13 00:10:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:748c7ccd65 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-n-662926fb9e calico-apiserver-748c7ccd65-nl8pm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali16608ba41f7 [] [] }} ContainerID="a34e96249a4c56ffd61f572a48d86d08ad136cf4ef8d3ac9b893749d0de0b728" Namespace="calico-apiserver" Pod="calico-apiserver-748c7ccd65-nl8pm" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nl8pm-" Sep 13 00:11:06.505857 containerd[1620]: 2025-09-13 00:11:06.315 [INFO][4540] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a34e96249a4c56ffd61f572a48d86d08ad136cf4ef8d3ac9b893749d0de0b728" Namespace="calico-apiserver" Pod="calico-apiserver-748c7ccd65-nl8pm" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nl8pm-eth0" Sep 13 00:11:06.505857 containerd[1620]: 2025-09-13 00:11:06.341 [INFO][4559] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a34e96249a4c56ffd61f572a48d86d08ad136cf4ef8d3ac9b893749d0de0b728" HandleID="k8s-pod-network.a34e96249a4c56ffd61f572a48d86d08ad136cf4ef8d3ac9b893749d0de0b728" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nl8pm-eth0" Sep 13 00:11:06.505857 containerd[1620]: 2025-09-13 00:11:06.341 [INFO][4559] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a34e96249a4c56ffd61f572a48d86d08ad136cf4ef8d3ac9b893749d0de0b728" HandleID="k8s-pod-network.a34e96249a4c56ffd61f572a48d86d08ad136cf4ef8d3ac9b893749d0de0b728" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nl8pm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00024f740), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-n-662926fb9e", "pod":"calico-apiserver-748c7ccd65-nl8pm", "timestamp":"2025-09-13 00:11:06.34134018 +0000 UTC"}, Hostname:"ci-4081-3-5-n-662926fb9e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:11:06.505857 containerd[1620]: 2025-09-13 00:11:06.341 [INFO][4559] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:06.505857 containerd[1620]: 2025-09-13 00:11:06.362 [INFO][4559] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:06.505857 containerd[1620]: 2025-09-13 00:11:06.362 [INFO][4559] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-662926fb9e' Sep 13 00:11:06.505857 containerd[1620]: 2025-09-13 00:11:06.438 [INFO][4559] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a34e96249a4c56ffd61f572a48d86d08ad136cf4ef8d3ac9b893749d0de0b728" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:06.505857 containerd[1620]: 2025-09-13 00:11:06.447 [INFO][4559] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:06.505857 containerd[1620]: 2025-09-13 00:11:06.454 [INFO][4559] ipam/ipam.go 511: Trying affinity for 192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:06.505857 containerd[1620]: 2025-09-13 00:11:06.456 [INFO][4559] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:06.505857 containerd[1620]: 2025-09-13 00:11:06.459 [INFO][4559] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:06.505857 containerd[1620]: 2025-09-13 00:11:06.459 [INFO][4559] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.192/26 handle="k8s-pod-network.a34e96249a4c56ffd61f572a48d86d08ad136cf4ef8d3ac9b893749d0de0b728" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:06.505857 containerd[1620]: 2025-09-13 00:11:06.460 [INFO][4559] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a34e96249a4c56ffd61f572a48d86d08ad136cf4ef8d3ac9b893749d0de0b728 Sep 13 00:11:06.505857 containerd[1620]: 2025-09-13 00:11:06.467 [INFO][4559] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.192/26 handle="k8s-pod-network.a34e96249a4c56ffd61f572a48d86d08ad136cf4ef8d3ac9b893749d0de0b728" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:06.505857 containerd[1620]: 2025-09-13 00:11:06.474 [INFO][4559] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.196/26] block=192.168.28.192/26 handle="k8s-pod-network.a34e96249a4c56ffd61f572a48d86d08ad136cf4ef8d3ac9b893749d0de0b728" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:06.505857 containerd[1620]: 2025-09-13 00:11:06.474 [INFO][4559] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.196/26] handle="k8s-pod-network.a34e96249a4c56ffd61f572a48d86d08ad136cf4ef8d3ac9b893749d0de0b728" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:06.505857 containerd[1620]: 2025-09-13 00:11:06.474 [INFO][4559] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:11:06.505857 containerd[1620]: 2025-09-13 00:11:06.474 [INFO][4559] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.196/26] IPv6=[] ContainerID="a34e96249a4c56ffd61f572a48d86d08ad136cf4ef8d3ac9b893749d0de0b728" HandleID="k8s-pod-network.a34e96249a4c56ffd61f572a48d86d08ad136cf4ef8d3ac9b893749d0de0b728" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nl8pm-eth0" Sep 13 00:11:06.508861 containerd[1620]: 2025-09-13 00:11:06.481 [INFO][4540] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a34e96249a4c56ffd61f572a48d86d08ad136cf4ef8d3ac9b893749d0de0b728" Namespace="calico-apiserver" Pod="calico-apiserver-748c7ccd65-nl8pm" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nl8pm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nl8pm-eth0", GenerateName:"calico-apiserver-748c7ccd65-", Namespace:"calico-apiserver", SelfLink:"", UID:"cc241816-fb81-4b70-a8db-a4aa35a35261", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748c7ccd65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"", Pod:"calico-apiserver-748c7ccd65-nl8pm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali16608ba41f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:06.508861 containerd[1620]: 2025-09-13 00:11:06.482 [INFO][4540] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.196/32] ContainerID="a34e96249a4c56ffd61f572a48d86d08ad136cf4ef8d3ac9b893749d0de0b728" Namespace="calico-apiserver" Pod="calico-apiserver-748c7ccd65-nl8pm" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nl8pm-eth0" Sep 13 00:11:06.508861 containerd[1620]: 2025-09-13 00:11:06.483 [INFO][4540] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali16608ba41f7 ContainerID="a34e96249a4c56ffd61f572a48d86d08ad136cf4ef8d3ac9b893749d0de0b728" Namespace="calico-apiserver" Pod="calico-apiserver-748c7ccd65-nl8pm" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nl8pm-eth0" Sep 13 00:11:06.508861 containerd[1620]: 2025-09-13 00:11:06.486 [INFO][4540] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a34e96249a4c56ffd61f572a48d86d08ad136cf4ef8d3ac9b893749d0de0b728" Namespace="calico-apiserver" Pod="calico-apiserver-748c7ccd65-nl8pm" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nl8pm-eth0" Sep 13 00:11:06.508861 containerd[1620]: 2025-09-13 
00:11:06.487 [INFO][4540] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a34e96249a4c56ffd61f572a48d86d08ad136cf4ef8d3ac9b893749d0de0b728" Namespace="calico-apiserver" Pod="calico-apiserver-748c7ccd65-nl8pm" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nl8pm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nl8pm-eth0", GenerateName:"calico-apiserver-748c7ccd65-", Namespace:"calico-apiserver", SelfLink:"", UID:"cc241816-fb81-4b70-a8db-a4aa35a35261", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748c7ccd65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"a34e96249a4c56ffd61f572a48d86d08ad136cf4ef8d3ac9b893749d0de0b728", Pod:"calico-apiserver-748c7ccd65-nl8pm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali16608ba41f7", MAC:"1a:5b:50:c1:32:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:06.508861 containerd[1620]: 2025-09-13 00:11:06.501 [INFO][4540] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a34e96249a4c56ffd61f572a48d86d08ad136cf4ef8d3ac9b893749d0de0b728" Namespace="calico-apiserver" Pod="calico-apiserver-748c7ccd65-nl8pm" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nl8pm-eth0" Sep 13 00:11:06.524846 containerd[1620]: time="2025-09-13T00:11:06.524771248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:06.524957 containerd[1620]: time="2025-09-13T00:11:06.524857079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:06.524957 containerd[1620]: time="2025-09-13T00:11:06.524885433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:06.525408 containerd[1620]: time="2025-09-13T00:11:06.525368518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:06.572653 containerd[1620]: time="2025-09-13T00:11:06.572607450Z" level=info msg="CreateContainer within sandbox \"546201fc5932b8616af6d09045f84d3e86b990ea819d1f35dfd3856635dca719\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"23be9e02afb869b9a053c894a251da9b9a1b177be639e8bad504e9025159e4f8\"" Sep 13 00:11:06.573579 containerd[1620]: time="2025-09-13T00:11:06.573543525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748c7ccd65-nl8pm,Uid:cc241816-fb81-4b70-a8db-a4aa35a35261,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a34e96249a4c56ffd61f572a48d86d08ad136cf4ef8d3ac9b893749d0de0b728\"" Sep 13 00:11:06.574375 containerd[1620]: time="2025-09-13T00:11:06.574102043Z" level=info msg="StartContainer for \"23be9e02afb869b9a053c894a251da9b9a1b177be639e8bad504e9025159e4f8\"" Sep 13 00:11:06.619286 containerd[1620]: time="2025-09-13T00:11:06.619257257Z" level=info msg="StartContainer for \"23be9e02afb869b9a053c894a251da9b9a1b177be639e8bad504e9025159e4f8\" returns successfully" Sep 13 00:11:07.129481 containerd[1620]: time="2025-09-13T00:11:07.129430377Z" level=info msg="StopPodSandbox for \"fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad\"" Sep 13 00:11:07.135385 containerd[1620]: time="2025-09-13T00:11:07.133684623Z" level=info msg="StopPodSandbox for \"2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072\"" Sep 13 00:11:07.277735 containerd[1620]: 2025-09-13 00:11:07.223 [INFO][4726] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" Sep 13 00:11:07.277735 containerd[1620]: 2025-09-13 00:11:07.224 [INFO][4726] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" iface="eth0" netns="/var/run/netns/cni-73aaa470-c7bd-41f0-8846-285053e31f34" Sep 13 00:11:07.277735 containerd[1620]: 2025-09-13 00:11:07.224 [INFO][4726] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" iface="eth0" netns="/var/run/netns/cni-73aaa470-c7bd-41f0-8846-285053e31f34" Sep 13 00:11:07.277735 containerd[1620]: 2025-09-13 00:11:07.225 [INFO][4726] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" iface="eth0" netns="/var/run/netns/cni-73aaa470-c7bd-41f0-8846-285053e31f34" Sep 13 00:11:07.277735 containerd[1620]: 2025-09-13 00:11:07.225 [INFO][4726] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" Sep 13 00:11:07.277735 containerd[1620]: 2025-09-13 00:11:07.225 [INFO][4726] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" Sep 13 00:11:07.277735 containerd[1620]: 2025-09-13 00:11:07.264 [INFO][4740] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" HandleID="k8s-pod-network.2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" Workload="ci--4081--3--5--n--662926fb9e-k8s-goldmane--7988f88666--q6rw4-eth0" Sep 13 00:11:07.277735 containerd[1620]: 2025-09-13 00:11:07.264 [INFO][4740] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:07.277735 containerd[1620]: 2025-09-13 00:11:07.265 [INFO][4740] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:07.277735 containerd[1620]: 2025-09-13 00:11:07.270 [WARNING][4740] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" HandleID="k8s-pod-network.2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" Workload="ci--4081--3--5--n--662926fb9e-k8s-goldmane--7988f88666--q6rw4-eth0" Sep 13 00:11:07.277735 containerd[1620]: 2025-09-13 00:11:07.270 [INFO][4740] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" HandleID="k8s-pod-network.2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" Workload="ci--4081--3--5--n--662926fb9e-k8s-goldmane--7988f88666--q6rw4-eth0" Sep 13 00:11:07.277735 containerd[1620]: 2025-09-13 00:11:07.271 [INFO][4740] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:07.277735 containerd[1620]: 2025-09-13 00:11:07.275 [INFO][4726] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" Sep 13 00:11:07.280412 containerd[1620]: time="2025-09-13T00:11:07.279920368Z" level=info msg="TearDown network for sandbox \"2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072\" successfully" Sep 13 00:11:07.280412 containerd[1620]: time="2025-09-13T00:11:07.279948291Z" level=info msg="StopPodSandbox for \"2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072\" returns successfully" Sep 13 00:11:07.281156 systemd[1]: run-netns-cni\x2d73aaa470\x2dc7bd\x2d41f0\x2d8846\x2d285053e31f34.mount: Deactivated successfully. Sep 13 00:11:07.283941 containerd[1620]: time="2025-09-13T00:11:07.283562026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-q6rw4,Uid:33f7161a-ca41-4c6b-95d5-d5f552f3a553,Namespace:calico-system,Attempt:1,}" Sep 13 00:11:07.288073 containerd[1620]: 2025-09-13 00:11:07.223 [INFO][4725] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" Sep 13 00:11:07.288073 containerd[1620]: 2025-09-13 00:11:07.224 [INFO][4725] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" iface="eth0" netns="/var/run/netns/cni-aa5d4e81-b92a-87f3-0ba5-bf2a5d58687e" Sep 13 00:11:07.288073 containerd[1620]: 2025-09-13 00:11:07.224 [INFO][4725] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" iface="eth0" netns="/var/run/netns/cni-aa5d4e81-b92a-87f3-0ba5-bf2a5d58687e" Sep 13 00:11:07.288073 containerd[1620]: 2025-09-13 00:11:07.224 [INFO][4725] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" iface="eth0" netns="/var/run/netns/cni-aa5d4e81-b92a-87f3-0ba5-bf2a5d58687e" Sep 13 00:11:07.288073 containerd[1620]: 2025-09-13 00:11:07.224 [INFO][4725] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" Sep 13 00:11:07.288073 containerd[1620]: 2025-09-13 00:11:07.224 [INFO][4725] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" Sep 13 00:11:07.288073 containerd[1620]: 2025-09-13 00:11:07.268 [INFO][4739] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" HandleID="k8s-pod-network.fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0" Sep 13 00:11:07.288073 containerd[1620]: 2025-09-13 00:11:07.268 [INFO][4739] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:07.288073 containerd[1620]: 2025-09-13 00:11:07.271 [INFO][4739] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:07.288073 containerd[1620]: 2025-09-13 00:11:07.277 [WARNING][4739] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" HandleID="k8s-pod-network.fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0" Sep 13 00:11:07.288073 containerd[1620]: 2025-09-13 00:11:07.278 [INFO][4739] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" HandleID="k8s-pod-network.fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0" Sep 13 00:11:07.288073 containerd[1620]: 2025-09-13 00:11:07.283 [INFO][4739] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:07.288073 containerd[1620]: 2025-09-13 00:11:07.285 [INFO][4725] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" Sep 13 00:11:07.289252 containerd[1620]: time="2025-09-13T00:11:07.288212185Z" level=info msg="TearDown network for sandbox \"fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad\" successfully" Sep 13 00:11:07.289252 containerd[1620]: time="2025-09-13T00:11:07.288248684Z" level=info msg="StopPodSandbox for \"fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad\" returns successfully" Sep 13 00:11:07.289982 containerd[1620]: time="2025-09-13T00:11:07.289748046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bcbcd6df9-pltxz,Uid:e1724762-f3a5-4a7f-9c75-353a81a041e5,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:11:07.291828 systemd[1]: run-netns-cni\x2daa5d4e81\x2db92a\x2d87f3\x2d0ba5\x2dbf2a5d58687e.mount: Deactivated successfully. Sep 13 00:11:07.464284 systemd-networkd[1257]: cali36be0794c39: Link UP Sep 13 00:11:07.468024 systemd-networkd[1257]: cali36be0794c39: Gained carrier Sep 13 00:11:07.505338 containerd[1620]: 2025-09-13 00:11:07.367 [INFO][4761] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0 calico-apiserver-bcbcd6df9- calico-apiserver e1724762-f3a5-4a7f-9c75-353a81a041e5 974 0 2025-09-13 00:10:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bcbcd6df9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-n-662926fb9e calico-apiserver-bcbcd6df9-pltxz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali36be0794c39 [] [] }} ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" Namespace="calico-apiserver" Pod="calico-apiserver-bcbcd6df9-pltxz" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-" Sep 13 00:11:07.505338 containerd[1620]: 2025-09-13 00:11:07.367 [INFO][4761] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" Namespace="calico-apiserver" Pod="calico-apiserver-bcbcd6df9-pltxz" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0" Sep 13 00:11:07.505338 containerd[1620]: 2025-09-13 00:11:07.401 [INFO][4775] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" HandleID="k8s-pod-network.0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0" Sep 13 00:11:07.505338 containerd[1620]: 2025-09-13 00:11:07.402 [INFO][4775] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" HandleID="k8s-pod-network.0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5810), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-n-662926fb9e", "pod":"calico-apiserver-bcbcd6df9-pltxz", "timestamp":"2025-09-13 00:11:07.401845954 +0000 UTC"}, Hostname:"ci-4081-3-5-n-662926fb9e", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:11:07.505338 containerd[1620]: 2025-09-13 00:11:07.402 [INFO][4775] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:07.505338 containerd[1620]: 2025-09-13 00:11:07.402 [INFO][4775] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:07.505338 containerd[1620]: 2025-09-13 00:11:07.402 [INFO][4775] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-662926fb9e' Sep 13 00:11:07.505338 containerd[1620]: 2025-09-13 00:11:07.410 [INFO][4775] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:07.505338 containerd[1620]: 2025-09-13 00:11:07.419 [INFO][4775] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:07.505338 containerd[1620]: 2025-09-13 00:11:07.428 [INFO][4775] ipam/ipam.go 511: Trying affinity for 192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:07.505338 containerd[1620]: 2025-09-13 00:11:07.433 [INFO][4775] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:07.505338 containerd[1620]: 2025-09-13 00:11:07.437 [INFO][4775] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:07.505338 containerd[1620]: 2025-09-13 00:11:07.437 [INFO][4775] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.192/26 handle="k8s-pod-network.0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:07.505338 containerd[1620]: 2025-09-13 00:11:07.441 [INFO][4775] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e Sep 13 00:11:07.505338 containerd[1620]: 2025-09-13 00:11:07.445 [INFO][4775] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.192/26 handle="k8s-pod-network.0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:07.505338 containerd[1620]: 2025-09-13 00:11:07.455 [INFO][4775] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.197/26] block=192.168.28.192/26 handle="k8s-pod-network.0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:07.505338 containerd[1620]: 2025-09-13 00:11:07.456 [INFO][4775] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.197/26] handle="k8s-pod-network.0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:07.505338 containerd[1620]: 2025-09-13 00:11:07.456 [INFO][4775] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:11:07.505338 containerd[1620]: 2025-09-13 00:11:07.456 [INFO][4775] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.197/26] IPv6=[] ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" HandleID="k8s-pod-network.0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0"
Sep 13 00:11:07.506772 containerd[1620]: 2025-09-13 00:11:07.460 [INFO][4761] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" Namespace="calico-apiserver" Pod="calico-apiserver-bcbcd6df9-pltxz" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0", GenerateName:"calico-apiserver-bcbcd6df9-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1724762-f3a5-4a7f-9c75-353a81a041e5", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bcbcd6df9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"", Pod:"calico-apiserver-bcbcd6df9-pltxz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali36be0794c39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:11:07.506772 containerd[1620]: 2025-09-13 00:11:07.460 [INFO][4761] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.197/32] ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" Namespace="calico-apiserver" Pod="calico-apiserver-bcbcd6df9-pltxz" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0"
Sep 13 00:11:07.506772 containerd[1620]: 2025-09-13 00:11:07.460 [INFO][4761] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36be0794c39 ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" Namespace="calico-apiserver" Pod="calico-apiserver-bcbcd6df9-pltxz" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0"
Sep 13 00:11:07.506772 containerd[1620]: 2025-09-13 00:11:07.467 [INFO][4761] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" Namespace="calico-apiserver" Pod="calico-apiserver-bcbcd6df9-pltxz" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0"
Sep 13 00:11:07.506772 containerd[1620]: 2025-09-13 00:11:07.467 [INFO][4761] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" Namespace="calico-apiserver" Pod="calico-apiserver-bcbcd6df9-pltxz" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0", GenerateName:"calico-apiserver-bcbcd6df9-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1724762-f3a5-4a7f-9c75-353a81a041e5", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bcbcd6df9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e", Pod:"calico-apiserver-bcbcd6df9-pltxz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali36be0794c39", MAC:"92:e0:b5:c1:c9:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:11:07.506772 containerd[1620]: 2025-09-13 00:11:07.499 [INFO][4761] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" Namespace="calico-apiserver" Pod="calico-apiserver-bcbcd6df9-pltxz" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0"
Sep 13 00:11:07.547493 kubelet[2755]: I0913 00:11:07.546252 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-2m6jh" podStartSLOduration=37.546232562 podStartE2EDuration="37.546232562s" podCreationTimestamp="2025-09-13 00:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:11:07.513263337 +0000 UTC m=+42.505372653" watchObservedRunningTime="2025-09-13 00:11:07.546232562 +0000 UTC m=+42.538341878"
Sep 13 00:11:07.580009 containerd[1620]: time="2025-09-13T00:11:07.579685084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:11:07.584009 containerd[1620]: time="2025-09-13T00:11:07.580822477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:11:07.584009 containerd[1620]: time="2025-09-13T00:11:07.580861249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:11:07.584009 containerd[1620]: time="2025-09-13T00:11:07.580967438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:11:07.620235 systemd-networkd[1257]: cali252f60d1474: Link UP
Sep 13 00:11:07.622008 systemd-networkd[1257]: cali252f60d1474: Gained carrier
Sep 13 00:11:07.643450 containerd[1620]: 2025-09-13 00:11:07.365 [INFO][4751] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--662926fb9e-k8s-goldmane--7988f88666--q6rw4-eth0 goldmane-7988f88666- calico-system 33f7161a-ca41-4c6b-95d5-d5f552f3a553 973 0 2025-09-13 00:10:42 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-5-n-662926fb9e goldmane-7988f88666-q6rw4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali252f60d1474 [] [] }} ContainerID="7072df19511b7031d04d8d544faef55f3f1b9ca19b333fd170a0c78ce0926b38" Namespace="calico-system" Pod="goldmane-7988f88666-q6rw4" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-goldmane--7988f88666--q6rw4-"
Sep 13 00:11:07.643450 containerd[1620]: 2025-09-13 00:11:07.366 [INFO][4751] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7072df19511b7031d04d8d544faef55f3f1b9ca19b333fd170a0c78ce0926b38" Namespace="calico-system" Pod="goldmane-7988f88666-q6rw4" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-goldmane--7988f88666--q6rw4-eth0"
Sep 13 00:11:07.643450 containerd[1620]: 2025-09-13 00:11:07.401 [INFO][4778] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7072df19511b7031d04d8d544faef55f3f1b9ca19b333fd170a0c78ce0926b38" HandleID="k8s-pod-network.7072df19511b7031d04d8d544faef55f3f1b9ca19b333fd170a0c78ce0926b38" Workload="ci--4081--3--5--n--662926fb9e-k8s-goldmane--7988f88666--q6rw4-eth0"
Sep 13 00:11:07.643450 containerd[1620]: 2025-09-13 00:11:07.402 [INFO][4778] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7072df19511b7031d04d8d544faef55f3f1b9ca19b333fd170a0c78ce0926b38" HandleID="k8s-pod-network.7072df19511b7031d04d8d544faef55f3f1b9ca19b333fd170a0c78ce0926b38" Workload="ci--4081--3--5--n--662926fb9e-k8s-goldmane--7988f88666--q6rw4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5040), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-662926fb9e", "pod":"goldmane-7988f88666-q6rw4", "timestamp":"2025-09-13 00:11:07.401859068 +0000 UTC"}, Hostname:"ci-4081-3-5-n-662926fb9e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 13 00:11:07.643450 containerd[1620]: 2025-09-13 00:11:07.402 [INFO][4778] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:11:07.643450 containerd[1620]: 2025-09-13 00:11:07.456 [INFO][4778] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:11:07.643450 containerd[1620]: 2025-09-13 00:11:07.456 [INFO][4778] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-662926fb9e'
Sep 13 00:11:07.643450 containerd[1620]: 2025-09-13 00:11:07.516 [INFO][4778] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7072df19511b7031d04d8d544faef55f3f1b9ca19b333fd170a0c78ce0926b38" host="ci-4081-3-5-n-662926fb9e"
Sep 13 00:11:07.643450 containerd[1620]: 2025-09-13 00:11:07.542 [INFO][4778] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-662926fb9e"
Sep 13 00:11:07.643450 containerd[1620]: 2025-09-13 00:11:07.574 [INFO][4778] ipam/ipam.go 511: Trying affinity for 192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e"
Sep 13 00:11:07.643450 containerd[1620]: 2025-09-13 00:11:07.578 [INFO][4778] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e"
Sep 13 00:11:07.643450 containerd[1620]: 2025-09-13 00:11:07.585 [INFO][4778] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e"
Sep 13 00:11:07.643450 containerd[1620]: 2025-09-13 00:11:07.586 [INFO][4778] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.192/26 handle="k8s-pod-network.7072df19511b7031d04d8d544faef55f3f1b9ca19b333fd170a0c78ce0926b38" host="ci-4081-3-5-n-662926fb9e"
Sep 13 00:11:07.643450 containerd[1620]: 2025-09-13 00:11:07.588 [INFO][4778] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7072df19511b7031d04d8d544faef55f3f1b9ca19b333fd170a0c78ce0926b38
Sep 13 00:11:07.643450 containerd[1620]: 2025-09-13 00:11:07.593 [INFO][4778] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.192/26 handle="k8s-pod-network.7072df19511b7031d04d8d544faef55f3f1b9ca19b333fd170a0c78ce0926b38" host="ci-4081-3-5-n-662926fb9e"
Sep 13 00:11:07.643450 containerd[1620]: 2025-09-13 00:11:07.604 [INFO][4778] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.198/26] block=192.168.28.192/26 handle="k8s-pod-network.7072df19511b7031d04d8d544faef55f3f1b9ca19b333fd170a0c78ce0926b38" host="ci-4081-3-5-n-662926fb9e"
Sep 13 00:11:07.643450 containerd[1620]: 2025-09-13 00:11:07.604 [INFO][4778] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.198/26] handle="k8s-pod-network.7072df19511b7031d04d8d544faef55f3f1b9ca19b333fd170a0c78ce0926b38" host="ci-4081-3-5-n-662926fb9e"
Sep 13 00:11:07.643450 containerd[1620]: 2025-09-13 00:11:07.604 [INFO][4778] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:11:07.643450 containerd[1620]: 2025-09-13 00:11:07.605 [INFO][4778] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.198/26] IPv6=[] ContainerID="7072df19511b7031d04d8d544faef55f3f1b9ca19b333fd170a0c78ce0926b38" HandleID="k8s-pod-network.7072df19511b7031d04d8d544faef55f3f1b9ca19b333fd170a0c78ce0926b38" Workload="ci--4081--3--5--n--662926fb9e-k8s-goldmane--7988f88666--q6rw4-eth0"
Sep 13 00:11:07.644364 containerd[1620]: 2025-09-13 00:11:07.612 [INFO][4751] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7072df19511b7031d04d8d544faef55f3f1b9ca19b333fd170a0c78ce0926b38" Namespace="calico-system" Pod="goldmane-7988f88666-q6rw4" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-goldmane--7988f88666--q6rw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-goldmane--7988f88666--q6rw4-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"33f7161a-ca41-4c6b-95d5-d5f552f3a553", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"", Pod:"goldmane-7988f88666-q6rw4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.28.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali252f60d1474", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:11:07.644364 containerd[1620]: 2025-09-13 00:11:07.612 [INFO][4751] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.198/32] ContainerID="7072df19511b7031d04d8d544faef55f3f1b9ca19b333fd170a0c78ce0926b38" Namespace="calico-system" Pod="goldmane-7988f88666-q6rw4" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-goldmane--7988f88666--q6rw4-eth0"
Sep 13 00:11:07.644364 containerd[1620]: 2025-09-13 00:11:07.612 [INFO][4751] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali252f60d1474 ContainerID="7072df19511b7031d04d8d544faef55f3f1b9ca19b333fd170a0c78ce0926b38" Namespace="calico-system" Pod="goldmane-7988f88666-q6rw4" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-goldmane--7988f88666--q6rw4-eth0"
Sep 13 00:11:07.644364 containerd[1620]: 2025-09-13 00:11:07.621 [INFO][4751] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7072df19511b7031d04d8d544faef55f3f1b9ca19b333fd170a0c78ce0926b38" Namespace="calico-system" Pod="goldmane-7988f88666-q6rw4" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-goldmane--7988f88666--q6rw4-eth0"
Sep 13 00:11:07.644364 containerd[1620]: 2025-09-13 00:11:07.623 [INFO][4751] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7072df19511b7031d04d8d544faef55f3f1b9ca19b333fd170a0c78ce0926b38" Namespace="calico-system" Pod="goldmane-7988f88666-q6rw4" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-goldmane--7988f88666--q6rw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-goldmane--7988f88666--q6rw4-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"33f7161a-ca41-4c6b-95d5-d5f552f3a553", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"7072df19511b7031d04d8d544faef55f3f1b9ca19b333fd170a0c78ce0926b38", Pod:"goldmane-7988f88666-q6rw4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.28.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali252f60d1474", MAC:"1a:2a:48:6b:e9:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:11:07.644364 containerd[1620]: 2025-09-13 00:11:07.637 [INFO][4751] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7072df19511b7031d04d8d544faef55f3f1b9ca19b333fd170a0c78ce0926b38" Namespace="calico-system" Pod="goldmane-7988f88666-q6rw4" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-goldmane--7988f88666--q6rw4-eth0"
Sep 13 00:11:07.662584 containerd[1620]: time="2025-09-13T00:11:07.662514446Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:11:07.662693 containerd[1620]: time="2025-09-13T00:11:07.662591391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:11:07.662693 containerd[1620]: time="2025-09-13T00:11:07.662626065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:11:07.662736 containerd[1620]: time="2025-09-13T00:11:07.662711565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:11:07.695744 containerd[1620]: time="2025-09-13T00:11:07.695679008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bcbcd6df9-pltxz,Uid:e1724762-f3a5-4a7f-9c75-353a81a041e5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e\""
Sep 13 00:11:07.748642 containerd[1620]: time="2025-09-13T00:11:07.748480568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-q6rw4,Uid:33f7161a-ca41-4c6b-95d5-d5f552f3a553,Namespace:calico-system,Attempt:1,} returns sandbox id \"7072df19511b7031d04d8d544faef55f3f1b9ca19b333fd170a0c78ce0926b38\""
Sep 13 00:11:07.760507 systemd-networkd[1257]: cali75f4e169236: Gained IPv6LL
Sep 13 00:11:08.016115 systemd-networkd[1257]: cali16608ba41f7: Gained IPv6LL
Sep 13 00:11:08.128648 containerd[1620]: time="2025-09-13T00:11:08.127685679Z" level=info msg="StopPodSandbox for \"c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3\""
Sep 13 00:11:08.232457 containerd[1620]: 2025-09-13 00:11:08.180 [INFO][4906] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3"
Sep 13 00:11:08.232457 containerd[1620]: 2025-09-13 00:11:08.181 [INFO][4906] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" iface="eth0" netns="/var/run/netns/cni-24596b6e-8a30-3b68-e443-fc34c3de878d"
Sep 13 00:11:08.232457 containerd[1620]: 2025-09-13 00:11:08.181 [INFO][4906] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" iface="eth0" netns="/var/run/netns/cni-24596b6e-8a30-3b68-e443-fc34c3de878d"
Sep 13 00:11:08.232457 containerd[1620]: 2025-09-13 00:11:08.181 [INFO][4906] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" iface="eth0" netns="/var/run/netns/cni-24596b6e-8a30-3b68-e443-fc34c3de878d"
Sep 13 00:11:08.232457 containerd[1620]: 2025-09-13 00:11:08.181 [INFO][4906] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3"
Sep 13 00:11:08.232457 containerd[1620]: 2025-09-13 00:11:08.181 [INFO][4906] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3"
Sep 13 00:11:08.232457 containerd[1620]: 2025-09-13 00:11:08.212 [INFO][4913] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" HandleID="k8s-pod-network.c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" Workload="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--9jlsq-eth0"
Sep 13 00:11:08.232457 containerd[1620]: 2025-09-13 00:11:08.212 [INFO][4913] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:11:08.232457 containerd[1620]: 2025-09-13 00:11:08.212 [INFO][4913] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:11:08.232457 containerd[1620]: 2025-09-13 00:11:08.219 [WARNING][4913] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" HandleID="k8s-pod-network.c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" Workload="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--9jlsq-eth0"
Sep 13 00:11:08.232457 containerd[1620]: 2025-09-13 00:11:08.219 [INFO][4913] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" HandleID="k8s-pod-network.c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" Workload="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--9jlsq-eth0"
Sep 13 00:11:08.232457 containerd[1620]: 2025-09-13 00:11:08.222 [INFO][4913] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:11:08.232457 containerd[1620]: 2025-09-13 00:11:08.226 [INFO][4906] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3"
Sep 13 00:11:08.232457 containerd[1620]: time="2025-09-13T00:11:08.230464885Z" level=info msg="TearDown network for sandbox \"c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3\" successfully"
Sep 13 00:11:08.232457 containerd[1620]: time="2025-09-13T00:11:08.230519117Z" level=info msg="StopPodSandbox for \"c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3\" returns successfully"
Sep 13 00:11:08.235648 systemd[1]: run-netns-cni\x2d24596b6e\x2d8a30\x2d3b68\x2de443\x2dfc34c3de878d.mount: Deactivated successfully.
Sep 13 00:11:08.238026 containerd[1620]: time="2025-09-13T00:11:08.237743742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9jlsq,Uid:398f713a-e38d-4416-8b6a-bb19b2e75262,Namespace:kube-system,Attempt:1,}"
Sep 13 00:11:08.363606 systemd-networkd[1257]: calic229c22f614: Link UP
Sep 13 00:11:08.363760 systemd-networkd[1257]: calic229c22f614: Gained carrier
Sep 13 00:11:08.396044 containerd[1620]: 2025-09-13 00:11:08.280 [INFO][4919] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--9jlsq-eth0 coredns-7c65d6cfc9- kube-system 398f713a-e38d-4416-8b6a-bb19b2e75262 994 0 2025-09-13 00:10:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-n-662926fb9e coredns-7c65d6cfc9-9jlsq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic229c22f614 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="63b313bca3707b8bd61414e707551dc97861981cf0c4446707a50b77b9ef485a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9jlsq" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--9jlsq-"
Sep 13 00:11:08.396044 containerd[1620]: 2025-09-13 00:11:08.280 [INFO][4919] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="63b313bca3707b8bd61414e707551dc97861981cf0c4446707a50b77b9ef485a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9jlsq" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--9jlsq-eth0"
Sep 13 00:11:08.396044 containerd[1620]: 2025-09-13 00:11:08.307 [INFO][4932] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63b313bca3707b8bd61414e707551dc97861981cf0c4446707a50b77b9ef485a" HandleID="k8s-pod-network.63b313bca3707b8bd61414e707551dc97861981cf0c4446707a50b77b9ef485a" Workload="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--9jlsq-eth0"
Sep 13 00:11:08.396044 containerd[1620]: 2025-09-13 00:11:08.307 [INFO][4932] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="63b313bca3707b8bd61414e707551dc97861981cf0c4446707a50b77b9ef485a" HandleID="k8s-pod-network.63b313bca3707b8bd61414e707551dc97861981cf0c4446707a50b77b9ef485a" Workload="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--9jlsq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-n-662926fb9e", "pod":"coredns-7c65d6cfc9-9jlsq", "timestamp":"2025-09-13 00:11:08.307748427 +0000 UTC"}, Hostname:"ci-4081-3-5-n-662926fb9e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 13 00:11:08.396044 containerd[1620]: 2025-09-13 00:11:08.308 [INFO][4932] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:11:08.396044 containerd[1620]: 2025-09-13 00:11:08.308 [INFO][4932] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:11:08.396044 containerd[1620]: 2025-09-13 00:11:08.308 [INFO][4932] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-662926fb9e'
Sep 13 00:11:08.396044 containerd[1620]: 2025-09-13 00:11:08.315 [INFO][4932] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.63b313bca3707b8bd61414e707551dc97861981cf0c4446707a50b77b9ef485a" host="ci-4081-3-5-n-662926fb9e"
Sep 13 00:11:08.396044 containerd[1620]: 2025-09-13 00:11:08.320 [INFO][4932] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-662926fb9e"
Sep 13 00:11:08.396044 containerd[1620]: 2025-09-13 00:11:08.325 [INFO][4932] ipam/ipam.go 511: Trying affinity for 192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e"
Sep 13 00:11:08.396044 containerd[1620]: 2025-09-13 00:11:08.329 [INFO][4932] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e"
Sep 13 00:11:08.396044 containerd[1620]: 2025-09-13 00:11:08.332 [INFO][4932] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e"
Sep 13 00:11:08.396044 containerd[1620]: 2025-09-13 00:11:08.332 [INFO][4932] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.192/26 handle="k8s-pod-network.63b313bca3707b8bd61414e707551dc97861981cf0c4446707a50b77b9ef485a" host="ci-4081-3-5-n-662926fb9e"
Sep 13 00:11:08.396044 containerd[1620]: 2025-09-13 00:11:08.334 [INFO][4932] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.63b313bca3707b8bd61414e707551dc97861981cf0c4446707a50b77b9ef485a
Sep 13 00:11:08.396044 containerd[1620]: 2025-09-13 00:11:08.339 [INFO][4932] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.192/26 handle="k8s-pod-network.63b313bca3707b8bd61414e707551dc97861981cf0c4446707a50b77b9ef485a" host="ci-4081-3-5-n-662926fb9e"
Sep 13 00:11:08.396044 containerd[1620]: 2025-09-13 00:11:08.352 [INFO][4932] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.199/26] block=192.168.28.192/26 handle="k8s-pod-network.63b313bca3707b8bd61414e707551dc97861981cf0c4446707a50b77b9ef485a" host="ci-4081-3-5-n-662926fb9e"
Sep 13 00:11:08.396044 containerd[1620]: 2025-09-13 00:11:08.352 [INFO][4932] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.199/26] handle="k8s-pod-network.63b313bca3707b8bd61414e707551dc97861981cf0c4446707a50b77b9ef485a" host="ci-4081-3-5-n-662926fb9e"
Sep 13 00:11:08.396044 containerd[1620]: 2025-09-13 00:11:08.352 [INFO][4932] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:11:08.396044 containerd[1620]: 2025-09-13 00:11:08.352 [INFO][4932] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.199/26] IPv6=[] ContainerID="63b313bca3707b8bd61414e707551dc97861981cf0c4446707a50b77b9ef485a" HandleID="k8s-pod-network.63b313bca3707b8bd61414e707551dc97861981cf0c4446707a50b77b9ef485a" Workload="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--9jlsq-eth0"
Sep 13 00:11:08.400958 containerd[1620]: 2025-09-13 00:11:08.355 [INFO][4919] cni-plugin/k8s.go 418: Populated endpoint ContainerID="63b313bca3707b8bd61414e707551dc97861981cf0c4446707a50b77b9ef485a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9jlsq" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--9jlsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--9jlsq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"398f713a-e38d-4416-8b6a-bb19b2e75262", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"", Pod:"coredns-7c65d6cfc9-9jlsq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic229c22f614", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:11:08.400958 containerd[1620]: 2025-09-13 00:11:08.356 [INFO][4919] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.199/32] ContainerID="63b313bca3707b8bd61414e707551dc97861981cf0c4446707a50b77b9ef485a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9jlsq" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--9jlsq-eth0"
Sep 13 00:11:08.400958 containerd[1620]: 2025-09-13 00:11:08.356 [INFO][4919] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic229c22f614 ContainerID="63b313bca3707b8bd61414e707551dc97861981cf0c4446707a50b77b9ef485a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9jlsq" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--9jlsq-eth0"
Sep 13 00:11:08.400958 containerd[1620]: 2025-09-13 00:11:08.363 [INFO][4919] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63b313bca3707b8bd61414e707551dc97861981cf0c4446707a50b77b9ef485a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9jlsq" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--9jlsq-eth0"
Sep 13 00:11:08.400958 containerd[1620]: 2025-09-13 00:11:08.364 [INFO][4919] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="63b313bca3707b8bd61414e707551dc97861981cf0c4446707a50b77b9ef485a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9jlsq" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--9jlsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--9jlsq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"398f713a-e38d-4416-8b6a-bb19b2e75262", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"63b313bca3707b8bd61414e707551dc97861981cf0c4446707a50b77b9ef485a", Pod:"coredns-7c65d6cfc9-9jlsq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic229c22f614", MAC:"ce:dd:17:b9:3e:eb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:11:08.400958 containerd[1620]: 2025-09-13 00:11:08.384 [INFO][4919] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="63b313bca3707b8bd61414e707551dc97861981cf0c4446707a50b77b9ef485a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9jlsq" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--9jlsq-eth0"
Sep 13 00:11:08.461149 containerd[1620]: time="2025-09-13T00:11:08.458706043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:11:08.461149 containerd[1620]: time="2025-09-13T00:11:08.458763591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:11:08.461149 containerd[1620]: time="2025-09-13T00:11:08.458776726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:11:08.461149 containerd[1620]: time="2025-09-13T00:11:08.458862527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:11:08.553761 containerd[1620]: time="2025-09-13T00:11:08.553664043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9jlsq,Uid:398f713a-e38d-4416-8b6a-bb19b2e75262,Namespace:kube-system,Attempt:1,} returns sandbox id \"63b313bca3707b8bd61414e707551dc97861981cf0c4446707a50b77b9ef485a\""
Sep 13 00:11:08.559535 containerd[1620]: time="2025-09-13T00:11:08.559062014Z" level=info msg="CreateContainer within sandbox \"63b313bca3707b8bd61414e707551dc97861981cf0c4446707a50b77b9ef485a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 00:11:08.572077 containerd[1620]: time="2025-09-13T00:11:08.572044436Z" level=info msg="CreateContainer within sandbox \"63b313bca3707b8bd61414e707551dc97861981cf0c4446707a50b77b9ef485a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ddbd86cf6094941f426609b555b0612659a935567b17c4f2ed874365b5d6a961\""
Sep 13 00:11:08.574903 containerd[1620]: time="2025-09-13T00:11:08.573096018Z" level=info msg="StartContainer for \"ddbd86cf6094941f426609b555b0612659a935567b17c4f2ed874365b5d6a961\""
Sep 13 00:11:08.640517 containerd[1620]: time="2025-09-13T00:11:08.640267721Z" level=info msg="StartContainer for \"ddbd86cf6094941f426609b555b0612659a935567b17c4f2ed874365b5d6a961\" returns successfully"
Sep 13 00:11:08.656192 systemd-networkd[1257]: cali36be0794c39: Gained IPv6LL
Sep 13 00:11:08.784031 systemd-networkd[1257]: cali252f60d1474: Gained IPv6LL
Sep 13 00:11:09.130562 containerd[1620]: time="2025-09-13T00:11:09.130520893Z" level=info msg="StopPodSandbox for \"87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2\""
Sep 13 00:11:09.136534 containerd[1620]: time="2025-09-13T00:11:09.136241309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 13 00:11:09.136534 containerd[1620]: time="2025-09-13T00:11:09.136379438Z" level=info msg="StopPodSandbox for \"515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7\""
Sep 13 00:11:09.136534 containerd[1620]: time="2025-09-13T00:11:09.136387193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:11:09.139532 containerd[1620]: time="2025-09-13T00:11:09.139496242Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:11:09.142863 containerd[1620]: time="2025-09-13T00:11:09.142572830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:11:09.144120 containerd[1620]: time="2025-09-13T00:11:09.144084795Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.653083758s"
Sep 13 00:11:09.144494 containerd[1620]: time="2025-09-13T00:11:09.144119109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 13 00:11:09.147610 containerd[1620]: time="2025-09-13T00:11:09.147525916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 13 00:11:09.180619 containerd[1620]: time="2025-09-13T00:11:09.180576805Z" level=info msg="CreateContainer within sandbox \"3b8d05fde61e57823330c29acbd280cb4f02f95dbced59aa915b2befdcfa609d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 13 00:11:09.208069 containerd[1620]: time="2025-09-13T00:11:09.208000611Z" level=info msg="CreateContainer within sandbox \"3b8d05fde61e57823330c29acbd280cb4f02f95dbced59aa915b2befdcfa609d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d41126440da7ae9dcdc487e20307e55ec06e3f097176feae3c3eff6f796b5650\""
Sep 13 00:11:09.211336 containerd[1620]: time="2025-09-13T00:11:09.210819136Z" level=info msg="StartContainer for \"d41126440da7ae9dcdc487e20307e55ec06e3f097176feae3c3eff6f796b5650\""
Sep 13 00:11:09.295562 containerd[1620]: 2025-09-13 00:11:09.229 [INFO][5045] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2"
Sep 13 00:11:09.295562 containerd[1620]: 2025-09-13 00:11:09.230 [INFO][5045] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" iface="eth0" netns="/var/run/netns/cni-ef1c1539-7255-afe9-4b2f-f8ca50ccc716"
Sep 13 00:11:09.295562 containerd[1620]: 2025-09-13 00:11:09.230 [INFO][5045] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" iface="eth0" netns="/var/run/netns/cni-ef1c1539-7255-afe9-4b2f-f8ca50ccc716"
Sep 13 00:11:09.295562 containerd[1620]: 2025-09-13 00:11:09.230 [INFO][5045] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" iface="eth0" netns="/var/run/netns/cni-ef1c1539-7255-afe9-4b2f-f8ca50ccc716"
Sep 13 00:11:09.295562 containerd[1620]: 2025-09-13 00:11:09.230 [INFO][5045] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2"
Sep 13 00:11:09.295562 containerd[1620]: 2025-09-13 00:11:09.230 [INFO][5045] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2"
Sep 13 00:11:09.295562 containerd[1620]: 2025-09-13 00:11:09.282 [INFO][5075] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" HandleID="k8s-pod-network.87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0"
Sep 13 00:11:09.295562 containerd[1620]: 2025-09-13 00:11:09.283 [INFO][5075] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:11:09.295562 containerd[1620]: 2025-09-13 00:11:09.283 [INFO][5075] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:11:09.295562 containerd[1620]: 2025-09-13 00:11:09.289 [WARNING][5075] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" HandleID="k8s-pod-network.87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0"
Sep 13 00:11:09.295562 containerd[1620]: 2025-09-13 00:11:09.289 [INFO][5075] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" HandleID="k8s-pod-network.87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0"
Sep 13 00:11:09.295562 containerd[1620]: 2025-09-13 00:11:09.291 [INFO][5075] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:11:09.295562 containerd[1620]: 2025-09-13 00:11:09.294 [INFO][5045] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2"
Sep 13 00:11:09.296970 containerd[1620]: time="2025-09-13T00:11:09.296731486Z" level=info msg="TearDown network for sandbox \"87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2\" successfully"
Sep 13 00:11:09.296970 containerd[1620]: time="2025-09-13T00:11:09.296756813Z" level=info msg="StopPodSandbox for \"87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2\" returns successfully"
Sep 13 00:11:09.298532 containerd[1620]: time="2025-09-13T00:11:09.298406406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bcbcd6df9-cwbjt,Uid:1b932375-cdcd-4a82-b528-b3a99b684eeb,Namespace:calico-apiserver,Attempt:1,}"
Sep 13 00:11:09.309159 containerd[1620]: 2025-09-13 00:11:09.242 [INFO][5052] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7"
Sep 13 00:11:09.309159 containerd[1620]: 2025-09-13 00:11:09.242 [INFO][5052] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" iface="eth0" netns="/var/run/netns/cni-3c3f1bc9-be34-0f94-7b96-11ab2c7401b7"
Sep 13 00:11:09.309159 containerd[1620]: 2025-09-13 00:11:09.243 [INFO][5052] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" iface="eth0" netns="/var/run/netns/cni-3c3f1bc9-be34-0f94-7b96-11ab2c7401b7"
Sep 13 00:11:09.309159 containerd[1620]: 2025-09-13 00:11:09.243 [INFO][5052] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" iface="eth0" netns="/var/run/netns/cni-3c3f1bc9-be34-0f94-7b96-11ab2c7401b7"
Sep 13 00:11:09.309159 containerd[1620]: 2025-09-13 00:11:09.244 [INFO][5052] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7"
Sep 13 00:11:09.309159 containerd[1620]: 2025-09-13 00:11:09.244 [INFO][5052] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7"
Sep 13 00:11:09.309159 containerd[1620]: 2025-09-13 00:11:09.283 [INFO][5089] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" HandleID="k8s-pod-network.515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" Workload="ci--4081--3--5--n--662926fb9e-k8s-csi--node--driver--2rrtz-eth0"
Sep 13 00:11:09.309159 containerd[1620]: 2025-09-13 00:11:09.283 [INFO][5089] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:11:09.309159 containerd[1620]: 2025-09-13 00:11:09.291 [INFO][5089] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:11:09.309159 containerd[1620]: 2025-09-13 00:11:09.299 [WARNING][5089] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" HandleID="k8s-pod-network.515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" Workload="ci--4081--3--5--n--662926fb9e-k8s-csi--node--driver--2rrtz-eth0"
Sep 13 00:11:09.309159 containerd[1620]: 2025-09-13 00:11:09.300 [INFO][5089] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" HandleID="k8s-pod-network.515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" Workload="ci--4081--3--5--n--662926fb9e-k8s-csi--node--driver--2rrtz-eth0"
Sep 13 00:11:09.309159 containerd[1620]: 2025-09-13 00:11:09.302 [INFO][5089] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:11:09.309159 containerd[1620]: 2025-09-13 00:11:09.306 [INFO][5052] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7"
Sep 13 00:11:09.309925 containerd[1620]: time="2025-09-13T00:11:09.309497812Z" level=info msg="TearDown network for sandbox \"515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7\" successfully"
Sep 13 00:11:09.309925 containerd[1620]: time="2025-09-13T00:11:09.309534781Z" level=info msg="StopPodSandbox for \"515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7\" returns successfully"
Sep 13 00:11:09.310486 containerd[1620]: time="2025-09-13T00:11:09.310466899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2rrtz,Uid:f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2,Namespace:calico-system,Attempt:1,}"
Sep 13 00:11:09.314254 containerd[1620]: time="2025-09-13T00:11:09.314123254Z" level=info msg="StartContainer for \"d41126440da7ae9dcdc487e20307e55ec06e3f097176feae3c3eff6f796b5650\" returns successfully"
Sep 13 00:11:09.464664 systemd-networkd[1257]: calie18029b8356: Link UP
Sep 13 00:11:09.466858 systemd-networkd[1257]: calie18029b8356: Gained carrier
Sep 13 00:11:09.480458 containerd[1620]: 2025-09-13 00:11:09.368 [INFO][5117] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0 calico-apiserver-bcbcd6df9- calico-apiserver 1b932375-cdcd-4a82-b528-b3a99b684eeb 1007 0 2025-09-13 00:10:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bcbcd6df9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-n-662926fb9e calico-apiserver-bcbcd6df9-cwbjt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie18029b8356 [] [] }} ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Namespace="calico-apiserver" Pod="calico-apiserver-bcbcd6df9-cwbjt" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-"
Sep 13 00:11:09.480458 containerd[1620]: 2025-09-13 00:11:09.370 [INFO][5117] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Namespace="calico-apiserver" Pod="calico-apiserver-bcbcd6df9-cwbjt" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0"
Sep 13 00:11:09.480458 containerd[1620]: 2025-09-13 00:11:09.406 [INFO][5153] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" HandleID="k8s-pod-network.ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0"
Sep 13 00:11:09.480458 containerd[1620]: 2025-09-13 00:11:09.406 [INFO][5153] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" HandleID="k8s-pod-network.ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-n-662926fb9e", "pod":"calico-apiserver-bcbcd6df9-cwbjt", "timestamp":"2025-09-13 00:11:09.406204409 +0000 UTC"}, Hostname:"ci-4081-3-5-n-662926fb9e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 13 00:11:09.480458 containerd[1620]: 2025-09-13 00:11:09.406 [INFO][5153] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:11:09.480458 containerd[1620]: 2025-09-13 00:11:09.406 [INFO][5153] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:11:09.480458 containerd[1620]: 2025-09-13 00:11:09.406 [INFO][5153] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-662926fb9e'
Sep 13 00:11:09.480458 containerd[1620]: 2025-09-13 00:11:09.413 [INFO][5153] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" host="ci-4081-3-5-n-662926fb9e"
Sep 13 00:11:09.480458 containerd[1620]: 2025-09-13 00:11:09.429 [INFO][5153] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-662926fb9e"
Sep 13 00:11:09.480458 containerd[1620]: 2025-09-13 00:11:09.436 [INFO][5153] ipam/ipam.go 511: Trying affinity for 192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e"
Sep 13 00:11:09.480458 containerd[1620]: 2025-09-13 00:11:09.437 [INFO][5153] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e"
Sep 13 00:11:09.480458 containerd[1620]: 2025-09-13 00:11:09.440 [INFO][5153] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e"
Sep 13 00:11:09.480458 containerd[1620]: 2025-09-13 00:11:09.440 [INFO][5153] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.192/26 handle="k8s-pod-network.ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" host="ci-4081-3-5-n-662926fb9e"
Sep 13 00:11:09.480458 containerd[1620]: 2025-09-13 00:11:09.442 [INFO][5153] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f
Sep 13 00:11:09.480458 containerd[1620]: 2025-09-13 00:11:09.447 [INFO][5153] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.192/26 handle="k8s-pod-network.ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" host="ci-4081-3-5-n-662926fb9e"
Sep 13 00:11:09.480458 containerd[1620]: 2025-09-13 00:11:09.455 [INFO][5153] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.200/26] block=192.168.28.192/26 handle="k8s-pod-network.ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" host="ci-4081-3-5-n-662926fb9e"
Sep 13 00:11:09.480458 containerd[1620]: 2025-09-13 00:11:09.455 [INFO][5153] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.200/26] handle="k8s-pod-network.ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" host="ci-4081-3-5-n-662926fb9e"
Sep 13 00:11:09.480458 containerd[1620]: 2025-09-13 00:11:09.455 [INFO][5153] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:11:09.480458 containerd[1620]: 2025-09-13 00:11:09.455 [INFO][5153] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.200/26] IPv6=[] ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" HandleID="k8s-pod-network.ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" Sep 13 00:11:09.481125 containerd[1620]: 2025-09-13 00:11:09.458 [INFO][5117] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Namespace="calico-apiserver" Pod="calico-apiserver-bcbcd6df9-cwbjt" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0", GenerateName:"calico-apiserver-bcbcd6df9-", Namespace:"calico-apiserver", SelfLink:"", UID:"1b932375-cdcd-4a82-b528-b3a99b684eeb", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bcbcd6df9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"", Pod:"calico-apiserver-bcbcd6df9-cwbjt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie18029b8356", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:09.481125 containerd[1620]: 2025-09-13 00:11:09.459 [INFO][5117] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.200/32] ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Namespace="calico-apiserver" Pod="calico-apiserver-bcbcd6df9-cwbjt" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" Sep 13 00:11:09.481125 containerd[1620]: 2025-09-13 00:11:09.459 [INFO][5117] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie18029b8356 ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Namespace="calico-apiserver" Pod="calico-apiserver-bcbcd6df9-cwbjt" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" Sep 13 00:11:09.481125 containerd[1620]: 2025-09-13 00:11:09.466 [INFO][5117] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Namespace="calico-apiserver" Pod="calico-apiserver-bcbcd6df9-cwbjt" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" Sep 13 00:11:09.481125 containerd[1620]: 2025-09-13 00:11:09.468 
[INFO][5117] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Namespace="calico-apiserver" Pod="calico-apiserver-bcbcd6df9-cwbjt" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0", GenerateName:"calico-apiserver-bcbcd6df9-", Namespace:"calico-apiserver", SelfLink:"", UID:"1b932375-cdcd-4a82-b528-b3a99b684eeb", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bcbcd6df9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f", Pod:"calico-apiserver-bcbcd6df9-cwbjt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie18029b8356", MAC:"7a:21:4a:cf:78:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:09.481125 containerd[1620]: 2025-09-13 00:11:09.477 [INFO][5117] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Namespace="calico-apiserver" Pod="calico-apiserver-bcbcd6df9-cwbjt" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" Sep 13 00:11:09.504897 containerd[1620]: time="2025-09-13T00:11:09.504407031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:09.504897 containerd[1620]: time="2025-09-13T00:11:09.504486340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:09.504897 containerd[1620]: time="2025-09-13T00:11:09.504529762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:09.504897 containerd[1620]: time="2025-09-13T00:11:09.504643635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:09.552328 kubelet[2755]: I0913 00:11:09.550589 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-9jlsq" podStartSLOduration=39.550571466 podStartE2EDuration="39.550571466s" podCreationTimestamp="2025-09-13 00:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:11:09.550079253 +0000 UTC m=+44.542188569" watchObservedRunningTime="2025-09-13 00:11:09.550571466 +0000 UTC m=+44.542680802" Sep 13 00:11:09.572222 kubelet[2755]: I0913 00:11:09.569620 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6dc566c86b-xkhhq" podStartSLOduration=23.844525113 podStartE2EDuration="27.56959268s" podCreationTimestamp="2025-09-13 00:10:42 +0000 UTC" firstStartedPulling="2025-09-13 00:11:05.420927709 +0000 UTC m=+40.413037026" lastFinishedPulling="2025-09-13 00:11:09.145995276 +0000 UTC m=+44.138104593" observedRunningTime="2025-09-13 00:11:09.567568224 +0000 UTC m=+44.559677539" watchObservedRunningTime="2025-09-13 00:11:09.56959268 +0000 UTC m=+44.561701996" Sep 13 00:11:09.609189 systemd-networkd[1257]: cali1b0ffc4bff8: Link UP Sep 13 00:11:09.609495 systemd-networkd[1257]: cali1b0ffc4bff8: Gained carrier Sep 13 00:11:09.631103 containerd[1620]: 2025-09-13 00:11:09.363 [INFO][5126] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--662926fb9e-k8s-csi--node--driver--2rrtz-eth0 csi-node-driver- calico-system f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2 1008 0 2025-09-13 00:10:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-5-n-662926fb9e csi-node-driver-2rrtz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1b0ffc4bff8 [] [] }} ContainerID="62ab556a10ad86eeed3de4a4c8a085f83aa459fc6c3ec967de3cd5ebf959a23f" Namespace="calico-system" Pod="csi-node-driver-2rrtz" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-csi--node--driver--2rrtz-" Sep 13 00:11:09.631103 containerd[1620]: 2025-09-13 00:11:09.363 [INFO][5126] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="62ab556a10ad86eeed3de4a4c8a085f83aa459fc6c3ec967de3cd5ebf959a23f" Namespace="calico-system" Pod="csi-node-driver-2rrtz" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-csi--node--driver--2rrtz-eth0" Sep 13 00:11:09.631103 containerd[1620]: 2025-09-13 00:11:09.411 [INFO][5148] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62ab556a10ad86eeed3de4a4c8a085f83aa459fc6c3ec967de3cd5ebf959a23f" HandleID="k8s-pod-network.62ab556a10ad86eeed3de4a4c8a085f83aa459fc6c3ec967de3cd5ebf959a23f" Workload="ci--4081--3--5--n--662926fb9e-k8s-csi--node--driver--2rrtz-eth0" Sep 13 00:11:09.631103 containerd[1620]: 2025-09-13 00:11:09.411 [INFO][5148] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="62ab556a10ad86eeed3de4a4c8a085f83aa459fc6c3ec967de3cd5ebf959a23f" HandleID="k8s-pod-network.62ab556a10ad86eeed3de4a4c8a085f83aa459fc6c3ec967de3cd5ebf959a23f" Workload="ci--4081--3--5--n--662926fb9e-k8s-csi--node--driver--2rrtz-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-662926fb9e", "pod":"csi-node-driver-2rrtz", "timestamp":"2025-09-13 00:11:09.411507462 +0000 UTC"}, Hostname:"ci-4081-3-5-n-662926fb9e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:11:09.631103 containerd[1620]: 2025-09-13 00:11:09.411 [INFO][5148] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:09.631103 containerd[1620]: 2025-09-13 00:11:09.455 [INFO][5148] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:09.631103 containerd[1620]: 2025-09-13 00:11:09.455 [INFO][5148] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-662926fb9e' Sep 13 00:11:09.631103 containerd[1620]: 2025-09-13 00:11:09.514 [INFO][5148] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.62ab556a10ad86eeed3de4a4c8a085f83aa459fc6c3ec967de3cd5ebf959a23f" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:09.631103 containerd[1620]: 2025-09-13 00:11:09.531 [INFO][5148] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:09.631103 containerd[1620]: 2025-09-13 00:11:09.539 [INFO][5148] ipam/ipam.go 511: Trying affinity for 192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:09.631103 containerd[1620]: 2025-09-13 00:11:09.541 [INFO][5148] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:09.631103 containerd[1620]: 2025-09-13 00:11:09.551 [INFO][5148] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:09.631103 containerd[1620]: 2025-09-13 00:11:09.551 [INFO][5148] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.192/26 handle="k8s-pod-network.62ab556a10ad86eeed3de4a4c8a085f83aa459fc6c3ec967de3cd5ebf959a23f" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:09.631103 containerd[1620]: 2025-09-13 00:11:09.556 [INFO][5148] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.62ab556a10ad86eeed3de4a4c8a085f83aa459fc6c3ec967de3cd5ebf959a23f Sep 13 00:11:09.631103 containerd[1620]: 2025-09-13 00:11:09.574 [INFO][5148] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.192/26 handle="k8s-pod-network.62ab556a10ad86eeed3de4a4c8a085f83aa459fc6c3ec967de3cd5ebf959a23f" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:09.631103 containerd[1620]: 2025-09-13 00:11:09.589 [INFO][5148] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.201/26] block=192.168.28.192/26 handle="k8s-pod-network.62ab556a10ad86eeed3de4a4c8a085f83aa459fc6c3ec967de3cd5ebf959a23f" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:09.631103 containerd[1620]: 2025-09-13 00:11:09.589 [INFO][5148] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.201/26] handle="k8s-pod-network.62ab556a10ad86eeed3de4a4c8a085f83aa459fc6c3ec967de3cd5ebf959a23f" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:09.631103 containerd[1620]: 2025-09-13 00:11:09.589 [INFO][5148] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
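
A note on the IPAM records above: the plugin first tries the block 192.168.28.192/26 for which this host already holds an affinity, then claims 192.168.28.201 out of it. A /26 leaves six host bits, i.e. the 64 addresses 192.168.28.192 through 192.168.28.255. The self-contained Go snippet below (plain net/netip, not Calico code) verifies that arithmetic; 192.168.28.202, claimed further down in this log, falls in the same block.

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Host-affine IPAM block from the records above: a /26 leaves six
        // host bits, i.e. the 64 addresses 192.168.28.192-192.168.28.255.
        block := netip.MustParsePrefix("192.168.28.192/26")

        // Addresses Calico hands out from that block in this log.
        for _, s := range []string{"192.168.28.201", "192.168.28.202"} {
            addr := netip.MustParseAddr(s)
            fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr))
        }
    }
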
Sep 13 00:11:09.631103 containerd[1620]: 2025-09-13 00:11:09.589 [INFO][5148] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.201/26] IPv6=[] ContainerID="62ab556a10ad86eeed3de4a4c8a085f83aa459fc6c3ec967de3cd5ebf959a23f" HandleID="k8s-pod-network.62ab556a10ad86eeed3de4a4c8a085f83aa459fc6c3ec967de3cd5ebf959a23f" Workload="ci--4081--3--5--n--662926fb9e-k8s-csi--node--driver--2rrtz-eth0" Sep 13 00:11:09.633217 containerd[1620]: 2025-09-13 00:11:09.597 [INFO][5126] cni-plugin/k8s.go 418: Populated endpoint ContainerID="62ab556a10ad86eeed3de4a4c8a085f83aa459fc6c3ec967de3cd5ebf959a23f" Namespace="calico-system" Pod="csi-node-driver-2rrtz" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-csi--node--driver--2rrtz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-csi--node--driver--2rrtz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"", Pod:"csi-node-driver-2rrtz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.28.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1b0ffc4bff8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:09.633217 containerd[1620]: 2025-09-13 00:11:09.600 [INFO][5126] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.201/32] ContainerID="62ab556a10ad86eeed3de4a4c8a085f83aa459fc6c3ec967de3cd5ebf959a23f" Namespace="calico-system" Pod="csi-node-driver-2rrtz" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-csi--node--driver--2rrtz-eth0" Sep 13 00:11:09.633217 containerd[1620]: 2025-09-13 00:11:09.600 [INFO][5126] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b0ffc4bff8 ContainerID="62ab556a10ad86eeed3de4a4c8a085f83aa459fc6c3ec967de3cd5ebf959a23f" Namespace="calico-system" Pod="csi-node-driver-2rrtz" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-csi--node--driver--2rrtz-eth0" Sep 13 00:11:09.633217 containerd[1620]: 2025-09-13 00:11:09.611 [INFO][5126] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62ab556a10ad86eeed3de4a4c8a085f83aa459fc6c3ec967de3cd5ebf959a23f" Namespace="calico-system" Pod="csi-node-driver-2rrtz" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-csi--node--driver--2rrtz-eth0" Sep 13 00:11:09.633217 containerd[1620]: 2025-09-13 00:11:09.612 [INFO][5126] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="62ab556a10ad86eeed3de4a4c8a085f83aa459fc6c3ec967de3cd5ebf959a23f" Namespace="calico-system" Pod="csi-node-driver-2rrtz" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-csi--node--driver--2rrtz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-csi--node--driver--2rrtz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"62ab556a10ad86eeed3de4a4c8a085f83aa459fc6c3ec967de3cd5ebf959a23f", Pod:"csi-node-driver-2rrtz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.28.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1b0ffc4bff8", MAC:"3e:11:88:8a:b2:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:09.633217 containerd[1620]: 2025-09-13 00:11:09.622 [INFO][5126] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="62ab556a10ad86eeed3de4a4c8a085f83aa459fc6c3ec967de3cd5ebf959a23f" Namespace="calico-system" Pod="csi-node-driver-2rrtz" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-csi--node--driver--2rrtz-eth0" Sep 13 00:11:09.695414 containerd[1620]: time="2025-09-13T00:11:09.686219807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:09.695414 containerd[1620]: time="2025-09-13T00:11:09.693885550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:09.695414 containerd[1620]: time="2025-09-13T00:11:09.693905307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:09.695414 containerd[1620]: time="2025-09-13T00:11:09.694009663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:09.725706 containerd[1620]: time="2025-09-13T00:11:09.725584453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bcbcd6df9-cwbjt,Uid:1b932375-cdcd-4a82-b528-b3a99b684eeb,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f\"" Sep 13 00:11:09.752094 containerd[1620]: time="2025-09-13T00:11:09.752006171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2rrtz,Uid:f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2,Namespace:calico-system,Attempt:1,} returns sandbox id \"62ab556a10ad86eeed3de4a4c8a085f83aa459fc6c3ec967de3cd5ebf959a23f\"" Sep 13 00:11:09.766402 systemd[1]: run-netns-cni\x2d3c3f1bc9\x2dbe34\x2d0f94\x2d7b96\x2d11ab2c7401b7.mount: Deactivated successfully. Sep 13 00:11:09.766525 systemd[1]: run-netns-cni\x2def1c1539\x2d7255\x2dafe9\x2d4b2f\x2df8ca50ccc716.mount: Deactivated successfully. Sep 13 00:11:10.383606 systemd-networkd[1257]: calic229c22f614: Gained IPv6LL Sep 13 00:11:10.640032 systemd-networkd[1257]: calie18029b8356: Gained IPv6LL Sep 13 00:11:11.151631 systemd-networkd[1257]: cali1b0ffc4bff8: Gained IPv6LL Sep 13 00:11:12.158177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4220435442.mount: Deactivated successfully. Sep 13 00:11:12.179585 containerd[1620]: time="2025-09-13T00:11:12.178892099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:12.179943 containerd[1620]: time="2025-09-13T00:11:12.179912111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 13 00:11:12.180727 containerd[1620]: time="2025-09-13T00:11:12.180681976Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:12.182702 containerd[1620]: time="2025-09-13T00:11:12.182665364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:12.183379 containerd[1620]: time="2025-09-13T00:11:12.183343996Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.035785489s" Sep 13 00:11:12.183451 containerd[1620]: time="2025-09-13T00:11:12.183437221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:11:12.184755 containerd[1620]: time="2025-09-13T00:11:12.184723192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:11:12.186599 containerd[1620]: time="2025-09-13T00:11:12.186541100Z" level=info msg="CreateContainer within sandbox \"eba543ee09d77df4d38f36855db877291aab7ebc9dbc5309301b78879f03298f\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:11:12.202499 containerd[1620]: time="2025-09-13T00:11:12.202404362Z" 
level=info msg="CreateContainer within sandbox \"eba543ee09d77df4d38f36855db877291aab7ebc9dbc5309301b78879f03298f\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"dabef8c91e6a8907639a5ad206af88e1b574ed1814fd21423c6c4a3a78216868\"" Sep 13 00:11:12.203408 containerd[1620]: time="2025-09-13T00:11:12.202984971Z" level=info msg="StartContainer for \"dabef8c91e6a8907639a5ad206af88e1b574ed1814fd21423c6c4a3a78216868\"" Sep 13 00:11:12.204164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount295202852.mount: Deactivated successfully. Sep 13 00:11:12.340353 containerd[1620]: time="2025-09-13T00:11:12.340265596Z" level=info msg="StartContainer for \"dabef8c91e6a8907639a5ad206af88e1b574ed1814fd21423c6c4a3a78216868\" returns successfully" Sep 13 00:11:15.535529 containerd[1620]: time="2025-09-13T00:11:15.535476394Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:15.536437 containerd[1620]: time="2025-09-13T00:11:15.536327219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 13 00:11:15.555580 containerd[1620]: time="2025-09-13T00:11:15.555449350Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.370688417s" Sep 13 00:11:15.555580 containerd[1620]: time="2025-09-13T00:11:15.555489876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:11:15.559262 containerd[1620]: time="2025-09-13T00:11:15.559232402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:11:15.564711 containerd[1620]: time="2025-09-13T00:11:15.564171222Z" level=info msg="CreateContainer within sandbox \"a34e96249a4c56ffd61f572a48d86d08ad136cf4ef8d3ac9b893749d0de0b728\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:11:15.570071 containerd[1620]: time="2025-09-13T00:11:15.569981195Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:15.570973 containerd[1620]: time="2025-09-13T00:11:15.570954870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:15.577071 containerd[1620]: time="2025-09-13T00:11:15.576903342Z" level=info msg="CreateContainer within sandbox \"a34e96249a4c56ffd61f572a48d86d08ad136cf4ef8d3ac9b893749d0de0b728\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"56568229a107e012106261d6c1af086346cfc7bf7122ec16a27947a348db0a9d\"" Sep 13 00:11:15.586673 containerd[1620]: time="2025-09-13T00:11:15.585756051Z" level=info msg="StartContainer for \"56568229a107e012106261d6c1af086346cfc7bf7122ec16a27947a348db0a9d\"" Sep 13 00:11:15.661456 containerd[1620]: time="2025-09-13T00:11:15.661403287Z" level=info msg="StartContainer for 
\"56568229a107e012106261d6c1af086346cfc7bf7122ec16a27947a348db0a9d\" returns successfully" Sep 13 00:11:16.471892 containerd[1620]: time="2025-09-13T00:11:16.471814002Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:16.473342 containerd[1620]: time="2025-09-13T00:11:16.472551895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:11:16.474613 containerd[1620]: time="2025-09-13T00:11:16.474564870Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 915.296951ms" Sep 13 00:11:16.474613 containerd[1620]: time="2025-09-13T00:11:16.474611147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:11:16.476733 containerd[1620]: time="2025-09-13T00:11:16.475742768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:11:16.478148 containerd[1620]: time="2025-09-13T00:11:16.478109244Z" level=info msg="CreateContainer within sandbox \"0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:11:16.509335 containerd[1620]: time="2025-09-13T00:11:16.508875687Z" level=info msg="CreateContainer within sandbox \"0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"37499d1400250dc25ddef87b16c9cc8e7c581e77791b6dc634865712f31e7e98\"" Sep 13 00:11:16.512262 containerd[1620]: time="2025-09-13T00:11:16.511487493Z" level=info msg="StartContainer for \"37499d1400250dc25ddef87b16c9cc8e7c581e77791b6dc634865712f31e7e98\"" Sep 13 00:11:16.608332 kubelet[2755]: I0913 00:11:16.607486 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-748c7ccd65-nl8pm" podStartSLOduration=27.624317925 podStartE2EDuration="36.607299746s" podCreationTimestamp="2025-09-13 00:10:40 +0000 UTC" firstStartedPulling="2025-09-13 00:11:06.575167961 +0000 UTC m=+41.567277277" lastFinishedPulling="2025-09-13 00:11:15.558149772 +0000 UTC m=+50.550259098" observedRunningTime="2025-09-13 00:11:16.604526357 +0000 UTC m=+51.596635673" watchObservedRunningTime="2025-09-13 00:11:16.607299746 +0000 UTC m=+51.599409062" Sep 13 00:11:16.608332 kubelet[2755]: I0913 00:11:16.607661 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5b7665d6d8-8bbnp" podStartSLOduration=5.988127017 podStartE2EDuration="14.607630716s" podCreationTimestamp="2025-09-13 00:11:02 +0000 UTC" firstStartedPulling="2025-09-13 00:11:03.56500986 +0000 UTC m=+38.557119176" lastFinishedPulling="2025-09-13 00:11:12.184513559 +0000 UTC m=+47.176622875" observedRunningTime="2025-09-13 00:11:12.572622017 +0000 UTC m=+47.564731374" watchObservedRunningTime="2025-09-13 00:11:16.607630716 +0000 UTC m=+51.599740053" Sep 13 00:11:16.663615 containerd[1620]: time="2025-09-13T00:11:16.663534477Z" level=info msg="StartContainer for 
\"37499d1400250dc25ddef87b16c9cc8e7c581e77791b6dc634865712f31e7e98\" returns successfully" Sep 13 00:11:16.935966 kubelet[2755]: I0913 00:11:16.935884 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddcqx\" (UniqueName: \"kubernetes.io/projected/7c3e1128-b3a6-4f0f-8f08-b1b8e7273a18-kube-api-access-ddcqx\") pod \"calico-apiserver-748c7ccd65-nxtnz\" (UID: \"7c3e1128-b3a6-4f0f-8f08-b1b8e7273a18\") " pod="calico-apiserver/calico-apiserver-748c7ccd65-nxtnz" Sep 13 00:11:16.936252 kubelet[2755]: I0913 00:11:16.936168 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7c3e1128-b3a6-4f0f-8f08-b1b8e7273a18-calico-apiserver-certs\") pod \"calico-apiserver-748c7ccd65-nxtnz\" (UID: \"7c3e1128-b3a6-4f0f-8f08-b1b8e7273a18\") " pod="calico-apiserver/calico-apiserver-748c7ccd65-nxtnz" Sep 13 00:11:17.189181 containerd[1620]: time="2025-09-13T00:11:17.189069851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748c7ccd65-nxtnz,Uid:7c3e1128-b3a6-4f0f-8f08-b1b8e7273a18,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:11:17.667068 systemd-networkd[1257]: calif3fb3215edd: Link UP Sep 13 00:11:17.676704 systemd-networkd[1257]: calif3fb3215edd: Gained carrier Sep 13 00:11:17.723016 containerd[1620]: 2025-09-13 00:11:17.417 [INFO][5438] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nxtnz-eth0 calico-apiserver-748c7ccd65- calico-apiserver 7c3e1128-b3a6-4f0f-8f08-b1b8e7273a18 1088 0 2025-09-13 00:11:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:748c7ccd65 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-n-662926fb9e calico-apiserver-748c7ccd65-nxtnz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif3fb3215edd [] [] }} ContainerID="3611858a1ec5fa5a0c2e0466424fcad6b9c671ba29f1c9102048300f459a21cc" Namespace="calico-apiserver" Pod="calico-apiserver-748c7ccd65-nxtnz" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nxtnz-" Sep 13 00:11:17.723016 containerd[1620]: 2025-09-13 00:11:17.420 [INFO][5438] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3611858a1ec5fa5a0c2e0466424fcad6b9c671ba29f1c9102048300f459a21cc" Namespace="calico-apiserver" Pod="calico-apiserver-748c7ccd65-nxtnz" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nxtnz-eth0" Sep 13 00:11:17.723016 containerd[1620]: 2025-09-13 00:11:17.560 [INFO][5450] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3611858a1ec5fa5a0c2e0466424fcad6b9c671ba29f1c9102048300f459a21cc" HandleID="k8s-pod-network.3611858a1ec5fa5a0c2e0466424fcad6b9c671ba29f1c9102048300f459a21cc" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nxtnz-eth0" Sep 13 00:11:17.723016 containerd[1620]: 2025-09-13 00:11:17.563 [INFO][5450] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3611858a1ec5fa5a0c2e0466424fcad6b9c671ba29f1c9102048300f459a21cc" HandleID="k8s-pod-network.3611858a1ec5fa5a0c2e0466424fcad6b9c671ba29f1c9102048300f459a21cc" 
Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nxtnz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e5b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-n-662926fb9e", "pod":"calico-apiserver-748c7ccd65-nxtnz", "timestamp":"2025-09-13 00:11:17.560041129 +0000 UTC"}, Hostname:"ci-4081-3-5-n-662926fb9e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:11:17.723016 containerd[1620]: 2025-09-13 00:11:17.563 [INFO][5450] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:17.723016 containerd[1620]: 2025-09-13 00:11:17.563 [INFO][5450] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:17.723016 containerd[1620]: 2025-09-13 00:11:17.564 [INFO][5450] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-662926fb9e' Sep 13 00:11:17.723016 containerd[1620]: 2025-09-13 00:11:17.577 [INFO][5450] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3611858a1ec5fa5a0c2e0466424fcad6b9c671ba29f1c9102048300f459a21cc" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:17.723016 containerd[1620]: 2025-09-13 00:11:17.601 [INFO][5450] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:17.723016 containerd[1620]: 2025-09-13 00:11:17.608 [INFO][5450] ipam/ipam.go 511: Trying affinity for 192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:17.723016 containerd[1620]: 2025-09-13 00:11:17.611 [INFO][5450] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:17.723016 containerd[1620]: 2025-09-13 00:11:17.617 [INFO][5450] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.192/26 host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:17.723016 containerd[1620]: 2025-09-13 00:11:17.617 [INFO][5450] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.192/26 handle="k8s-pod-network.3611858a1ec5fa5a0c2e0466424fcad6b9c671ba29f1c9102048300f459a21cc" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:17.723016 containerd[1620]: 2025-09-13 00:11:17.620 [INFO][5450] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3611858a1ec5fa5a0c2e0466424fcad6b9c671ba29f1c9102048300f459a21cc Sep 13 00:11:17.723016 containerd[1620]: 2025-09-13 00:11:17.629 [INFO][5450] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.192/26 handle="k8s-pod-network.3611858a1ec5fa5a0c2e0466424fcad6b9c671ba29f1c9102048300f459a21cc" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:17.723016 containerd[1620]: 2025-09-13 00:11:17.639 [INFO][5450] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.202/26] block=192.168.28.192/26 handle="k8s-pod-network.3611858a1ec5fa5a0c2e0466424fcad6b9c671ba29f1c9102048300f459a21cc" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:17.723016 containerd[1620]: 2025-09-13 00:11:17.639 [INFO][5450] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.202/26] handle="k8s-pod-network.3611858a1ec5fa5a0c2e0466424fcad6b9c671ba29f1c9102048300f459a21cc" host="ci-4081-3-5-n-662926fb9e" Sep 13 00:11:17.723016 containerd[1620]: 2025-09-13 00:11:17.639 [INFO][5450] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:11:17.723016 containerd[1620]: 2025-09-13 00:11:17.640 [INFO][5450] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.202/26] IPv6=[] ContainerID="3611858a1ec5fa5a0c2e0466424fcad6b9c671ba29f1c9102048300f459a21cc" HandleID="k8s-pod-network.3611858a1ec5fa5a0c2e0466424fcad6b9c671ba29f1c9102048300f459a21cc" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nxtnz-eth0" Sep 13 00:11:17.725135 containerd[1620]: 2025-09-13 00:11:17.650 [INFO][5438] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3611858a1ec5fa5a0c2e0466424fcad6b9c671ba29f1c9102048300f459a21cc" Namespace="calico-apiserver" Pod="calico-apiserver-748c7ccd65-nxtnz" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nxtnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nxtnz-eth0", GenerateName:"calico-apiserver-748c7ccd65-", Namespace:"calico-apiserver", SelfLink:"", UID:"7c3e1128-b3a6-4f0f-8f08-b1b8e7273a18", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748c7ccd65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"", Pod:"calico-apiserver-748c7ccd65-nxtnz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.202/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif3fb3215edd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:17.725135 containerd[1620]: 2025-09-13 00:11:17.650 [INFO][5438] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.202/32] ContainerID="3611858a1ec5fa5a0c2e0466424fcad6b9c671ba29f1c9102048300f459a21cc" Namespace="calico-apiserver" Pod="calico-apiserver-748c7ccd65-nxtnz" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nxtnz-eth0" Sep 13 00:11:17.725135 containerd[1620]: 2025-09-13 00:11:17.651 [INFO][5438] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif3fb3215edd ContainerID="3611858a1ec5fa5a0c2e0466424fcad6b9c671ba29f1c9102048300f459a21cc" Namespace="calico-apiserver" Pod="calico-apiserver-748c7ccd65-nxtnz" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nxtnz-eth0" Sep 13 00:11:17.725135 containerd[1620]: 2025-09-13 00:11:17.665 [INFO][5438] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3611858a1ec5fa5a0c2e0466424fcad6b9c671ba29f1c9102048300f459a21cc" Namespace="calico-apiserver" Pod="calico-apiserver-748c7ccd65-nxtnz" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nxtnz-eth0" Sep 13 00:11:17.725135 containerd[1620]: 2025-09-13 
00:11:17.670 [INFO][5438] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3611858a1ec5fa5a0c2e0466424fcad6b9c671ba29f1c9102048300f459a21cc" Namespace="calico-apiserver" Pod="calico-apiserver-748c7ccd65-nxtnz" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nxtnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nxtnz-eth0", GenerateName:"calico-apiserver-748c7ccd65-", Namespace:"calico-apiserver", SelfLink:"", UID:"7c3e1128-b3a6-4f0f-8f08-b1b8e7273a18", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748c7ccd65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"3611858a1ec5fa5a0c2e0466424fcad6b9c671ba29f1c9102048300f459a21cc", Pod:"calico-apiserver-748c7ccd65-nxtnz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.202/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif3fb3215edd", MAC:"f2:e3:fa:31:d6:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:17.725135 containerd[1620]: 2025-09-13 00:11:17.708 [INFO][5438] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3611858a1ec5fa5a0c2e0466424fcad6b9c671ba29f1c9102048300f459a21cc" Namespace="calico-apiserver" Pod="calico-apiserver-748c7ccd65-nxtnz" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nxtnz-eth0" Sep 13 00:11:17.843691 containerd[1620]: time="2025-09-13T00:11:17.842135248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:17.843691 containerd[1620]: time="2025-09-13T00:11:17.843436187Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:17.843691 containerd[1620]: time="2025-09-13T00:11:17.843466904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:17.845355 containerd[1620]: time="2025-09-13T00:11:17.844304054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:17.965162 containerd[1620]: time="2025-09-13T00:11:17.965123841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748c7ccd65-nxtnz,Uid:7c3e1128-b3a6-4f0f-8f08-b1b8e7273a18,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3611858a1ec5fa5a0c2e0466424fcad6b9c671ba29f1c9102048300f459a21cc\"" Sep 13 00:11:17.993135 containerd[1620]: time="2025-09-13T00:11:17.993084102Z" level=info msg="CreateContainer within sandbox \"3611858a1ec5fa5a0c2e0466424fcad6b9c671ba29f1c9102048300f459a21cc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:11:18.026023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount791041131.mount: Deactivated successfully. Sep 13 00:11:18.031358 containerd[1620]: time="2025-09-13T00:11:18.031279331Z" level=info msg="CreateContainer within sandbox \"3611858a1ec5fa5a0c2e0466424fcad6b9c671ba29f1c9102048300f459a21cc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9e86b219782ee9267ca2f68d65390ec7809405e850d022f6b7323a41924502b1\"" Sep 13 00:11:18.034598 containerd[1620]: time="2025-09-13T00:11:18.033504172Z" level=info msg="StartContainer for \"9e86b219782ee9267ca2f68d65390ec7809405e850d022f6b7323a41924502b1\"" Sep 13 00:11:18.153273 containerd[1620]: time="2025-09-13T00:11:18.153240778Z" level=info msg="StartContainer for \"9e86b219782ee9267ca2f68d65390ec7809405e850d022f6b7323a41924502b1\" returns successfully" Sep 13 00:11:18.833596 systemd-networkd[1257]: calif3fb3215edd: Gained IPv6LL Sep 13 00:11:18.905627 systemd-journald[1174]: Under memory pressure, flushing caches. Sep 13 00:11:18.900782 systemd-resolved[1510]: Under memory pressure, flushing caches. Sep 13 00:11:18.900817 systemd-resolved[1510]: Flushed all caches. Sep 13 00:11:18.957520 kubelet[2755]: I0913 00:11:18.957458 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-bcbcd6df9-pltxz" podStartSLOduration=31.152708089 podStartE2EDuration="39.928340921s" podCreationTimestamp="2025-09-13 00:10:39 +0000 UTC" firstStartedPulling="2025-09-13 00:11:07.699847915 +0000 UTC m=+42.691957231" lastFinishedPulling="2025-09-13 00:11:16.475480737 +0000 UTC m=+51.467590063" observedRunningTime="2025-09-13 00:11:17.914199226 +0000 UTC m=+52.906308542" watchObservedRunningTime="2025-09-13 00:11:18.928340921 +0000 UTC m=+53.920450258" Sep 13 00:11:18.983165 kubelet[2755]: I0913 00:11:18.979971 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-748c7ccd65-nxtnz" podStartSLOduration=2.979957136 podStartE2EDuration="2.979957136s" podCreationTimestamp="2025-09-13 00:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:11:18.957663879 +0000 UTC m=+53.949773195" watchObservedRunningTime="2025-09-13 00:11:18.979957136 +0000 UTC m=+53.972066452" Sep 13 00:11:19.858440 kubelet[2755]: I0913 00:11:19.858276 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:11:19.894877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2705690127.mount: Deactivated successfully. 
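
The pod_startup_latency_tracker entries above relate their fields as: podStartSLOduration equals podStartE2EDuration minus the image-pull window (lastFinishedPulling minus firstStartedPulling), with the pull window taken on kubelet's monotonic clock (the m=+ offsets). For pods that pulled nothing, both pull timestamps are the zero time 0001-01-01 and the SLO duration equals the E2E duration, as with calico-apiserver-748c7ccd65-nxtnz above. The numbers for calico-apiserver-bcbcd6df9-pltxz bear this out:

    package main

    import "fmt"

    func main() {
        // Monotonic offsets (the m=+ values) from the
        // calico-apiserver-bcbcd6df9-pltxz entry above, in seconds.
        firstStartedPulling := 42.691957231
        lastFinishedPulling := 51.467590063
        e2e := 39.928340921 // podStartE2EDuration

        pullWindow := lastFinishedPulling - firstStartedPulling
        slo := e2e - pullWindow
        fmt.Printf("pull window:  %.9fs\n", pullWindow) // 8.775632832s
        fmt.Printf("SLO duration: %.9fs\n", slo)        // 31.152708089s, as logged
    }
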
Sep 13 00:11:20.384623 containerd[1620]: time="2025-09-13T00:11:20.384564744Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:20.415865 containerd[1620]: time="2025-09-13T00:11:20.394832217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 13 00:11:20.435347 containerd[1620]: time="2025-09-13T00:11:20.434199202Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:20.436880 containerd[1620]: time="2025-09-13T00:11:20.436265272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:20.438357 containerd[1620]: time="2025-09-13T00:11:20.438257023Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.961016834s" Sep 13 00:11:20.438357 containerd[1620]: time="2025-09-13T00:11:20.438327064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 00:11:20.466536 containerd[1620]: time="2025-09-13T00:11:20.466351504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:11:20.512733 containerd[1620]: time="2025-09-13T00:11:20.512692064Z" level=info msg="CreateContainer within sandbox \"7072df19511b7031d04d8d544faef55f3f1b9ca19b333fd170a0c78ce0926b38\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:11:20.611824 containerd[1620]: time="2025-09-13T00:11:20.611777366Z" level=info msg="CreateContainer within sandbox \"7072df19511b7031d04d8d544faef55f3f1b9ca19b333fd170a0c78ce0926b38\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"7b378223794c6edb5e476fac8d0f2967ad4236b7681e1393c036afafb349a8cf\"" Sep 13 00:11:20.619179 containerd[1620]: time="2025-09-13T00:11:20.619148761Z" level=info msg="StartContainer for \"7b378223794c6edb5e476fac8d0f2967ad4236b7681e1393c036afafb349a8cf\"" Sep 13 00:11:20.838730 containerd[1620]: time="2025-09-13T00:11:20.838601103Z" level=info msg="StartContainer for \"7b378223794c6edb5e476fac8d0f2967ad4236b7681e1393c036afafb349a8cf\" returns successfully" Sep 13 00:11:20.937379 containerd[1620]: time="2025-09-13T00:11:20.937296870Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:20.938802 containerd[1620]: time="2025-09-13T00:11:20.938106128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:11:20.940428 containerd[1620]: time="2025-09-13T00:11:20.940384643Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 472.83571ms" Sep 13 00:11:20.940428 containerd[1620]: time="2025-09-13T00:11:20.940416212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:11:20.941507 containerd[1620]: time="2025-09-13T00:11:20.941387151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:11:20.947382 systemd-journald[1174]: Under memory pressure, flushing caches. Sep 13 00:11:20.943803 systemd-resolved[1510]: Under memory pressure, flushing caches. Sep 13 00:11:20.947840 containerd[1620]: time="2025-09-13T00:11:20.946784128Z" level=info msg="CreateContainer within sandbox \"ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:11:20.943828 systemd-resolved[1510]: Flushed all caches. Sep 13 00:11:20.963048 containerd[1620]: time="2025-09-13T00:11:20.962915197Z" level=info msg="CreateContainer within sandbox \"ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c0fb7d5ee08f6634ae5c7d0f32c97a556240e1c5ce6f40de51bae40ba080e6e8\"" Sep 13 00:11:20.965618 containerd[1620]: time="2025-09-13T00:11:20.964967610Z" level=info msg="StartContainer for \"c0fb7d5ee08f6634ae5c7d0f32c97a556240e1c5ce6f40de51bae40ba080e6e8\"" Sep 13 00:11:21.036545 containerd[1620]: time="2025-09-13T00:11:21.036506425Z" level=info msg="StartContainer for \"c0fb7d5ee08f6634ae5c7d0f32c97a556240e1c5ce6f40de51bae40ba080e6e8\" returns successfully" Sep 13 00:11:21.323561 systemd[1]: Started sshd@7-65.21.60.153:22-181.188.159.138:40226.service - OpenSSH per-connection server daemon (181.188.159.138:40226). Sep 13 00:11:22.210808 containerd[1620]: time="2025-09-13T00:11:22.210720534Z" level=info msg="StopContainer for \"c0fb7d5ee08f6634ae5c7d0f32c97a556240e1c5ce6f40de51bae40ba080e6e8\" with timeout 30 (s)" Sep 13 00:11:22.216336 containerd[1620]: time="2025-09-13T00:11:22.216284786Z" level=info msg="Stop container \"c0fb7d5ee08f6634ae5c7d0f32c97a556240e1c5ce6f40de51bae40ba080e6e8\" with signal terminated" Sep 13 00:11:22.328676 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0fb7d5ee08f6634ae5c7d0f32c97a556240e1c5ce6f40de51bae40ba080e6e8-rootfs.mount: Deactivated successfully. 
Sep 13 00:11:22.336190 kubelet[2755]: I0913 00:11:22.335997 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-bcbcd6df9-cwbjt" podStartSLOduration=32.110020866 podStartE2EDuration="43.322737131s" podCreationTimestamp="2025-09-13 00:10:39 +0000 UTC" firstStartedPulling="2025-09-13 00:11:09.728413467 +0000 UTC m=+44.720522783" lastFinishedPulling="2025-09-13 00:11:20.941129722 +0000 UTC m=+55.933239048" observedRunningTime="2025-09-13 00:11:22.149369527 +0000 UTC m=+57.141478842" watchObservedRunningTime="2025-09-13 00:11:22.322737131 +0000 UTC m=+57.314846447" Sep 13 00:11:22.342440 kubelet[2755]: I0913 00:11:22.342390 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-q6rw4" podStartSLOduration=27.642592685 podStartE2EDuration="40.342374766s" podCreationTimestamp="2025-09-13 00:10:42 +0000 UTC" firstStartedPulling="2025-09-13 00:11:07.750096969 +0000 UTC m=+42.742206295" lastFinishedPulling="2025-09-13 00:11:20.44987905 +0000 UTC m=+55.441988376" observedRunningTime="2025-09-13 00:11:22.268221014 +0000 UTC m=+57.260330330" watchObservedRunningTime="2025-09-13 00:11:22.342374766 +0000 UTC m=+57.334484082" Sep 13 00:11:22.354080 containerd[1620]: time="2025-09-13T00:11:22.332827941Z" level=info msg="shim disconnected" id=c0fb7d5ee08f6634ae5c7d0f32c97a556240e1c5ce6f40de51bae40ba080e6e8 namespace=k8s.io Sep 13 00:11:22.354080 containerd[1620]: time="2025-09-13T00:11:22.354074995Z" level=warning msg="cleaning up after shim disconnected" id=c0fb7d5ee08f6634ae5c7d0f32c97a556240e1c5ce6f40de51bae40ba080e6e8 namespace=k8s.io Sep 13 00:11:22.355574 containerd[1620]: time="2025-09-13T00:11:22.354090093Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:11:22.384375 containerd[1620]: time="2025-09-13T00:11:22.384292209Z" level=info msg="StopContainer for \"c0fb7d5ee08f6634ae5c7d0f32c97a556240e1c5ce6f40de51bae40ba080e6e8\" returns successfully" Sep 13 00:11:22.392882 containerd[1620]: time="2025-09-13T00:11:22.392842637Z" level=info msg="StopPodSandbox for \"ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f\"" Sep 13 00:11:22.398589 containerd[1620]: time="2025-09-13T00:11:22.398525900Z" level=info msg="Container to stop \"c0fb7d5ee08f6634ae5c7d0f32c97a556240e1c5ce6f40de51bae40ba080e6e8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:11:22.402820 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f-shm.mount: Deactivated successfully. Sep 13 00:11:22.463709 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f-rootfs.mount: Deactivated successfully. 
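
The .mount unit names systemd logs in this section are escaped mount paths: '/' becomes '-', and a literal '-' inside a path component becomes '\x2d' (systemd-escape(1) describes the full rules). A minimal decoder covering just those two rules, applied to one of the netns units from this log:

    package main

    import (
        "fmt"
        "strings"
    )

    // unescapeUnitPath reverses the two escaping rules visible in this log:
    // `\x2d` -> literal '-', then '-' -> '/'. The escaped hyphens are parked
    // on a sentinel byte first so the two passes don't collide. Real unit
    // names may carry further \xNN escapes; systemd-escape(1) has the rules.
    func unescapeUnitPath(unit string) string {
        name := strings.TrimSuffix(unit, ".mount")
        name = strings.ReplaceAll(name, `\x2d`, "\x00")
        name = strings.ReplaceAll(name, "-", "/")
        return "/" + strings.ReplaceAll(name, "\x00", "-")
    }

    func main() {
        fmt.Println(unescapeUnitPath(
            `run-netns-cni\x2da479c84a\x2d18b2\x2d03d4\x2daf0e\x2d59007ea018cf.mount`))
        // Output: /run/netns/cni-a479c84a-18b2-03d4-af0e-59007ea018cf
    }

The decoded path is the same netns containerd deletes during the teardown below (logged there under /var/run/netns, which conventionally symlinks to /run).
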
Sep 13 00:11:22.471088 containerd[1620]: time="2025-09-13T00:11:22.470764123Z" level=info msg="shim disconnected" id=ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f namespace=k8s.io Sep 13 00:11:22.471088 containerd[1620]: time="2025-09-13T00:11:22.470984934Z" level=warning msg="cleaning up after shim disconnected" id=ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f namespace=k8s.io Sep 13 00:11:22.471088 containerd[1620]: time="2025-09-13T00:11:22.470995684Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:11:22.673396 sshd[5644]: Invalid user css from 181.188.159.138 port 40226 Sep 13 00:11:22.713245 containerd[1620]: time="2025-09-13T00:11:22.712687334Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:22.714162 containerd[1620]: time="2025-09-13T00:11:22.714130303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 13 00:11:22.715137 containerd[1620]: time="2025-09-13T00:11:22.715067501Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:22.720142 containerd[1620]: time="2025-09-13T00:11:22.720028879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:22.721001 containerd[1620]: time="2025-09-13T00:11:22.720855009Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.778684086s" Sep 13 00:11:22.721440 containerd[1620]: time="2025-09-13T00:11:22.721422015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 00:11:22.733180 containerd[1620]: time="2025-09-13T00:11:22.733149394Z" level=info msg="CreateContainer within sandbox \"62ab556a10ad86eeed3de4a4c8a085f83aa459fc6c3ec967de3cd5ebf959a23f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:11:22.755792 containerd[1620]: time="2025-09-13T00:11:22.755722230Z" level=info msg="CreateContainer within sandbox \"62ab556a10ad86eeed3de4a4c8a085f83aa459fc6c3ec967de3cd5ebf959a23f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3b4ffb51ad46c0e2adc7e4694967a10f0c50ad952a86685f0b163ec8d2b393bc\"" Sep 13 00:11:22.768356 containerd[1620]: time="2025-09-13T00:11:22.767625258Z" level=info msg="StartContainer for \"3b4ffb51ad46c0e2adc7e4694967a10f0c50ad952a86685f0b163ec8d2b393bc\"" Sep 13 00:11:22.789518 systemd-networkd[1257]: calie18029b8356: Link DOWN Sep 13 00:11:22.789524 systemd-networkd[1257]: calie18029b8356: Lost carrier Sep 13 00:11:22.912591 containerd[1620]: time="2025-09-13T00:11:22.912559894Z" level=info msg="StartContainer for \"3b4ffb51ad46c0e2adc7e4694967a10f0c50ad952a86685f0b163ec8d2b393bc\" returns successfully" Sep 13 00:11:22.925413 sshd[5644]: Received disconnect from 181.188.159.138 port 40226:11: Bye Bye [preauth] Sep 13 00:11:22.925413 sshd[5644]: Disconnected 
from invalid user css 181.188.159.138 port 40226 [preauth] Sep 13 00:11:22.925699 systemd[1]: sshd@7-65.21.60.153:22-181.188.159.138:40226.service: Deactivated successfully. Sep 13 00:11:22.929759 containerd[1620]: time="2025-09-13T00:11:22.929334191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:11:22.960405 kubelet[2755]: I0913 00:11:22.960381 2755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Sep 13 00:11:22.994827 systemd-journald[1174]: Under memory pressure, flushing caches. Sep 13 00:11:22.994537 systemd-resolved[1510]: Under memory pressure, flushing caches. Sep 13 00:11:22.994567 systemd-resolved[1510]: Flushed all caches. Sep 13 00:11:23.142903 containerd[1620]: 2025-09-13 00:11:22.783 [INFO][5727] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Sep 13 00:11:23.142903 containerd[1620]: 2025-09-13 00:11:22.787 [INFO][5727] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" iface="eth0" netns="/var/run/netns/cni-a479c84a-18b2-03d4-af0e-59007ea018cf" Sep 13 00:11:23.142903 containerd[1620]: 2025-09-13 00:11:22.788 [INFO][5727] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" iface="eth0" netns="/var/run/netns/cni-a479c84a-18b2-03d4-af0e-59007ea018cf" Sep 13 00:11:23.142903 containerd[1620]: 2025-09-13 00:11:22.800 [INFO][5727] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" after=12.503286ms iface="eth0" netns="/var/run/netns/cni-a479c84a-18b2-03d4-af0e-59007ea018cf" Sep 13 00:11:23.142903 containerd[1620]: 2025-09-13 00:11:22.800 [INFO][5727] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Sep 13 00:11:23.142903 containerd[1620]: 2025-09-13 00:11:22.800 [INFO][5727] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Sep 13 00:11:23.142903 containerd[1620]: 2025-09-13 00:11:23.042 [INFO][5753] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" HandleID="k8s-pod-network.ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" Sep 13 00:11:23.142903 containerd[1620]: 2025-09-13 00:11:23.045 [INFO][5753] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:23.142903 containerd[1620]: 2025-09-13 00:11:23.046 [INFO][5753] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:11:23.142903 containerd[1620]: 2025-09-13 00:11:23.108 [INFO][5753] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" HandleID="k8s-pod-network.ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" Sep 13 00:11:23.142903 containerd[1620]: 2025-09-13 00:11:23.108 [INFO][5753] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" HandleID="k8s-pod-network.ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" Sep 13 00:11:23.142903 containerd[1620]: 2025-09-13 00:11:23.110 [INFO][5753] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:23.142903 containerd[1620]: 2025-09-13 00:11:23.113 [INFO][5727] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Sep 13 00:11:23.144555 containerd[1620]: time="2025-09-13T00:11:23.144529444Z" level=info msg="TearDown network for sandbox \"ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f\" successfully" Sep 13 00:11:23.144616 containerd[1620]: time="2025-09-13T00:11:23.144604886Z" level=info msg="StopPodSandbox for \"ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f\" returns successfully" Sep 13 00:11:23.149139 containerd[1620]: time="2025-09-13T00:11:23.149123809Z" level=info msg="StopPodSandbox for \"87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2\"" Sep 13 00:11:23.284483 containerd[1620]: 2025-09-13 00:11:23.215 [WARNING][5811] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0", GenerateName:"calico-apiserver-bcbcd6df9-", Namespace:"calico-apiserver", SelfLink:"", UID:"1b932375-cdcd-4a82-b528-b3a99b684eeb", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bcbcd6df9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f", Pod:"calico-apiserver-bcbcd6df9-cwbjt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie18029b8356", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:23.284483 containerd[1620]: 2025-09-13 00:11:23.215 [INFO][5811] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" Sep 13 00:11:23.284483 containerd[1620]: 2025-09-13 00:11:23.215 [INFO][5811] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" iface="eth0" netns="" Sep 13 00:11:23.284483 containerd[1620]: 2025-09-13 00:11:23.215 [INFO][5811] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" Sep 13 00:11:23.284483 containerd[1620]: 2025-09-13 00:11:23.215 [INFO][5811] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" Sep 13 00:11:23.284483 containerd[1620]: 2025-09-13 00:11:23.249 [INFO][5819] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" HandleID="k8s-pod-network.87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" Sep 13 00:11:23.284483 containerd[1620]: 2025-09-13 00:11:23.249 [INFO][5819] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:23.284483 containerd[1620]: 2025-09-13 00:11:23.249 [INFO][5819] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:23.284483 containerd[1620]: 2025-09-13 00:11:23.258 [WARNING][5819] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" HandleID="k8s-pod-network.87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" Sep 13 00:11:23.284483 containerd[1620]: 2025-09-13 00:11:23.258 [INFO][5819] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" HandleID="k8s-pod-network.87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" Sep 13 00:11:23.284483 containerd[1620]: 2025-09-13 00:11:23.263 [INFO][5819] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:23.284483 containerd[1620]: 2025-09-13 00:11:23.268 [INFO][5811] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" Sep 13 00:11:23.284483 containerd[1620]: time="2025-09-13T00:11:23.284318702Z" level=info msg="TearDown network for sandbox \"87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2\" successfully" Sep 13 00:11:23.284483 containerd[1620]: time="2025-09-13T00:11:23.284351823Z" level=info msg="StopPodSandbox for \"87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2\" returns successfully" Sep 13 00:11:23.330395 systemd[1]: run-netns-cni\x2da479c84a\x2d18b2\x2d03d4\x2daf0e\x2d59007ea018cf.mount: Deactivated successfully. Sep 13 00:11:23.500479 kubelet[2755]: I0913 00:11:23.500427 2755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48c7b\" (UniqueName: \"kubernetes.io/projected/1b932375-cdcd-4a82-b528-b3a99b684eeb-kube-api-access-48c7b\") pod \"1b932375-cdcd-4a82-b528-b3a99b684eeb\" (UID: \"1b932375-cdcd-4a82-b528-b3a99b684eeb\") " Sep 13 00:11:23.500898 kubelet[2755]: I0913 00:11:23.500509 2755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1b932375-cdcd-4a82-b528-b3a99b684eeb-calico-apiserver-certs\") pod \"1b932375-cdcd-4a82-b528-b3a99b684eeb\" (UID: \"1b932375-cdcd-4a82-b528-b3a99b684eeb\") " Sep 13 00:11:23.535581 systemd[1]: var-lib-kubelet-pods-1b932375\x2dcdcd\x2d4a82\x2db528\x2db3a99b684eeb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d48c7b.mount: Deactivated successfully. Sep 13 00:11:23.535979 systemd[1]: var-lib-kubelet-pods-1b932375\x2dcdcd\x2d4a82\x2db528\x2db3a99b684eeb-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 13 00:11:23.538409 kubelet[2755]: I0913 00:11:23.538357 2755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b932375-cdcd-4a82-b528-b3a99b684eeb-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "1b932375-cdcd-4a82-b528-b3a99b684eeb" (UID: "1b932375-cdcd-4a82-b528-b3a99b684eeb"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:11:23.541132 kubelet[2755]: I0913 00:11:23.534049 2755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b932375-cdcd-4a82-b528-b3a99b684eeb-kube-api-access-48c7b" (OuterVolumeSpecName: "kube-api-access-48c7b") pod "1b932375-cdcd-4a82-b528-b3a99b684eeb" (UID: "1b932375-cdcd-4a82-b528-b3a99b684eeb"). InnerVolumeSpecName "kube-api-access-48c7b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:11:23.601884 kubelet[2755]: I0913 00:11:23.601830 2755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48c7b\" (UniqueName: \"kubernetes.io/projected/1b932375-cdcd-4a82-b528-b3a99b684eeb-kube-api-access-48c7b\") on node \"ci-4081-3-5-n-662926fb9e\" DevicePath \"\"" Sep 13 00:11:23.601884 kubelet[2755]: I0913 00:11:23.601864 2755 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1b932375-cdcd-4a82-b528-b3a99b684eeb-calico-apiserver-certs\") on node \"ci-4081-3-5-n-662926fb9e\" DevicePath \"\"" Sep 13 00:11:25.054346 containerd[1620]: time="2025-09-13T00:11:25.054273382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:25.055154 containerd[1620]: time="2025-09-13T00:11:25.055102158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 13 00:11:25.056974 containerd[1620]: time="2025-09-13T00:11:25.056031761Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:25.058149 containerd[1620]: time="2025-09-13T00:11:25.057654497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:25.058149 containerd[1620]: time="2025-09-13T00:11:25.058044986Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.128671301s" Sep 13 00:11:25.058149 containerd[1620]: time="2025-09-13T00:11:25.058071204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 13 00:11:25.071075 containerd[1620]: time="2025-09-13T00:11:25.071042894Z" level=info msg="CreateContainer within sandbox \"62ab556a10ad86eeed3de4a4c8a085f83aa459fc6c3ec967de3cd5ebf959a23f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:11:25.089748 containerd[1620]: time="2025-09-13T00:11:25.089706910Z" level=info msg="CreateContainer within sandbox \"62ab556a10ad86eeed3de4a4c8a085f83aa459fc6c3ec967de3cd5ebf959a23f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6b11ef4e5a291d04e25560010d7ce29921e80abe6e5066c645458911ee3eef04\"" Sep 13 00:11:25.093502 containerd[1620]: time="2025-09-13T00:11:25.093470839Z" level=info msg="StartContainer for \"6b11ef4e5a291d04e25560010d7ce29921e80abe6e5066c645458911ee3eef04\"" Sep 13 00:11:25.173715 systemd[1]: run-containerd-runc-k8s.io-6b11ef4e5a291d04e25560010d7ce29921e80abe6e5066c645458911ee3eef04-runc.zHt6G6.mount: Deactivated successfully. 
Sep 13 00:11:25.205975 containerd[1620]: time="2025-09-13T00:11:25.205938844Z" level=info msg="StartContainer for \"6b11ef4e5a291d04e25560010d7ce29921e80abe6e5066c645458911ee3eef04\" returns successfully" Sep 13 00:11:25.294366 kubelet[2755]: I0913 00:11:25.294024 2755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b932375-cdcd-4a82-b528-b3a99b684eeb" path="/var/lib/kubelet/pods/1b932375-cdcd-4a82-b528-b3a99b684eeb/volumes" Sep 13 00:11:25.308398 kubelet[2755]: I0913 00:11:25.308064 2755 scope.go:117] "RemoveContainer" containerID="c0fb7d5ee08f6634ae5c7d0f32c97a556240e1c5ce6f40de51bae40ba080e6e8" Sep 13 00:11:25.337203 containerd[1620]: time="2025-09-13T00:11:25.336985907Z" level=info msg="RemoveContainer for \"c0fb7d5ee08f6634ae5c7d0f32c97a556240e1c5ce6f40de51bae40ba080e6e8\"" Sep 13 00:11:25.348328 containerd[1620]: time="2025-09-13T00:11:25.348288685Z" level=info msg="RemoveContainer for \"c0fb7d5ee08f6634ae5c7d0f32c97a556240e1c5ce6f40de51bae40ba080e6e8\" returns successfully" Sep 13 00:11:25.359732 containerd[1620]: time="2025-09-13T00:11:25.359676421Z" level=info msg="StopPodSandbox for \"c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3\"" Sep 13 00:11:25.428816 containerd[1620]: 2025-09-13 00:11:25.396 [WARNING][5904] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--9jlsq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"398f713a-e38d-4416-8b6a-bb19b2e75262", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"63b313bca3707b8bd61414e707551dc97861981cf0c4446707a50b77b9ef485a", Pod:"coredns-7c65d6cfc9-9jlsq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic229c22f614", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:25.428816 containerd[1620]: 2025-09-13 00:11:25.398 [INFO][5904] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" Sep 13 
00:11:25.428816 containerd[1620]: 2025-09-13 00:11:25.398 [INFO][5904] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" iface="eth0" netns="" Sep 13 00:11:25.428816 containerd[1620]: 2025-09-13 00:11:25.398 [INFO][5904] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" Sep 13 00:11:25.428816 containerd[1620]: 2025-09-13 00:11:25.398 [INFO][5904] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" Sep 13 00:11:25.428816 containerd[1620]: 2025-09-13 00:11:25.418 [INFO][5911] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" HandleID="k8s-pod-network.c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" Workload="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--9jlsq-eth0" Sep 13 00:11:25.428816 containerd[1620]: 2025-09-13 00:11:25.418 [INFO][5911] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:25.428816 containerd[1620]: 2025-09-13 00:11:25.418 [INFO][5911] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:25.428816 containerd[1620]: 2025-09-13 00:11:25.423 [WARNING][5911] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" HandleID="k8s-pod-network.c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" Workload="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--9jlsq-eth0" Sep 13 00:11:25.428816 containerd[1620]: 2025-09-13 00:11:25.423 [INFO][5911] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" HandleID="k8s-pod-network.c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" Workload="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--9jlsq-eth0" Sep 13 00:11:25.428816 containerd[1620]: 2025-09-13 00:11:25.424 [INFO][5911] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:25.428816 containerd[1620]: 2025-09-13 00:11:25.426 [INFO][5904] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" Sep 13 00:11:25.428816 containerd[1620]: time="2025-09-13T00:11:25.428808831Z" level=info msg="TearDown network for sandbox \"c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3\" successfully" Sep 13 00:11:25.430926 containerd[1620]: time="2025-09-13T00:11:25.428834640Z" level=info msg="StopPodSandbox for \"c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3\" returns successfully" Sep 13 00:11:25.432609 containerd[1620]: time="2025-09-13T00:11:25.432586897Z" level=info msg="RemovePodSandbox for \"c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3\"" Sep 13 00:11:25.435789 containerd[1620]: time="2025-09-13T00:11:25.435752972Z" level=info msg="Forcibly stopping sandbox \"c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3\"" Sep 13 00:11:25.515344 containerd[1620]: 2025-09-13 00:11:25.473 [WARNING][5925] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--9jlsq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"398f713a-e38d-4416-8b6a-bb19b2e75262", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"63b313bca3707b8bd61414e707551dc97861981cf0c4446707a50b77b9ef485a", Pod:"coredns-7c65d6cfc9-9jlsq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic229c22f614", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:25.515344 containerd[1620]: 2025-09-13 00:11:25.473 [INFO][5925] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" Sep 13 00:11:25.515344 containerd[1620]: 2025-09-13 00:11:25.473 [INFO][5925] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" iface="eth0" netns="" Sep 13 00:11:25.515344 containerd[1620]: 2025-09-13 00:11:25.473 [INFO][5925] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" Sep 13 00:11:25.515344 containerd[1620]: 2025-09-13 00:11:25.473 [INFO][5925] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" Sep 13 00:11:25.515344 containerd[1620]: 2025-09-13 00:11:25.501 [INFO][5932] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" HandleID="k8s-pod-network.c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" Workload="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--9jlsq-eth0" Sep 13 00:11:25.515344 containerd[1620]: 2025-09-13 00:11:25.501 [INFO][5932] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:25.515344 containerd[1620]: 2025-09-13 00:11:25.501 [INFO][5932] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:11:25.515344 containerd[1620]: 2025-09-13 00:11:25.507 [WARNING][5932] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" HandleID="k8s-pod-network.c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" Workload="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--9jlsq-eth0" Sep 13 00:11:25.515344 containerd[1620]: 2025-09-13 00:11:25.507 [INFO][5932] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" HandleID="k8s-pod-network.c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" Workload="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--9jlsq-eth0" Sep 13 00:11:25.515344 containerd[1620]: 2025-09-13 00:11:25.509 [INFO][5932] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:25.515344 containerd[1620]: 2025-09-13 00:11:25.511 [INFO][5925] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3" Sep 13 00:11:25.515344 containerd[1620]: time="2025-09-13T00:11:25.515284756Z" level=info msg="TearDown network for sandbox \"c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3\" successfully" Sep 13 00:11:25.522361 containerd[1620]: time="2025-09-13T00:11:25.522144207Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:11:25.527131 containerd[1620]: time="2025-09-13T00:11:25.527021563Z" level=info msg="RemovePodSandbox \"c3fda8ec5af1631b308185d7299329dd24f3415ae6e194eae18c280c694899e3\" returns successfully" Sep 13 00:11:25.527595 containerd[1620]: time="2025-09-13T00:11:25.527568433Z" level=info msg="StopPodSandbox for \"30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0\"" Sep 13 00:11:25.626710 containerd[1620]: 2025-09-13 00:11:25.590 [WARNING][5948] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--2m6jh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ea30c4ae-16d9-4c82-be5e-0de5a9b6d8dc", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"546201fc5932b8616af6d09045f84d3e86b990ea819d1f35dfd3856635dca719", Pod:"coredns-7c65d6cfc9-2m6jh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali75f4e169236", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:25.626710 containerd[1620]: 2025-09-13 00:11:25.591 [INFO][5948] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" Sep 13 00:11:25.626710 containerd[1620]: 2025-09-13 00:11:25.591 [INFO][5948] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" iface="eth0" netns="" Sep 13 00:11:25.626710 containerd[1620]: 2025-09-13 00:11:25.591 [INFO][5948] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" Sep 13 00:11:25.626710 containerd[1620]: 2025-09-13 00:11:25.591 [INFO][5948] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" Sep 13 00:11:25.626710 containerd[1620]: 2025-09-13 00:11:25.615 [INFO][5956] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" HandleID="k8s-pod-network.30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" Workload="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--2m6jh-eth0" Sep 13 00:11:25.626710 containerd[1620]: 2025-09-13 00:11:25.615 [INFO][5956] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:25.626710 containerd[1620]: 2025-09-13 00:11:25.615 [INFO][5956] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:11:25.626710 containerd[1620]: 2025-09-13 00:11:25.620 [WARNING][5956] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" HandleID="k8s-pod-network.30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" Workload="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--2m6jh-eth0" Sep 13 00:11:25.626710 containerd[1620]: 2025-09-13 00:11:25.620 [INFO][5956] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" HandleID="k8s-pod-network.30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" Workload="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--2m6jh-eth0" Sep 13 00:11:25.626710 containerd[1620]: 2025-09-13 00:11:25.622 [INFO][5956] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:25.626710 containerd[1620]: 2025-09-13 00:11:25.624 [INFO][5948] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" Sep 13 00:11:25.626710 containerd[1620]: time="2025-09-13T00:11:25.626092836Z" level=info msg="TearDown network for sandbox \"30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0\" successfully" Sep 13 00:11:25.626710 containerd[1620]: time="2025-09-13T00:11:25.626130496Z" level=info msg="StopPodSandbox for \"30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0\" returns successfully" Sep 13 00:11:25.652705 containerd[1620]: time="2025-09-13T00:11:25.652464815Z" level=info msg="RemovePodSandbox for \"30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0\"" Sep 13 00:11:25.652705 containerd[1620]: time="2025-09-13T00:11:25.652496394Z" level=info msg="Forcibly stopping sandbox \"30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0\"" Sep 13 00:11:25.732868 containerd[1620]: 2025-09-13 00:11:25.689 [WARNING][5971] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--2m6jh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ea30c4ae-16d9-4c82-be5e-0de5a9b6d8dc", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"546201fc5932b8616af6d09045f84d3e86b990ea819d1f35dfd3856635dca719", Pod:"coredns-7c65d6cfc9-2m6jh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali75f4e169236", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:25.732868 containerd[1620]: 2025-09-13 00:11:25.689 [INFO][5971] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" Sep 13 00:11:25.732868 containerd[1620]: 2025-09-13 00:11:25.689 [INFO][5971] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" iface="eth0" netns="" Sep 13 00:11:25.732868 containerd[1620]: 2025-09-13 00:11:25.689 [INFO][5971] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" Sep 13 00:11:25.732868 containerd[1620]: 2025-09-13 00:11:25.689 [INFO][5971] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" Sep 13 00:11:25.732868 containerd[1620]: 2025-09-13 00:11:25.718 [INFO][5978] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" HandleID="k8s-pod-network.30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" Workload="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--2m6jh-eth0" Sep 13 00:11:25.732868 containerd[1620]: 2025-09-13 00:11:25.719 [INFO][5978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:25.732868 containerd[1620]: 2025-09-13 00:11:25.719 [INFO][5978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:11:25.732868 containerd[1620]: 2025-09-13 00:11:25.726 [WARNING][5978] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" HandleID="k8s-pod-network.30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" Workload="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--2m6jh-eth0" Sep 13 00:11:25.732868 containerd[1620]: 2025-09-13 00:11:25.726 [INFO][5978] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" HandleID="k8s-pod-network.30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" Workload="ci--4081--3--5--n--662926fb9e-k8s-coredns--7c65d6cfc9--2m6jh-eth0" Sep 13 00:11:25.732868 containerd[1620]: 2025-09-13 00:11:25.728 [INFO][5978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:25.732868 containerd[1620]: 2025-09-13 00:11:25.730 [INFO][5971] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0" Sep 13 00:11:25.732868 containerd[1620]: time="2025-09-13T00:11:25.732844601Z" level=info msg="TearDown network for sandbox \"30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0\" successfully" Sep 13 00:11:25.811916 containerd[1620]: time="2025-09-13T00:11:25.811859521Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:11:25.812028 containerd[1620]: time="2025-09-13T00:11:25.811946815Z" level=info msg="RemovePodSandbox \"30ae70612f911284e4eb12fa390569d176c16686de0b23c5b9c3df1d82d244a0\" returns successfully" Sep 13 00:11:25.812417 containerd[1620]: time="2025-09-13T00:11:25.812375474Z" level=info msg="StopPodSandbox for \"c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059\"" Sep 13 00:11:25.875299 containerd[1620]: 2025-09-13 00:11:25.843 [WARNING][5993] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-whisker--7f8489888f--4s4v8-eth0" Sep 13 00:11:25.875299 containerd[1620]: 2025-09-13 00:11:25.843 [INFO][5993] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" Sep 13 00:11:25.875299 containerd[1620]: 2025-09-13 00:11:25.843 [INFO][5993] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" iface="eth0" netns="" Sep 13 00:11:25.875299 containerd[1620]: 2025-09-13 00:11:25.843 [INFO][5993] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" Sep 13 00:11:25.875299 containerd[1620]: 2025-09-13 00:11:25.843 [INFO][5993] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" Sep 13 00:11:25.875299 containerd[1620]: 2025-09-13 00:11:25.864 [INFO][6001] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" HandleID="k8s-pod-network.c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" Workload="ci--4081--3--5--n--662926fb9e-k8s-whisker--7f8489888f--4s4v8-eth0" Sep 13 00:11:25.875299 containerd[1620]: 2025-09-13 00:11:25.864 [INFO][6001] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:25.875299 containerd[1620]: 2025-09-13 00:11:25.864 [INFO][6001] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:25.875299 containerd[1620]: 2025-09-13 00:11:25.869 [WARNING][6001] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" HandleID="k8s-pod-network.c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" Workload="ci--4081--3--5--n--662926fb9e-k8s-whisker--7f8489888f--4s4v8-eth0" Sep 13 00:11:25.875299 containerd[1620]: 2025-09-13 00:11:25.869 [INFO][6001] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" HandleID="k8s-pod-network.c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" Workload="ci--4081--3--5--n--662926fb9e-k8s-whisker--7f8489888f--4s4v8-eth0" Sep 13 00:11:25.875299 containerd[1620]: 2025-09-13 00:11:25.871 [INFO][6001] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:25.875299 containerd[1620]: 2025-09-13 00:11:25.873 [INFO][5993] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" Sep 13 00:11:25.875299 containerd[1620]: time="2025-09-13T00:11:25.875164977Z" level=info msg="TearDown network for sandbox \"c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059\" successfully" Sep 13 00:11:25.875299 containerd[1620]: time="2025-09-13T00:11:25.875189963Z" level=info msg="StopPodSandbox for \"c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059\" returns successfully" Sep 13 00:11:25.877726 containerd[1620]: time="2025-09-13T00:11:25.875752763Z" level=info msg="RemovePodSandbox for \"c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059\"" Sep 13 00:11:25.877726 containerd[1620]: time="2025-09-13T00:11:25.875782909Z" level=info msg="Forcibly stopping sandbox \"c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059\"" Sep 13 00:11:25.935360 containerd[1620]: 2025-09-13 00:11:25.906 [WARNING][6015] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-whisker--7f8489888f--4s4v8-eth0" Sep 13 00:11:25.935360 containerd[1620]: 2025-09-13 00:11:25.906 [INFO][6015] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" Sep 13 00:11:25.935360 containerd[1620]: 2025-09-13 00:11:25.906 [INFO][6015] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" iface="eth0" netns="" Sep 13 00:11:25.935360 containerd[1620]: 2025-09-13 00:11:25.906 [INFO][6015] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" Sep 13 00:11:25.935360 containerd[1620]: 2025-09-13 00:11:25.906 [INFO][6015] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" Sep 13 00:11:25.935360 containerd[1620]: 2025-09-13 00:11:25.924 [INFO][6022] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" HandleID="k8s-pod-network.c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" Workload="ci--4081--3--5--n--662926fb9e-k8s-whisker--7f8489888f--4s4v8-eth0" Sep 13 00:11:25.935360 containerd[1620]: 2025-09-13 00:11:25.924 [INFO][6022] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:25.935360 containerd[1620]: 2025-09-13 00:11:25.924 [INFO][6022] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:25.935360 containerd[1620]: 2025-09-13 00:11:25.929 [WARNING][6022] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" HandleID="k8s-pod-network.c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" Workload="ci--4081--3--5--n--662926fb9e-k8s-whisker--7f8489888f--4s4v8-eth0" Sep 13 00:11:25.935360 containerd[1620]: 2025-09-13 00:11:25.929 [INFO][6022] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" HandleID="k8s-pod-network.c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" Workload="ci--4081--3--5--n--662926fb9e-k8s-whisker--7f8489888f--4s4v8-eth0" Sep 13 00:11:25.935360 containerd[1620]: 2025-09-13 00:11:25.931 [INFO][6022] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:25.935360 containerd[1620]: 2025-09-13 00:11:25.932 [INFO][6015] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059" Sep 13 00:11:25.935360 containerd[1620]: time="2025-09-13T00:11:25.934293101Z" level=info msg="TearDown network for sandbox \"c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059\" successfully" Sep 13 00:11:25.937601 containerd[1620]: time="2025-09-13T00:11:25.937562388Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:11:25.937642 containerd[1620]: time="2025-09-13T00:11:25.937617360Z" level=info msg="RemovePodSandbox \"c1c391b6a30422fb67eec69e790ab4c0ec59502e9e83f916dc354c7c273f5059\" returns successfully" Sep 13 00:11:25.938037 containerd[1620]: time="2025-09-13T00:11:25.938014250Z" level=info msg="StopPodSandbox for \"d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db\"" Sep 13 00:11:26.019076 containerd[1620]: 2025-09-13 00:11:25.968 [WARNING][6036] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-calico--kube--controllers--6dc566c86b--xkhhq-eth0", GenerateName:"calico-kube-controllers-6dc566c86b-", Namespace:"calico-system", SelfLink:"", UID:"31e963cb-458c-437d-b290-9bfcbbbfa753", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dc566c86b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"3b8d05fde61e57823330c29acbd280cb4f02f95dbced59aa915b2befdcfa609d", Pod:"calico-kube-controllers-6dc566c86b-xkhhq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.28.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali51906b91a83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:26.019076 containerd[1620]: 2025-09-13 00:11:25.968 [INFO][6036] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" Sep 13 00:11:26.019076 containerd[1620]: 2025-09-13 00:11:25.968 [INFO][6036] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" iface="eth0" netns="" Sep 13 00:11:26.019076 containerd[1620]: 2025-09-13 00:11:25.968 [INFO][6036] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" Sep 13 00:11:26.019076 containerd[1620]: 2025-09-13 00:11:25.968 [INFO][6036] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" Sep 13 00:11:26.019076 containerd[1620]: 2025-09-13 00:11:25.997 [INFO][6043] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" HandleID="k8s-pod-network.d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--kube--controllers--6dc566c86b--xkhhq-eth0" Sep 13 00:11:26.019076 containerd[1620]: 2025-09-13 00:11:25.997 [INFO][6043] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:26.019076 containerd[1620]: 2025-09-13 00:11:25.997 [INFO][6043] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:26.019076 containerd[1620]: 2025-09-13 00:11:26.006 [WARNING][6043] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" HandleID="k8s-pod-network.d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--kube--controllers--6dc566c86b--xkhhq-eth0" Sep 13 00:11:26.019076 containerd[1620]: 2025-09-13 00:11:26.007 [INFO][6043] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" HandleID="k8s-pod-network.d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--kube--controllers--6dc566c86b--xkhhq-eth0" Sep 13 00:11:26.019076 containerd[1620]: 2025-09-13 00:11:26.008 [INFO][6043] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:26.019076 containerd[1620]: 2025-09-13 00:11:26.012 [INFO][6036] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" Sep 13 00:11:26.019076 containerd[1620]: time="2025-09-13T00:11:26.018583504Z" level=info msg="TearDown network for sandbox \"d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db\" successfully" Sep 13 00:11:26.019076 containerd[1620]: time="2025-09-13T00:11:26.018606668Z" level=info msg="StopPodSandbox for \"d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db\" returns successfully" Sep 13 00:11:26.019076 containerd[1620]: time="2025-09-13T00:11:26.019075322Z" level=info msg="RemovePodSandbox for \"d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db\"" Sep 13 00:11:26.022339 containerd[1620]: time="2025-09-13T00:11:26.019097984Z" level=info msg="Forcibly stopping sandbox \"d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db\"" Sep 13 00:11:26.072155 kubelet[2755]: I0913 00:11:26.058190 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2rrtz" podStartSLOduration=28.751130099 podStartE2EDuration="44.056706849s" podCreationTimestamp="2025-09-13 00:10:42 +0000 UTC" firstStartedPulling="2025-09-13 00:11:09.75323779 +0000 UTC m=+44.745347106" lastFinishedPulling="2025-09-13 00:11:25.058814541 +0000 UTC m=+60.050923856" observedRunningTime="2025-09-13 00:11:26.056087013 +0000 UTC m=+61.048196329" watchObservedRunningTime="2025-09-13 00:11:26.056706849 +0000 UTC m=+61.048816165" Sep 13 00:11:26.105238 containerd[1620]: 2025-09-13 00:11:26.076 [WARNING][6057] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-calico--kube--controllers--6dc566c86b--xkhhq-eth0", GenerateName:"calico-kube-controllers-6dc566c86b-", Namespace:"calico-system", SelfLink:"", UID:"31e963cb-458c-437d-b290-9bfcbbbfa753", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dc566c86b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"3b8d05fde61e57823330c29acbd280cb4f02f95dbced59aa915b2befdcfa609d", Pod:"calico-kube-controllers-6dc566c86b-xkhhq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.28.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali51906b91a83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:26.105238 containerd[1620]: 2025-09-13 00:11:26.076 [INFO][6057] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" Sep 13 00:11:26.105238 containerd[1620]: 2025-09-13 00:11:26.076 [INFO][6057] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" iface="eth0" netns="" Sep 13 00:11:26.105238 containerd[1620]: 2025-09-13 00:11:26.076 [INFO][6057] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" Sep 13 00:11:26.105238 containerd[1620]: 2025-09-13 00:11:26.076 [INFO][6057] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" Sep 13 00:11:26.105238 containerd[1620]: 2025-09-13 00:11:26.093 [INFO][6064] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" HandleID="k8s-pod-network.d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--kube--controllers--6dc566c86b--xkhhq-eth0" Sep 13 00:11:26.105238 containerd[1620]: 2025-09-13 00:11:26.093 [INFO][6064] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:26.105238 containerd[1620]: 2025-09-13 00:11:26.093 [INFO][6064] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:26.105238 containerd[1620]: 2025-09-13 00:11:26.100 [WARNING][6064] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" HandleID="k8s-pod-network.d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--kube--controllers--6dc566c86b--xkhhq-eth0" Sep 13 00:11:26.105238 containerd[1620]: 2025-09-13 00:11:26.100 [INFO][6064] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" HandleID="k8s-pod-network.d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--kube--controllers--6dc566c86b--xkhhq-eth0" Sep 13 00:11:26.105238 containerd[1620]: 2025-09-13 00:11:26.101 [INFO][6064] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:26.105238 containerd[1620]: 2025-09-13 00:11:26.103 [INFO][6057] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db" Sep 13 00:11:26.106595 containerd[1620]: time="2025-09-13T00:11:26.105377024Z" level=info msg="TearDown network for sandbox \"d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db\" successfully" Sep 13 00:11:26.114202 containerd[1620]: time="2025-09-13T00:11:26.114126904Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:11:26.114300 containerd[1620]: time="2025-09-13T00:11:26.114225989Z" level=info msg="RemovePodSandbox \"d047b558a6de295852a3f644a108562eb56a5fa0ead6e35929280c3c78b5f9db\" returns successfully" Sep 13 00:11:26.114798 containerd[1620]: time="2025-09-13T00:11:26.114777186Z" level=info msg="StopPodSandbox for \"fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad\"" Sep 13 00:11:26.192294 containerd[1620]: 2025-09-13 00:11:26.155 [WARNING][6079] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0", GenerateName:"calico-apiserver-bcbcd6df9-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1724762-f3a5-4a7f-9c75-353a81a041e5", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bcbcd6df9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e", Pod:"calico-apiserver-bcbcd6df9-pltxz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali36be0794c39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:26.192294 containerd[1620]: 2025-09-13 00:11:26.156 [INFO][6079] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" Sep 13 00:11:26.192294 containerd[1620]: 2025-09-13 00:11:26.156 [INFO][6079] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" iface="eth0" netns="" Sep 13 00:11:26.192294 containerd[1620]: 2025-09-13 00:11:26.156 [INFO][6079] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" Sep 13 00:11:26.192294 containerd[1620]: 2025-09-13 00:11:26.156 [INFO][6079] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" Sep 13 00:11:26.192294 containerd[1620]: 2025-09-13 00:11:26.179 [INFO][6087] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" HandleID="k8s-pod-network.fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0" Sep 13 00:11:26.192294 containerd[1620]: 2025-09-13 00:11:26.179 [INFO][6087] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:26.192294 containerd[1620]: 2025-09-13 00:11:26.179 [INFO][6087] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:26.192294 containerd[1620]: 2025-09-13 00:11:26.184 [WARNING][6087] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" HandleID="k8s-pod-network.fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0" Sep 13 00:11:26.192294 containerd[1620]: 2025-09-13 00:11:26.184 [INFO][6087] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" HandleID="k8s-pod-network.fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0" Sep 13 00:11:26.192294 containerd[1620]: 2025-09-13 00:11:26.186 [INFO][6087] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:26.192294 containerd[1620]: 2025-09-13 00:11:26.190 [INFO][6079] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" Sep 13 00:11:26.192294 containerd[1620]: time="2025-09-13T00:11:26.192266102Z" level=info msg="TearDown network for sandbox \"fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad\" successfully" Sep 13 00:11:26.192294 containerd[1620]: time="2025-09-13T00:11:26.192287421Z" level=info msg="StopPodSandbox for \"fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad\" returns successfully" Sep 13 00:11:26.197591 containerd[1620]: time="2025-09-13T00:11:26.193393825Z" level=info msg="RemovePodSandbox for \"fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad\"" Sep 13 00:11:26.197591 containerd[1620]: time="2025-09-13T00:11:26.193418270Z" level=info msg="Forcibly stopping sandbox \"fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad\"" Sep 13 00:11:26.268994 containerd[1620]: 2025-09-13 00:11:26.231 [WARNING][6101] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0", GenerateName:"calico-apiserver-bcbcd6df9-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1724762-f3a5-4a7f-9c75-353a81a041e5", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bcbcd6df9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e", Pod:"calico-apiserver-bcbcd6df9-pltxz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali36be0794c39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:26.268994 containerd[1620]: 2025-09-13 00:11:26.232 [INFO][6101] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" Sep 13 00:11:26.268994 containerd[1620]: 2025-09-13 00:11:26.232 [INFO][6101] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" iface="eth0" netns="" Sep 13 00:11:26.268994 containerd[1620]: 2025-09-13 00:11:26.232 [INFO][6101] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" Sep 13 00:11:26.268994 containerd[1620]: 2025-09-13 00:11:26.232 [INFO][6101] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" Sep 13 00:11:26.268994 containerd[1620]: 2025-09-13 00:11:26.253 [INFO][6109] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" HandleID="k8s-pod-network.fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0" Sep 13 00:11:26.268994 containerd[1620]: 2025-09-13 00:11:26.254 [INFO][6109] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:26.268994 containerd[1620]: 2025-09-13 00:11:26.254 [INFO][6109] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:26.268994 containerd[1620]: 2025-09-13 00:11:26.260 [WARNING][6109] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" HandleID="k8s-pod-network.fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0" Sep 13 00:11:26.268994 containerd[1620]: 2025-09-13 00:11:26.260 [INFO][6109] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" HandleID="k8s-pod-network.fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0" Sep 13 00:11:26.268994 containerd[1620]: 2025-09-13 00:11:26.261 [INFO][6109] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:26.268994 containerd[1620]: 2025-09-13 00:11:26.267 [INFO][6101] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad" Sep 13 00:11:26.269976 containerd[1620]: time="2025-09-13T00:11:26.269461960Z" level=info msg="TearDown network for sandbox \"fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad\" successfully" Sep 13 00:11:26.275781 containerd[1620]: time="2025-09-13T00:11:26.275634934Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:11:26.275781 containerd[1620]: time="2025-09-13T00:11:26.275692301Z" level=info msg="RemovePodSandbox \"fe5583861fc8dc052b13006d3aa53248aa6853e4395b295330d2f53fae3277ad\" returns successfully" Sep 13 00:11:26.276440 containerd[1620]: time="2025-09-13T00:11:26.276149083Z" level=info msg="StopPodSandbox for \"ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f\"" Sep 13 00:11:26.343065 kubelet[2755]: I0913 00:11:26.343036 2755 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:11:26.344448 containerd[1620]: 2025-09-13 00:11:26.309 [WARNING][6123] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" Sep 13 00:11:26.344448 containerd[1620]: 2025-09-13 00:11:26.309 [INFO][6123] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Sep 13 00:11:26.344448 containerd[1620]: 2025-09-13 00:11:26.309 [INFO][6123] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" iface="eth0" netns="" Sep 13 00:11:26.344448 containerd[1620]: 2025-09-13 00:11:26.309 [INFO][6123] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Sep 13 00:11:26.344448 containerd[1620]: 2025-09-13 00:11:26.309 [INFO][6123] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Sep 13 00:11:26.344448 containerd[1620]: 2025-09-13 00:11:26.330 [INFO][6131] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" HandleID="k8s-pod-network.ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" Sep 13 00:11:26.344448 containerd[1620]: 2025-09-13 00:11:26.330 [INFO][6131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:26.344448 containerd[1620]: 2025-09-13 00:11:26.331 [INFO][6131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:26.344448 containerd[1620]: 2025-09-13 00:11:26.338 [WARNING][6131] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" HandleID="k8s-pod-network.ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" Sep 13 00:11:26.344448 containerd[1620]: 2025-09-13 00:11:26.338 [INFO][6131] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" HandleID="k8s-pod-network.ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" Sep 13 00:11:26.344448 containerd[1620]: 2025-09-13 00:11:26.340 [INFO][6131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:26.344448 containerd[1620]: 2025-09-13 00:11:26.342 [INFO][6123] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Sep 13 00:11:26.347265 containerd[1620]: time="2025-09-13T00:11:26.344484506Z" level=info msg="TearDown network for sandbox \"ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f\" successfully" Sep 13 00:11:26.347265 containerd[1620]: time="2025-09-13T00:11:26.344505274Z" level=info msg="StopPodSandbox for \"ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f\" returns successfully" Sep 13 00:11:26.347265 containerd[1620]: time="2025-09-13T00:11:26.344884472Z" level=info msg="RemovePodSandbox for \"ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f\"" Sep 13 00:11:26.347265 containerd[1620]: time="2025-09-13T00:11:26.344902846Z" level=info msg="Forcibly stopping sandbox \"ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f\"" Sep 13 00:11:26.347860 kubelet[2755]: I0913 00:11:26.345118 2755 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:11:26.433750 containerd[1620]: 2025-09-13 00:11:26.403 [WARNING][6145] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" Sep 13 00:11:26.433750 containerd[1620]: 2025-09-13 00:11:26.404 [INFO][6145] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Sep 13 00:11:26.433750 containerd[1620]: 2025-09-13 00:11:26.404 [INFO][6145] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" iface="eth0" netns="" Sep 13 00:11:26.433750 containerd[1620]: 2025-09-13 00:11:26.404 [INFO][6145] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Sep 13 00:11:26.433750 containerd[1620]: 2025-09-13 00:11:26.404 [INFO][6145] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Sep 13 00:11:26.433750 containerd[1620]: 2025-09-13 00:11:26.424 [INFO][6152] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" HandleID="k8s-pod-network.ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" Sep 13 00:11:26.433750 containerd[1620]: 2025-09-13 00:11:26.424 [INFO][6152] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:26.433750 containerd[1620]: 2025-09-13 00:11:26.424 [INFO][6152] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:26.433750 containerd[1620]: 2025-09-13 00:11:26.428 [WARNING][6152] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" HandleID="k8s-pod-network.ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" Sep 13 00:11:26.433750 containerd[1620]: 2025-09-13 00:11:26.429 [INFO][6152] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" HandleID="k8s-pod-network.ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" Sep 13 00:11:26.433750 containerd[1620]: 2025-09-13 00:11:26.430 [INFO][6152] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:26.433750 containerd[1620]: 2025-09-13 00:11:26.431 [INFO][6145] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f" Sep 13 00:11:26.434514 containerd[1620]: time="2025-09-13T00:11:26.433763441Z" level=info msg="TearDown network for sandbox \"ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f\" successfully" Sep 13 00:11:26.437916 containerd[1620]: time="2025-09-13T00:11:26.437886951Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:11:26.438027 containerd[1620]: time="2025-09-13T00:11:26.437940081Z" level=info msg="RemovePodSandbox \"ba9fc4ed406f41e55753f85bd9fdf528ac28efd22dae43b080ded2f5cc42810f\" returns successfully" Sep 13 00:11:26.438744 containerd[1620]: time="2025-09-13T00:11:26.438692303Z" level=info msg="StopPodSandbox for \"87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2\"" Sep 13 00:11:26.496754 containerd[1620]: 2025-09-13 00:11:26.466 [WARNING][6167] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" Sep 13 00:11:26.496754 containerd[1620]: 2025-09-13 00:11:26.466 [INFO][6167] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" Sep 13 00:11:26.496754 containerd[1620]: 2025-09-13 00:11:26.466 [INFO][6167] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" iface="eth0" netns="" Sep 13 00:11:26.496754 containerd[1620]: 2025-09-13 00:11:26.466 [INFO][6167] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" Sep 13 00:11:26.496754 containerd[1620]: 2025-09-13 00:11:26.467 [INFO][6167] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" Sep 13 00:11:26.496754 containerd[1620]: 2025-09-13 00:11:26.486 [INFO][6174] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" HandleID="k8s-pod-network.87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" Sep 13 00:11:26.496754 containerd[1620]: 2025-09-13 00:11:26.486 [INFO][6174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:26.496754 containerd[1620]: 2025-09-13 00:11:26.486 [INFO][6174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:26.496754 containerd[1620]: 2025-09-13 00:11:26.491 [WARNING][6174] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" HandleID="k8s-pod-network.87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" Sep 13 00:11:26.496754 containerd[1620]: 2025-09-13 00:11:26.491 [INFO][6174] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" HandleID="k8s-pod-network.87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" Sep 13 00:11:26.496754 containerd[1620]: 2025-09-13 00:11:26.493 [INFO][6174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:26.496754 containerd[1620]: 2025-09-13 00:11:26.494 [INFO][6167] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" Sep 13 00:11:26.496754 containerd[1620]: time="2025-09-13T00:11:26.496564762Z" level=info msg="TearDown network for sandbox \"87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2\" successfully" Sep 13 00:11:26.496754 containerd[1620]: time="2025-09-13T00:11:26.496586744Z" level=info msg="StopPodSandbox for \"87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2\" returns successfully" Sep 13 00:11:26.498788 containerd[1620]: time="2025-09-13T00:11:26.497448981Z" level=info msg="RemovePodSandbox for \"87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2\"" Sep 13 00:11:26.498788 containerd[1620]: time="2025-09-13T00:11:26.497471172Z" level=info msg="Forcibly stopping sandbox \"87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2\"" Sep 13 00:11:26.564361 containerd[1620]: 2025-09-13 00:11:26.529 [WARNING][6188] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" Sep 13 00:11:26.564361 containerd[1620]: 2025-09-13 00:11:26.531 [INFO][6188] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" Sep 13 00:11:26.564361 containerd[1620]: 2025-09-13 00:11:26.531 [INFO][6188] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" iface="eth0" netns="" Sep 13 00:11:26.564361 containerd[1620]: 2025-09-13 00:11:26.531 [INFO][6188] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" Sep 13 00:11:26.564361 containerd[1620]: 2025-09-13 00:11:26.531 [INFO][6188] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" Sep 13 00:11:26.564361 containerd[1620]: 2025-09-13 00:11:26.551 [INFO][6195] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" HandleID="k8s-pod-network.87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" Sep 13 00:11:26.564361 containerd[1620]: 2025-09-13 00:11:26.552 [INFO][6195] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:26.564361 containerd[1620]: 2025-09-13 00:11:26.552 [INFO][6195] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:26.564361 containerd[1620]: 2025-09-13 00:11:26.558 [WARNING][6195] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" HandleID="k8s-pod-network.87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" Sep 13 00:11:26.564361 containerd[1620]: 2025-09-13 00:11:26.558 [INFO][6195] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" HandleID="k8s-pod-network.87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--cwbjt-eth0" Sep 13 00:11:26.564361 containerd[1620]: 2025-09-13 00:11:26.560 [INFO][6195] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:26.564361 containerd[1620]: 2025-09-13 00:11:26.562 [INFO][6188] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2" Sep 13 00:11:26.564361 containerd[1620]: time="2025-09-13T00:11:26.564161859Z" level=info msg="TearDown network for sandbox \"87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2\" successfully" Sep 13 00:11:26.569256 containerd[1620]: time="2025-09-13T00:11:26.569215995Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:11:26.569327 containerd[1620]: time="2025-09-13T00:11:26.569278192Z" level=info msg="RemovePodSandbox \"87e841b9b2a4e4aafb40045c6e9334d3874222f2320274545e5dcb9fe42555e2\" returns successfully" Sep 13 00:11:26.569756 containerd[1620]: time="2025-09-13T00:11:26.569731746Z" level=info msg="StopPodSandbox for \"2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072\"" Sep 13 00:11:26.640854 containerd[1620]: 2025-09-13 00:11:26.597 [WARNING][6209] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-goldmane--7988f88666--q6rw4-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"33f7161a-ca41-4c6b-95d5-d5f552f3a553", ResourceVersion:"1132", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"7072df19511b7031d04d8d544faef55f3f1b9ca19b333fd170a0c78ce0926b38", Pod:"goldmane-7988f88666-q6rw4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.28.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali252f60d1474", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:26.640854 containerd[1620]: 2025-09-13 00:11:26.597 [INFO][6209] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" Sep 13 00:11:26.640854 containerd[1620]: 2025-09-13 00:11:26.597 [INFO][6209] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" iface="eth0" netns="" Sep 13 00:11:26.640854 containerd[1620]: 2025-09-13 00:11:26.597 [INFO][6209] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" Sep 13 00:11:26.640854 containerd[1620]: 2025-09-13 00:11:26.597 [INFO][6209] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" Sep 13 00:11:26.640854 containerd[1620]: 2025-09-13 00:11:26.630 [INFO][6216] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" HandleID="k8s-pod-network.2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" Workload="ci--4081--3--5--n--662926fb9e-k8s-goldmane--7988f88666--q6rw4-eth0" Sep 13 00:11:26.640854 containerd[1620]: 2025-09-13 00:11:26.630 [INFO][6216] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:26.640854 containerd[1620]: 2025-09-13 00:11:26.630 [INFO][6216] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:26.640854 containerd[1620]: 2025-09-13 00:11:26.635 [WARNING][6216] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" HandleID="k8s-pod-network.2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" Workload="ci--4081--3--5--n--662926fb9e-k8s-goldmane--7988f88666--q6rw4-eth0" Sep 13 00:11:26.640854 containerd[1620]: 2025-09-13 00:11:26.635 [INFO][6216] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" HandleID="k8s-pod-network.2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" Workload="ci--4081--3--5--n--662926fb9e-k8s-goldmane--7988f88666--q6rw4-eth0" Sep 13 00:11:26.640854 containerd[1620]: 2025-09-13 00:11:26.637 [INFO][6216] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:26.640854 containerd[1620]: 2025-09-13 00:11:26.639 [INFO][6209] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" Sep 13 00:11:26.641456 containerd[1620]: time="2025-09-13T00:11:26.640873023Z" level=info msg="TearDown network for sandbox \"2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072\" successfully" Sep 13 00:11:26.641456 containerd[1620]: time="2025-09-13T00:11:26.640896257Z" level=info msg="StopPodSandbox for \"2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072\" returns successfully" Sep 13 00:11:26.641456 containerd[1620]: time="2025-09-13T00:11:26.641405085Z" level=info msg="RemovePodSandbox for \"2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072\"" Sep 13 00:11:26.641456 containerd[1620]: time="2025-09-13T00:11:26.641446623Z" level=info msg="Forcibly stopping sandbox \"2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072\"" Sep 13 00:11:26.699607 containerd[1620]: 2025-09-13 00:11:26.669 [WARNING][6230] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-goldmane--7988f88666--q6rw4-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"33f7161a-ca41-4c6b-95d5-d5f552f3a553", ResourceVersion:"1132", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"7072df19511b7031d04d8d544faef55f3f1b9ca19b333fd170a0c78ce0926b38", Pod:"goldmane-7988f88666-q6rw4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.28.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali252f60d1474", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:26.699607 containerd[1620]: 2025-09-13 00:11:26.670 [INFO][6230] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" Sep 13 00:11:26.699607 containerd[1620]: 2025-09-13 00:11:26.670 [INFO][6230] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" iface="eth0" netns="" Sep 13 00:11:26.699607 containerd[1620]: 2025-09-13 00:11:26.670 [INFO][6230] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" Sep 13 00:11:26.699607 containerd[1620]: 2025-09-13 00:11:26.670 [INFO][6230] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" Sep 13 00:11:26.699607 containerd[1620]: 2025-09-13 00:11:26.688 [INFO][6238] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" HandleID="k8s-pod-network.2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" Workload="ci--4081--3--5--n--662926fb9e-k8s-goldmane--7988f88666--q6rw4-eth0" Sep 13 00:11:26.699607 containerd[1620]: 2025-09-13 00:11:26.688 [INFO][6238] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:26.699607 containerd[1620]: 2025-09-13 00:11:26.688 [INFO][6238] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:26.699607 containerd[1620]: 2025-09-13 00:11:26.694 [WARNING][6238] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" HandleID="k8s-pod-network.2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" Workload="ci--4081--3--5--n--662926fb9e-k8s-goldmane--7988f88666--q6rw4-eth0" Sep 13 00:11:26.699607 containerd[1620]: 2025-09-13 00:11:26.694 [INFO][6238] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" HandleID="k8s-pod-network.2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" Workload="ci--4081--3--5--n--662926fb9e-k8s-goldmane--7988f88666--q6rw4-eth0" Sep 13 00:11:26.699607 containerd[1620]: 2025-09-13 00:11:26.696 [INFO][6238] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:26.699607 containerd[1620]: 2025-09-13 00:11:26.697 [INFO][6230] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072" Sep 13 00:11:26.701483 containerd[1620]: time="2025-09-13T00:11:26.699733625Z" level=info msg="TearDown network for sandbox \"2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072\" successfully" Sep 13 00:11:26.703590 containerd[1620]: time="2025-09-13T00:11:26.703546486Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:11:26.703639 containerd[1620]: time="2025-09-13T00:11:26.703600466Z" level=info msg="RemovePodSandbox \"2e450f38e038612bf8c500ba0355369cf9126d3479c95938f5b9c46cd2026072\" returns successfully" Sep 13 00:11:26.704090 containerd[1620]: time="2025-09-13T00:11:26.704068310Z" level=info msg="StopPodSandbox for \"1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c\"" Sep 13 00:11:26.761391 containerd[1620]: 2025-09-13 00:11:26.731 [WARNING][6252] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nl8pm-eth0", GenerateName:"calico-apiserver-748c7ccd65-", Namespace:"calico-apiserver", SelfLink:"", UID:"cc241816-fb81-4b70-a8db-a4aa35a35261", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748c7ccd65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"a34e96249a4c56ffd61f572a48d86d08ad136cf4ef8d3ac9b893749d0de0b728", Pod:"calico-apiserver-748c7ccd65-nl8pm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali16608ba41f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:26.761391 containerd[1620]: 2025-09-13 00:11:26.731 [INFO][6252] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" Sep 13 00:11:26.761391 containerd[1620]: 2025-09-13 00:11:26.731 [INFO][6252] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" iface="eth0" netns="" Sep 13 00:11:26.761391 containerd[1620]: 2025-09-13 00:11:26.731 [INFO][6252] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" Sep 13 00:11:26.761391 containerd[1620]: 2025-09-13 00:11:26.732 [INFO][6252] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" Sep 13 00:11:26.761391 containerd[1620]: 2025-09-13 00:11:26.750 [INFO][6259] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" HandleID="k8s-pod-network.1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nl8pm-eth0" Sep 13 00:11:26.761391 containerd[1620]: 2025-09-13 00:11:26.750 [INFO][6259] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:26.761391 containerd[1620]: 2025-09-13 00:11:26.750 [INFO][6259] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:26.761391 containerd[1620]: 2025-09-13 00:11:26.756 [WARNING][6259] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" HandleID="k8s-pod-network.1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nl8pm-eth0" Sep 13 00:11:26.761391 containerd[1620]: 2025-09-13 00:11:26.756 [INFO][6259] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" HandleID="k8s-pod-network.1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nl8pm-eth0" Sep 13 00:11:26.761391 containerd[1620]: 2025-09-13 00:11:26.757 [INFO][6259] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:26.761391 containerd[1620]: 2025-09-13 00:11:26.759 [INFO][6252] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" Sep 13 00:11:26.761391 containerd[1620]: time="2025-09-13T00:11:26.761336411Z" level=info msg="TearDown network for sandbox \"1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c\" successfully" Sep 13 00:11:26.761391 containerd[1620]: time="2025-09-13T00:11:26.761362250Z" level=info msg="StopPodSandbox for \"1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c\" returns successfully" Sep 13 00:11:26.763096 containerd[1620]: time="2025-09-13T00:11:26.761732169Z" level=info msg="RemovePodSandbox for \"1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c\"" Sep 13 00:11:26.763096 containerd[1620]: time="2025-09-13T00:11:26.761760452Z" level=info msg="Forcibly stopping sandbox \"1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c\"" Sep 13 00:11:26.817810 containerd[1620]: 2025-09-13 00:11:26.790 [WARNING][6273] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nl8pm-eth0", GenerateName:"calico-apiserver-748c7ccd65-", Namespace:"calico-apiserver", SelfLink:"", UID:"cc241816-fb81-4b70-a8db-a4aa35a35261", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748c7ccd65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"a34e96249a4c56ffd61f572a48d86d08ad136cf4ef8d3ac9b893749d0de0b728", Pod:"calico-apiserver-748c7ccd65-nl8pm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali16608ba41f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:26.817810 containerd[1620]: 2025-09-13 00:11:26.790 [INFO][6273] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" Sep 13 00:11:26.817810 containerd[1620]: 2025-09-13 00:11:26.790 [INFO][6273] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" iface="eth0" netns="" Sep 13 00:11:26.817810 containerd[1620]: 2025-09-13 00:11:26.790 [INFO][6273] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" Sep 13 00:11:26.817810 containerd[1620]: 2025-09-13 00:11:26.790 [INFO][6273] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" Sep 13 00:11:26.817810 containerd[1620]: 2025-09-13 00:11:26.807 [INFO][6280] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" HandleID="k8s-pod-network.1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nl8pm-eth0" Sep 13 00:11:26.817810 containerd[1620]: 2025-09-13 00:11:26.807 [INFO][6280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:26.817810 containerd[1620]: 2025-09-13 00:11:26.808 [INFO][6280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:26.817810 containerd[1620]: 2025-09-13 00:11:26.812 [WARNING][6280] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" HandleID="k8s-pod-network.1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nl8pm-eth0" Sep 13 00:11:26.817810 containerd[1620]: 2025-09-13 00:11:26.812 [INFO][6280] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" HandleID="k8s-pod-network.1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--748c7ccd65--nl8pm-eth0" Sep 13 00:11:26.817810 containerd[1620]: 2025-09-13 00:11:26.814 [INFO][6280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:26.817810 containerd[1620]: 2025-09-13 00:11:26.815 [INFO][6273] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c" Sep 13 00:11:26.817810 containerd[1620]: time="2025-09-13T00:11:26.817782399Z" level=info msg="TearDown network for sandbox \"1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c\" successfully" Sep 13 00:11:26.824234 containerd[1620]: time="2025-09-13T00:11:26.824201551Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:11:26.824331 containerd[1620]: time="2025-09-13T00:11:26.824259299Z" level=info msg="RemovePodSandbox \"1fac64f2becc458607bc942e5759e212fb5e27687930b3ec6fc19700a7087a9c\" returns successfully" Sep 13 00:11:26.824784 containerd[1620]: time="2025-09-13T00:11:26.824746067Z" level=info msg="StopPodSandbox for \"515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7\"" Sep 13 00:11:26.893700 containerd[1620]: 2025-09-13 00:11:26.858 [WARNING][6295] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-csi--node--driver--2rrtz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2", ResourceVersion:"1163", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"62ab556a10ad86eeed3de4a4c8a085f83aa459fc6c3ec967de3cd5ebf959a23f", Pod:"csi-node-driver-2rrtz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.28.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1b0ffc4bff8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:26.893700 containerd[1620]: 2025-09-13 00:11:26.859 [INFO][6295] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" Sep 13 00:11:26.893700 containerd[1620]: 2025-09-13 00:11:26.859 [INFO][6295] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" iface="eth0" netns="" Sep 13 00:11:26.893700 containerd[1620]: 2025-09-13 00:11:26.859 [INFO][6295] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" Sep 13 00:11:26.893700 containerd[1620]: 2025-09-13 00:11:26.859 [INFO][6295] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" Sep 13 00:11:26.893700 containerd[1620]: 2025-09-13 00:11:26.882 [INFO][6302] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" HandleID="k8s-pod-network.515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" Workload="ci--4081--3--5--n--662926fb9e-k8s-csi--node--driver--2rrtz-eth0" Sep 13 00:11:26.893700 containerd[1620]: 2025-09-13 00:11:26.882 [INFO][6302] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:26.893700 containerd[1620]: 2025-09-13 00:11:26.882 [INFO][6302] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:26.893700 containerd[1620]: 2025-09-13 00:11:26.888 [WARNING][6302] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" HandleID="k8s-pod-network.515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" Workload="ci--4081--3--5--n--662926fb9e-k8s-csi--node--driver--2rrtz-eth0" Sep 13 00:11:26.893700 containerd[1620]: 2025-09-13 00:11:26.888 [INFO][6302] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" HandleID="k8s-pod-network.515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" Workload="ci--4081--3--5--n--662926fb9e-k8s-csi--node--driver--2rrtz-eth0" Sep 13 00:11:26.893700 containerd[1620]: 2025-09-13 00:11:26.889 [INFO][6302] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:26.893700 containerd[1620]: 2025-09-13 00:11:26.891 [INFO][6295] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" Sep 13 00:11:26.893700 containerd[1620]: time="2025-09-13T00:11:26.893564971Z" level=info msg="TearDown network for sandbox \"515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7\" successfully" Sep 13 00:11:26.893700 containerd[1620]: time="2025-09-13T00:11:26.893586823Z" level=info msg="StopPodSandbox for \"515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7\" returns successfully" Sep 13 00:11:26.894843 containerd[1620]: time="2025-09-13T00:11:26.894206468Z" level=info msg="RemovePodSandbox for \"515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7\"" Sep 13 00:11:26.894843 containerd[1620]: time="2025-09-13T00:11:26.894237346Z" level=info msg="Forcibly stopping sandbox \"515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7\"" Sep 13 00:11:26.901094 systemd-journald[1174]: Under memory pressure, flushing caches. Sep 13 00:11:26.895627 systemd-resolved[1510]: Under memory pressure, flushing caches. Sep 13 00:11:26.895658 systemd-resolved[1510]: Flushed all caches. Sep 13 00:11:26.962296 containerd[1620]: 2025-09-13 00:11:26.929 [WARNING][6316] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--662926fb9e-k8s-csi--node--driver--2rrtz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f3e2dd97-1ae5-4404-ba4d-d8147ba3acd2", ResourceVersion:"1163", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-662926fb9e", ContainerID:"62ab556a10ad86eeed3de4a4c8a085f83aa459fc6c3ec967de3cd5ebf959a23f", Pod:"csi-node-driver-2rrtz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.28.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1b0ffc4bff8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:26.962296 containerd[1620]: 2025-09-13 00:11:26.929 [INFO][6316] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" Sep 13 00:11:26.962296 containerd[1620]: 2025-09-13 00:11:26.929 [INFO][6316] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" iface="eth0" netns="" Sep 13 00:11:26.962296 containerd[1620]: 2025-09-13 00:11:26.930 [INFO][6316] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" Sep 13 00:11:26.962296 containerd[1620]: 2025-09-13 00:11:26.930 [INFO][6316] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" Sep 13 00:11:26.962296 containerd[1620]: 2025-09-13 00:11:26.948 [INFO][6323] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" HandleID="k8s-pod-network.515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" Workload="ci--4081--3--5--n--662926fb9e-k8s-csi--node--driver--2rrtz-eth0" Sep 13 00:11:26.962296 containerd[1620]: 2025-09-13 00:11:26.948 [INFO][6323] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:26.962296 containerd[1620]: 2025-09-13 00:11:26.948 [INFO][6323] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:26.962296 containerd[1620]: 2025-09-13 00:11:26.955 [WARNING][6323] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" HandleID="k8s-pod-network.515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" Workload="ci--4081--3--5--n--662926fb9e-k8s-csi--node--driver--2rrtz-eth0" Sep 13 00:11:26.962296 containerd[1620]: 2025-09-13 00:11:26.955 [INFO][6323] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" HandleID="k8s-pod-network.515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" Workload="ci--4081--3--5--n--662926fb9e-k8s-csi--node--driver--2rrtz-eth0" Sep 13 00:11:26.962296 containerd[1620]: 2025-09-13 00:11:26.956 [INFO][6323] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:26.962296 containerd[1620]: 2025-09-13 00:11:26.958 [INFO][6316] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7" Sep 13 00:11:26.962296 containerd[1620]: time="2025-09-13T00:11:26.961074324Z" level=info msg="TearDown network for sandbox \"515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7\" successfully" Sep 13 00:11:26.964814 containerd[1620]: time="2025-09-13T00:11:26.964780127Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:11:26.964920 containerd[1620]: time="2025-09-13T00:11:26.964842693Z" level=info msg="RemovePodSandbox \"515493e893bd04bfd9b52f8d054bb84ccec476056ec110f4209f6b7010fdc7d7\" returns successfully" Sep 13 00:11:28.945863 systemd-journald[1174]: Under memory pressure, flushing caches. Sep 13 00:11:28.943786 systemd-resolved[1510]: Under memory pressure, flushing caches. Sep 13 00:11:28.943796 systemd-resolved[1510]: Flushed all caches. Sep 13 00:11:35.252181 systemd[1]: run-containerd-runc-k8s.io-d41126440da7ae9dcdc487e20307e55ec06e3f097176feae3c3eff6f796b5650-runc.EZ9Mxh.mount: Deactivated successfully. Sep 13 00:11:36.450095 kubelet[2755]: I0913 00:11:36.449975 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:11:36.589960 containerd[1620]: time="2025-09-13T00:11:36.589085257Z" level=info msg="StopContainer for \"37499d1400250dc25ddef87b16c9cc8e7c581e77791b6dc634865712f31e7e98\" with timeout 30 (s)" Sep 13 00:11:36.591846 containerd[1620]: time="2025-09-13T00:11:36.591583823Z" level=info msg="Stop container \"37499d1400250dc25ddef87b16c9cc8e7c581e77791b6dc634865712f31e7e98\" with signal terminated" Sep 13 00:11:36.684504 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37499d1400250dc25ddef87b16c9cc8e7c581e77791b6dc634865712f31e7e98-rootfs.mount: Deactivated successfully. 
Sep 13 00:11:36.710488 containerd[1620]: time="2025-09-13T00:11:36.687912162Z" level=info msg="shim disconnected" id=37499d1400250dc25ddef87b16c9cc8e7c581e77791b6dc634865712f31e7e98 namespace=k8s.io Sep 13 00:11:36.718385 containerd[1620]: time="2025-09-13T00:11:36.718334412Z" level=warning msg="cleaning up after shim disconnected" id=37499d1400250dc25ddef87b16c9cc8e7c581e77791b6dc634865712f31e7e98 namespace=k8s.io Sep 13 00:11:36.718385 containerd[1620]: time="2025-09-13T00:11:36.718372684Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:11:36.755893 containerd[1620]: time="2025-09-13T00:11:36.755851661Z" level=info msg="StopContainer for \"37499d1400250dc25ddef87b16c9cc8e7c581e77791b6dc634865712f31e7e98\" returns successfully" Sep 13 00:11:36.756249 containerd[1620]: time="2025-09-13T00:11:36.756215019Z" level=info msg="StopPodSandbox for \"0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e\"" Sep 13 00:11:36.759044 containerd[1620]: time="2025-09-13T00:11:36.759012794Z" level=info msg="Container to stop \"37499d1400250dc25ddef87b16c9cc8e7c581e77791b6dc634865712f31e7e98\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:11:36.762617 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e-shm.mount: Deactivated successfully. Sep 13 00:11:36.786392 containerd[1620]: time="2025-09-13T00:11:36.786077954Z" level=info msg="shim disconnected" id=0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e namespace=k8s.io Sep 13 00:11:36.786392 containerd[1620]: time="2025-09-13T00:11:36.786159697Z" level=warning msg="cleaning up after shim disconnected" id=0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e namespace=k8s.io Sep 13 00:11:36.786392 containerd[1620]: time="2025-09-13T00:11:36.786173402Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:11:36.789472 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e-rootfs.mount: Deactivated successfully. Sep 13 00:11:36.863561 systemd-networkd[1257]: cali36be0794c39: Link DOWN Sep 13 00:11:36.863568 systemd-networkd[1257]: cali36be0794c39: Lost carrier Sep 13 00:11:36.965474 containerd[1620]: 2025-09-13 00:11:36.859 [INFO][6468] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" Sep 13 00:11:36.965474 containerd[1620]: 2025-09-13 00:11:36.860 [INFO][6468] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" iface="eth0" netns="/var/run/netns/cni-34932d59-9329-986f-fe43-f9d28f2f964d" Sep 13 00:11:36.965474 containerd[1620]: 2025-09-13 00:11:36.861 [INFO][6468] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" iface="eth0" netns="/var/run/netns/cni-34932d59-9329-986f-fe43-f9d28f2f964d" Sep 13 00:11:36.965474 containerd[1620]: 2025-09-13 00:11:36.880 [INFO][6468] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" after=20.327931ms iface="eth0" netns="/var/run/netns/cni-34932d59-9329-986f-fe43-f9d28f2f964d" Sep 13 00:11:36.965474 containerd[1620]: 2025-09-13 00:11:36.880 [INFO][6468] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" Sep 13 00:11:36.965474 containerd[1620]: 2025-09-13 00:11:36.880 [INFO][6468] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" Sep 13 00:11:36.965474 containerd[1620]: 2025-09-13 00:11:36.913 [INFO][6478] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" HandleID="k8s-pod-network.0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0" Sep 13 00:11:36.965474 containerd[1620]: 2025-09-13 00:11:36.913 [INFO][6478] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:36.965474 containerd[1620]: 2025-09-13 00:11:36.914 [INFO][6478] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:36.965474 containerd[1620]: 2025-09-13 00:11:36.957 [INFO][6478] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" HandleID="k8s-pod-network.0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0" Sep 13 00:11:36.965474 containerd[1620]: 2025-09-13 00:11:36.958 [INFO][6478] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" HandleID="k8s-pod-network.0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0" Sep 13 00:11:36.965474 containerd[1620]: 2025-09-13 00:11:36.959 [INFO][6478] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:36.965474 containerd[1620]: 2025-09-13 00:11:36.962 [INFO][6468] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" Sep 13 00:11:36.968136 containerd[1620]: time="2025-09-13T00:11:36.966457639Z" level=info msg="TearDown network for sandbox \"0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e\" successfully" Sep 13 00:11:36.968136 containerd[1620]: time="2025-09-13T00:11:36.966493757Z" level=info msg="StopPodSandbox for \"0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e\" returns successfully" Sep 13 00:11:36.971220 systemd[1]: run-netns-cni\x2d34932d59\x2d9329\x2d986f\x2dfe43\x2df9d28f2f964d.mount: Deactivated successfully. 
Sep 13 00:11:37.067347 kubelet[2755]: I0913 00:11:37.066953 2755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e1724762-f3a5-4a7f-9c75-353a81a041e5-calico-apiserver-certs\") pod \"e1724762-f3a5-4a7f-9c75-353a81a041e5\" (UID: \"e1724762-f3a5-4a7f-9c75-353a81a041e5\") "
Sep 13 00:11:37.067347 kubelet[2755]: I0913 00:11:37.067033 2755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w9fb\" (UniqueName: \"kubernetes.io/projected/e1724762-f3a5-4a7f-9c75-353a81a041e5-kube-api-access-7w9fb\") pod \"e1724762-f3a5-4a7f-9c75-353a81a041e5\" (UID: \"e1724762-f3a5-4a7f-9c75-353a81a041e5\") "
Sep 13 00:11:37.092893 kubelet[2755]: I0913 00:11:37.092843 2755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1724762-f3a5-4a7f-9c75-353a81a041e5-kube-api-access-7w9fb" (OuterVolumeSpecName: "kube-api-access-7w9fb") pod "e1724762-f3a5-4a7f-9c75-353a81a041e5" (UID: "e1724762-f3a5-4a7f-9c75-353a81a041e5"). InnerVolumeSpecName "kube-api-access-7w9fb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 00:11:37.092999 kubelet[2755]: I0913 00:11:37.092965 2755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1724762-f3a5-4a7f-9c75-353a81a041e5-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "e1724762-f3a5-4a7f-9c75-353a81a041e5" (UID: "e1724762-f3a5-4a7f-9c75-353a81a041e5"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 13 00:11:37.098356 systemd[1]: var-lib-kubelet-pods-e1724762\x2df3a5\x2d4a7f\x2d9c75\x2d353a81a041e5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7w9fb.mount: Deactivated successfully.
Sep 13 00:11:37.098510 systemd[1]: var-lib-kubelet-pods-e1724762\x2df3a5\x2d4a7f\x2d9c75\x2d353a81a041e5-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Sep 13 00:11:37.103594 kubelet[2755]: I0913 00:11:37.103455 2755 scope.go:117] "RemoveContainer" containerID="37499d1400250dc25ddef87b16c9cc8e7c581e77791b6dc634865712f31e7e98"
Sep 13 00:11:37.122731 containerd[1620]: time="2025-09-13T00:11:37.122687480Z" level=info msg="RemoveContainer for \"37499d1400250dc25ddef87b16c9cc8e7c581e77791b6dc634865712f31e7e98\""
Sep 13 00:11:37.127521 containerd[1620]: time="2025-09-13T00:11:37.126990367Z" level=info msg="RemoveContainer for \"37499d1400250dc25ddef87b16c9cc8e7c581e77791b6dc634865712f31e7e98\" returns successfully"
Sep 13 00:11:37.127809 kubelet[2755]: I0913 00:11:37.127790 2755 scope.go:117] "RemoveContainer" containerID="37499d1400250dc25ddef87b16c9cc8e7c581e77791b6dc634865712f31e7e98"
Sep 13 00:11:37.135402 kubelet[2755]: I0913 00:11:37.135373 2755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1724762-f3a5-4a7f-9c75-353a81a041e5" path="/var/lib/kubelet/pods/e1724762-f3a5-4a7f-9c75-353a81a041e5/volumes"
Sep 13 00:11:37.141476 containerd[1620]: time="2025-09-13T00:11:37.135106333Z" level=error msg="ContainerStatus for \"37499d1400250dc25ddef87b16c9cc8e7c581e77791b6dc634865712f31e7e98\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"37499d1400250dc25ddef87b16c9cc8e7c581e77791b6dc634865712f31e7e98\": not found"
Sep 13 00:11:37.162031 kubelet[2755]: E0913 00:11:37.161994 2755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"37499d1400250dc25ddef87b16c9cc8e7c581e77791b6dc634865712f31e7e98\": not found" containerID="37499d1400250dc25ddef87b16c9cc8e7c581e77791b6dc634865712f31e7e98"
Sep 13 00:11:37.162282 kubelet[2755]: I0913 00:11:37.162038 2755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"37499d1400250dc25ddef87b16c9cc8e7c581e77791b6dc634865712f31e7e98"} err="failed to get container status \"37499d1400250dc25ddef87b16c9cc8e7c581e77791b6dc634865712f31e7e98\": rpc error: code = NotFound desc = an error occurred when try to find container \"37499d1400250dc25ddef87b16c9cc8e7c581e77791b6dc634865712f31e7e98\": not found"
Sep 13 00:11:37.167668 kubelet[2755]: I0913 00:11:37.167386 2755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w9fb\" (UniqueName: \"kubernetes.io/projected/e1724762-f3a5-4a7f-9c75-353a81a041e5-kube-api-access-7w9fb\") on node \"ci-4081-3-5-n-662926fb9e\" DevicePath \"\""
Sep 13 00:11:37.167668 kubelet[2755]: I0913 00:11:37.167407 2755 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e1724762-f3a5-4a7f-9c75-353a81a041e5-calico-apiserver-certs\") on node \"ci-4081-3-5-n-662926fb9e\" DevicePath \"\""
Sep 13 00:11:38.865590 systemd-journald[1174]: Under memory pressure, flushing caches.
Sep 13 00:11:38.863437 systemd-resolved[1510]: Under memory pressure, flushing caches.
Sep 13 00:11:38.863447 systemd-resolved[1510]: Flushed all caches.
Sep 13 00:12:04.713228 systemd[1]: run-containerd-runc-k8s.io-7bf40bee7b6bfbc7575e9d039a7914541a59826838e2e70e0f3dbad4e2bf4bb6-runc.U1k2bB.mount: Deactivated successfully.
Sep 13 00:12:10.082904 systemd[1]: run-containerd-runc-k8s.io-7b378223794c6edb5e476fac8d0f2967ad4236b7681e1393c036afafb349a8cf-runc.Kqws3L.mount: Deactivated successfully.
Sep 13 00:12:26.999927 containerd[1620]: time="2025-09-13T00:12:26.994071630Z" level=info msg="StopPodSandbox for \"0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e\""
Sep 13 00:12:27.372871 containerd[1620]: 2025-09-13 00:12:27.175 [WARNING][6632] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0"
Sep 13 00:12:27.372871 containerd[1620]: 2025-09-13 00:12:27.176 [INFO][6632] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e"
Sep 13 00:12:27.372871 containerd[1620]: 2025-09-13 00:12:27.176 [INFO][6632] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" iface="eth0" netns=""
Sep 13 00:12:27.372871 containerd[1620]: 2025-09-13 00:12:27.176 [INFO][6632] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e"
Sep 13 00:12:27.372871 containerd[1620]: 2025-09-13 00:12:27.176 [INFO][6632] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e"
Sep 13 00:12:27.372871 containerd[1620]: 2025-09-13 00:12:27.347 [INFO][6639] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" HandleID="k8s-pod-network.0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0"
Sep 13 00:12:27.372871 containerd[1620]: 2025-09-13 00:12:27.350 [INFO][6639] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:12:27.372871 containerd[1620]: 2025-09-13 00:12:27.350 [INFO][6639] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:12:27.372871 containerd[1620]: 2025-09-13 00:12:27.366 [WARNING][6639] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" HandleID="k8s-pod-network.0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0"
Sep 13 00:12:27.372871 containerd[1620]: 2025-09-13 00:12:27.366 [INFO][6639] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" HandleID="k8s-pod-network.0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0"
Sep 13 00:12:27.372871 containerd[1620]: 2025-09-13 00:12:27.368 [INFO][6639] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:12:27.372871 containerd[1620]: 2025-09-13 00:12:27.370 [INFO][6632] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e"
Sep 13 00:12:27.376827 containerd[1620]: time="2025-09-13T00:12:27.376775759Z" level=info msg="TearDown network for sandbox \"0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e\" successfully"
Sep 13 00:12:27.376901 containerd[1620]: time="2025-09-13T00:12:27.376826034Z" level=info msg="StopPodSandbox for \"0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e\" returns successfully"
Sep 13 00:12:27.390163 containerd[1620]: time="2025-09-13T00:12:27.389883009Z" level=info msg="RemovePodSandbox for \"0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e\""
Sep 13 00:12:27.397092 containerd[1620]: time="2025-09-13T00:12:27.397044266Z" level=info msg="Forcibly stopping sandbox \"0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e\""
Sep 13 00:12:27.467390 containerd[1620]: 2025-09-13 00:12:27.432 [WARNING][6654] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" WorkloadEndpoint="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0"
Sep 13 00:12:27.467390 containerd[1620]: 2025-09-13 00:12:27.433 [INFO][6654] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e"
Sep 13 00:12:27.467390 containerd[1620]: 2025-09-13 00:12:27.433 [INFO][6654] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" iface="eth0" netns=""
Sep 13 00:12:27.467390 containerd[1620]: 2025-09-13 00:12:27.433 [INFO][6654] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e"
Sep 13 00:12:27.467390 containerd[1620]: 2025-09-13 00:12:27.433 [INFO][6654] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e"
Sep 13 00:12:27.467390 containerd[1620]: 2025-09-13 00:12:27.452 [INFO][6661] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" HandleID="k8s-pod-network.0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0"
Sep 13 00:12:27.467390 containerd[1620]: 2025-09-13 00:12:27.453 [INFO][6661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:12:27.467390 containerd[1620]: 2025-09-13 00:12:27.453 [INFO][6661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:12:27.467390 containerd[1620]: 2025-09-13 00:12:27.459 [WARNING][6661] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" HandleID="k8s-pod-network.0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0"
Sep 13 00:12:27.467390 containerd[1620]: 2025-09-13 00:12:27.459 [INFO][6661] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" HandleID="k8s-pod-network.0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e" Workload="ci--4081--3--5--n--662926fb9e-k8s-calico--apiserver--bcbcd6df9--pltxz-eth0"
Sep 13 00:12:27.467390 containerd[1620]: 2025-09-13 00:12:27.461 [INFO][6661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:12:27.467390 containerd[1620]: 2025-09-13 00:12:27.464 [INFO][6654] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e"
Sep 13 00:12:27.469670 containerd[1620]: time="2025-09-13T00:12:27.467454739Z" level=info msg="TearDown network for sandbox \"0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e\" successfully"
Sep 13 00:12:27.539176 containerd[1620]: time="2025-09-13T00:12:27.539100886Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 13 00:12:27.539827 containerd[1620]: time="2025-09-13T00:12:27.539197016Z" level=info msg="RemovePodSandbox \"0a07597a071877e425ec11909fbb6219bb4d84f9ef6c49ac036a5d68ed3f328e\" returns successfully"
Sep 13 00:13:00.630732 systemd[1]: Started sshd@8-65.21.60.153:22-147.75.109.163:41710.service - OpenSSH per-connection server daemon (147.75.109.163:41710).
Sep 13 00:13:01.692448 sshd[6775]: Accepted publickey for core from 147.75.109.163 port 41710 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY
Sep 13 00:13:01.695460 sshd[6775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:01.714767 systemd-logind[1604]: New session 8 of user core.
Sep 13 00:13:01.725549 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 13 00:13:02.961600 systemd-journald[1174]: Under memory pressure, flushing caches.
Sep 13 00:13:02.966124 systemd-resolved[1510]: Under memory pressure, flushing caches.
Sep 13 00:13:02.966132 systemd-resolved[1510]: Flushed all caches.
Sep 13 00:13:03.087234 sshd[6775]: pam_unix(sshd:session): session closed for user core
Sep 13 00:13:03.100091 systemd[1]: sshd@8-65.21.60.153:22-147.75.109.163:41710.service: Deactivated successfully.
Sep 13 00:13:03.111334 systemd[1]: session-8.scope: Deactivated successfully.
Sep 13 00:13:03.112066 systemd-logind[1604]: Session 8 logged out. Waiting for processes to exit.
Sep 13 00:13:03.115335 systemd-logind[1604]: Removed session 8.
Sep 13 00:13:08.253530 systemd[1]: Started sshd@9-65.21.60.153:22-147.75.109.163:41712.service - OpenSSH per-connection server daemon (147.75.109.163:41712).
Sep 13 00:13:09.267183 sshd[6833]: Accepted publickey for core from 147.75.109.163 port 41712 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY
Sep 13 00:13:09.268969 sshd[6833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:09.273974 systemd-logind[1604]: New session 9 of user core.
Sep 13 00:13:09.277589 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 13 00:13:10.267034 sshd[6833]: pam_unix(sshd:session): session closed for user core
Sep 13 00:13:10.271820 systemd[1]: sshd@9-65.21.60.153:22-147.75.109.163:41712.service: Deactivated successfully.
Sep 13 00:13:10.272654 systemd-logind[1604]: Session 9 logged out. Waiting for processes to exit.
Sep 13 00:13:10.274826 systemd[1]: session-9.scope: Deactivated successfully.
Sep 13 00:13:10.276958 systemd-logind[1604]: Removed session 9.
Sep 13 00:13:10.432677 systemd[1]: Started sshd@10-65.21.60.153:22-147.75.109.163:53050.service - OpenSSH per-connection server daemon (147.75.109.163:53050).
Sep 13 00:13:11.401336 sshd[6868]: Accepted publickey for core from 147.75.109.163 port 53050 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY
Sep 13 00:13:11.402858 sshd[6868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:11.407392 systemd-logind[1604]: New session 10 of user core.
Sep 13 00:13:11.412724 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 13 00:13:12.199335 sshd[6868]: pam_unix(sshd:session): session closed for user core
Sep 13 00:13:12.208621 systemd[1]: sshd@10-65.21.60.153:22-147.75.109.163:53050.service: Deactivated successfully.
Sep 13 00:13:12.212929 systemd[1]: session-10.scope: Deactivated successfully.
Sep 13 00:13:12.213837 systemd-logind[1604]: Session 10 logged out. Waiting for processes to exit.
Sep 13 00:13:12.217951 systemd-logind[1604]: Removed session 10.
Sep 13 00:13:12.360553 systemd[1]: Started sshd@11-65.21.60.153:22-147.75.109.163:53064.service - OpenSSH per-connection server daemon (147.75.109.163:53064).
Sep 13 00:13:13.332680 sshd[6880]: Accepted publickey for core from 147.75.109.163 port 53064 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY
Sep 13 00:13:13.334451 sshd[6880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:13.339364 systemd-logind[1604]: New session 11 of user core.
Sep 13 00:13:13.343569 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 13 00:13:14.100667 sshd[6880]: pam_unix(sshd:session): session closed for user core
Sep 13 00:13:14.106580 systemd[1]: sshd@11-65.21.60.153:22-147.75.109.163:53064.service: Deactivated successfully.
Sep 13 00:13:14.115485 systemd-logind[1604]: Session 11 logged out. Waiting for processes to exit.
Sep 13 00:13:14.116170 systemd[1]: session-11.scope: Deactivated successfully.
Sep 13 00:13:14.118536 systemd-logind[1604]: Removed session 11.
Sep 13 00:13:17.851546 systemd[1]: Started sshd@12-65.21.60.153:22-181.188.159.138:53766.service - OpenSSH per-connection server daemon (181.188.159.138:53766).
Sep 13 00:13:19.266072 systemd[1]: Started sshd@13-65.21.60.153:22-147.75.109.163:53080.service - OpenSSH per-connection server daemon (147.75.109.163:53080).
Sep 13 00:13:19.402696 sshd[6899]: Received disconnect from 181.188.159.138 port 53766:11: Bye Bye [preauth]
Sep 13 00:13:19.402696 sshd[6899]: Disconnected from authenticating user root 181.188.159.138 port 53766 [preauth]
Sep 13 00:13:19.404725 systemd[1]: sshd@12-65.21.60.153:22-181.188.159.138:53766.service: Deactivated successfully.
Sep 13 00:13:20.243796 sshd[6901]: Accepted publickey for core from 147.75.109.163 port 53080 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY
Sep 13 00:13:20.245619 sshd[6901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:20.251964 systemd-logind[1604]: New session 12 of user core.
Sep 13 00:13:20.261550 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 13 00:13:21.087153 sshd[6901]: pam_unix(sshd:session): session closed for user core
Sep 13 00:13:21.094266 systemd[1]: sshd@13-65.21.60.153:22-147.75.109.163:53080.service: Deactivated successfully.
Sep 13 00:13:21.098300 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 00:13:21.098431 systemd-logind[1604]: Session 12 logged out. Waiting for processes to exit.
Sep 13 00:13:21.100594 systemd-logind[1604]: Removed session 12.
Sep 13 00:13:26.246583 systemd[1]: Started sshd@14-65.21.60.153:22-147.75.109.163:35072.service - OpenSSH per-connection server daemon (147.75.109.163:35072).
Sep 13 00:13:27.247361 sshd[6941]: Accepted publickey for core from 147.75.109.163 port 35072 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY
Sep 13 00:13:27.250540 sshd[6941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:27.256547 systemd-logind[1604]: New session 13 of user core.
Sep 13 00:13:27.259588 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 13 00:13:28.009500 sshd[6941]: pam_unix(sshd:session): session closed for user core
Sep 13 00:13:28.014855 systemd[1]: sshd@14-65.21.60.153:22-147.75.109.163:35072.service: Deactivated successfully.
Sep 13 00:13:28.025670 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 00:13:28.027606 systemd-logind[1604]: Session 13 logged out. Waiting for processes to exit.
Sep 13 00:13:28.030595 systemd-logind[1604]: Removed session 13.
Sep 13 00:13:33.050361 systemd[1]: run-containerd-runc-k8s.io-d41126440da7ae9dcdc487e20307e55ec06e3f097176feae3c3eff6f796b5650-runc.tmOPsZ.mount: Deactivated successfully.
Sep 13 00:13:33.169958 systemd[1]: Started sshd@15-65.21.60.153:22-147.75.109.163:35220.service - OpenSSH per-connection server daemon (147.75.109.163:35220).
Sep 13 00:13:34.167332 sshd[6976]: Accepted publickey for core from 147.75.109.163 port 35220 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY
Sep 13 00:13:34.167954 sshd[6976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:34.178537 systemd-logind[1604]: New session 14 of user core.
Sep 13 00:13:34.184761 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 13 00:13:35.275120 systemd[1]: run-containerd-runc-k8s.io-d41126440da7ae9dcdc487e20307e55ec06e3f097176feae3c3eff6f796b5650-runc.K2jKA9.mount: Deactivated successfully.
Sep 13 00:13:35.368903 sshd[6976]: pam_unix(sshd:session): session closed for user core
Sep 13 00:13:35.390819 systemd[1]: sshd@15-65.21.60.153:22-147.75.109.163:35220.service: Deactivated successfully.
Sep 13 00:13:35.400168 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 00:13:35.400443 systemd-logind[1604]: Session 14 logged out. Waiting for processes to exit.
Sep 13 00:13:35.402722 systemd-logind[1604]: Removed session 14.
Sep 13 00:13:35.522553 systemd[1]: Started sshd@16-65.21.60.153:22-147.75.109.163:35236.service - OpenSSH per-connection server daemon (147.75.109.163:35236).
Sep 13 00:13:36.486350 sshd[7032]: Accepted publickey for core from 147.75.109.163 port 35236 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY
Sep 13 00:13:36.487980 sshd[7032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:36.493560 systemd-logind[1604]: New session 15 of user core.
Sep 13 00:13:36.498534 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 13 00:13:36.945640 systemd-journald[1174]: Under memory pressure, flushing caches.
Sep 13 00:13:36.943590 systemd-resolved[1510]: Under memory pressure, flushing caches.
Sep 13 00:13:36.943598 systemd-resolved[1510]: Flushed all caches.
Sep 13 00:13:37.432465 sshd[7032]: pam_unix(sshd:session): session closed for user core
Sep 13 00:13:37.438225 systemd[1]: sshd@16-65.21.60.153:22-147.75.109.163:35236.service: Deactivated successfully.
Sep 13 00:13:37.443475 systemd-logind[1604]: Session 15 logged out. Waiting for processes to exit.
Sep 13 00:13:37.444780 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 00:13:37.447049 systemd-logind[1604]: Removed session 15.
Sep 13 00:13:37.628664 systemd[1]: Started sshd@17-65.21.60.153:22-147.75.109.163:35250.service - OpenSSH per-connection server daemon (147.75.109.163:35250).
Sep 13 00:13:38.727763 sshd[7043]: Accepted publickey for core from 147.75.109.163 port 35250 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY
Sep 13 00:13:38.729621 sshd[7043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:38.735023 systemd-logind[1604]: New session 16 of user core.
Sep 13 00:13:38.740573 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 13 00:13:40.978771 systemd-journald[1174]: Under memory pressure, flushing caches.
Sep 13 00:13:40.975416 systemd-resolved[1510]: Under memory pressure, flushing caches.
Sep 13 00:13:40.975423 systemd-resolved[1510]: Flushed all caches.
Sep 13 00:13:41.031662 sshd[7043]: pam_unix(sshd:session): session closed for user core
Sep 13 00:13:41.055875 systemd[1]: sshd@17-65.21.60.153:22-147.75.109.163:35250.service: Deactivated successfully.
Sep 13 00:13:41.060909 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 00:13:41.061148 systemd-logind[1604]: Session 16 logged out. Waiting for processes to exit.
Sep 13 00:13:41.063033 systemd-logind[1604]: Removed session 16.
Sep 13 00:13:41.207533 systemd[1]: Started sshd@18-65.21.60.153:22-147.75.109.163:42118.service - OpenSSH per-connection server daemon (147.75.109.163:42118).
Sep 13 00:13:42.281969 sshd[7091]: Accepted publickey for core from 147.75.109.163 port 42118 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY
Sep 13 00:13:42.283518 sshd[7091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:42.290651 systemd-logind[1604]: New session 17 of user core.
Sep 13 00:13:42.294858 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 13 00:13:43.025631 systemd-journald[1174]: Under memory pressure, flushing caches.
Sep 13 00:13:43.025378 systemd-resolved[1510]: Under memory pressure, flushing caches.
Sep 13 00:13:43.025384 systemd-resolved[1510]: Flushed all caches.
Sep 13 00:13:43.753912 sshd[7091]: pam_unix(sshd:session): session closed for user core
Sep 13 00:13:43.764589 systemd[1]: sshd@18-65.21.60.153:22-147.75.109.163:42118.service: Deactivated successfully.
Sep 13 00:13:43.770555 systemd-logind[1604]: Session 17 logged out. Waiting for processes to exit.
Sep 13 00:13:43.771210 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 00:13:43.772561 systemd-logind[1604]: Removed session 17.
Sep 13 00:13:43.935532 systemd[1]: Started sshd@19-65.21.60.153:22-147.75.109.163:42134.service - OpenSSH per-connection server daemon (147.75.109.163:42134).
Sep 13 00:13:45.015547 sshd[7103]: Accepted publickey for core from 147.75.109.163 port 42134 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY
Sep 13 00:13:45.016582 sshd[7103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:45.021153 systemd-logind[1604]: New session 18 of user core.
Sep 13 00:13:45.024597 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 13 00:13:46.090918 sshd[7103]: pam_unix(sshd:session): session closed for user core
Sep 13 00:13:46.101640 systemd[1]: sshd@19-65.21.60.153:22-147.75.109.163:42134.service: Deactivated successfully.
Sep 13 00:13:46.104970 systemd-logind[1604]: Session 18 logged out. Waiting for processes to exit.
Sep 13 00:13:46.105271 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 00:13:46.106874 systemd-logind[1604]: Removed session 18.
Sep 13 00:13:51.233532 systemd[1]: Started sshd@20-65.21.60.153:22-147.75.109.163:44896.service - OpenSSH per-connection server daemon (147.75.109.163:44896).
Sep 13 00:13:52.263146 sshd[7125]: Accepted publickey for core from 147.75.109.163 port 44896 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY
Sep 13 00:13:52.265440 sshd[7125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:52.270580 systemd-logind[1604]: New session 19 of user core.
Sep 13 00:13:52.275592 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 13 00:13:53.206674 sshd[7125]: pam_unix(sshd:session): session closed for user core
Sep 13 00:13:53.212684 systemd[1]: sshd@20-65.21.60.153:22-147.75.109.163:44896.service: Deactivated successfully.
Sep 13 00:13:53.214560 systemd-logind[1604]: Session 19 logged out. Waiting for processes to exit.
Sep 13 00:13:53.219556 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 00:13:53.224143 systemd-logind[1604]: Removed session 19.
Sep 13 00:13:58.368890 systemd[1]: Started sshd@21-65.21.60.153:22-147.75.109.163:44912.service - OpenSSH per-connection server daemon (147.75.109.163:44912).
Sep 13 00:13:59.355639 sshd[7139]: Accepted publickey for core from 147.75.109.163 port 44912 ssh2: RSA SHA256:GymMDYnosJimc4ujfdMuxEHSH4lnFIHEzFRMhgLPZDY
Sep 13 00:13:59.357059 sshd[7139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:59.361710 systemd-logind[1604]: New session 20 of user core.
Sep 13 00:13:59.368570 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 13 00:14:00.204465 sshd[7139]: pam_unix(sshd:session): session closed for user core
Sep 13 00:14:00.210343 systemd[1]: sshd@21-65.21.60.153:22-147.75.109.163:44912.service: Deactivated successfully.
Sep 13 00:14:00.214382 systemd-logind[1604]: Session 20 logged out. Waiting for processes to exit.
Sep 13 00:14:00.214970 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 00:14:00.222404 systemd-logind[1604]: Removed session 20.