Jul 7 00:11:28.845732 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025 Jul 7 00:11:28.845754 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 7 00:11:28.845761 kernel: BIOS-provided physical RAM map: Jul 7 00:11:28.845766 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jul 7 00:11:28.845770 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jul 7 00:11:28.845775 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jul 7 00:11:28.845780 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable Jul 7 00:11:28.845785 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved Jul 7 00:11:28.845791 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jul 7 00:11:28.845796 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jul 7 00:11:28.845800 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 7 00:11:28.845805 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jul 7 00:11:28.845809 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 7 00:11:28.845814 kernel: NX (Execute Disable) protection: active Jul 7 00:11:28.845821 kernel: APIC: Static calls initialized Jul 7 00:11:28.845826 kernel: SMBIOS 3.0.0 present. 
Jul 7 00:11:28.845831 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 Jul 7 00:11:28.845836 kernel: Hypervisor detected: KVM Jul 7 00:11:28.845841 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 7 00:11:28.845846 kernel: kvm-clock: using sched offset of 3023627721 cycles Jul 7 00:11:28.845851 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 7 00:11:28.845857 kernel: tsc: Detected 2445.406 MHz processor Jul 7 00:11:28.845862 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 7 00:11:28.845869 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 7 00:11:28.845874 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000 Jul 7 00:11:28.845879 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jul 7 00:11:28.845884 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 7 00:11:28.845889 kernel: Using GB pages for direct mapping Jul 7 00:11:28.845894 kernel: ACPI: Early table checksum verification disabled Jul 7 00:11:28.845899 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS ) Jul 7 00:11:28.845905 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 00:11:28.845910 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 00:11:28.845916 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 00:11:28.845922 kernel: ACPI: FACS 0x000000007CFE0000 000040 Jul 7 00:11:28.845927 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 00:11:28.845931 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 00:11:28.845937 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 00:11:28.845942 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 00:11:28.845947 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576] Jul 7 00:11:28.845952 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482] Jul 7 00:11:28.845961 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f] Jul 7 00:11:28.845966 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6] Jul 7 00:11:28.845971 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e] Jul 7 00:11:28.845977 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a] Jul 7 00:11:28.845982 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692] Jul 7 00:11:28.845987 kernel: No NUMA configuration found Jul 7 00:11:28.845993 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff] Jul 7 00:11:28.845999 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff] Jul 7 00:11:28.846005 kernel: Zone ranges: Jul 7 00:11:28.846010 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 7 00:11:28.846015 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff] Jul 7 00:11:28.846021 kernel: Normal empty Jul 7 00:11:28.846026 kernel: Movable zone start for each node Jul 7 00:11:28.846031 kernel: Early memory node ranges Jul 7 00:11:28.846036 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jul 7 00:11:28.846041 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff] Jul 7 00:11:28.846048 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff] Jul 7 00:11:28.846053 kernel: On node 
0, zone DMA: 1 pages in unavailable ranges Jul 7 00:11:28.846058 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 7 00:11:28.846064 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jul 7 00:11:28.846069 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 7 00:11:28.846074 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 7 00:11:28.846079 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 7 00:11:28.846085 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 7 00:11:28.846090 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 7 00:11:28.846096 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 7 00:11:28.846102 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 7 00:11:28.846107 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 7 00:11:28.846112 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 7 00:11:28.846117 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 7 00:11:28.846122 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jul 7 00:11:28.846128 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 7 00:11:28.846133 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jul 7 00:11:28.846158 kernel: Booting paravirtualized kernel on KVM Jul 7 00:11:28.846167 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 7 00:11:28.846172 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jul 7 00:11:28.846177 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 Jul 7 00:11:28.846183 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Jul 7 00:11:28.846188 kernel: pcpu-alloc: [0] 0 1 Jul 7 00:11:28.846193 kernel: kvm-guest: PV spinlocks disabled, no host support Jul 7 00:11:28.846200 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 7 00:11:28.846206 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 7 00:11:28.846212 kernel: random: crng init done Jul 7 00:11:28.846218 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 00:11:28.846223 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 7 00:11:28.846228 kernel: Fallback order for Node 0: 0 Jul 7 00:11:28.846234 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 503708 Jul 7 00:11:28.846239 kernel: Policy zone: DMA32 Jul 7 00:11:28.846244 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 7 00:11:28.846250 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 125152K reserved, 0K cma-reserved) Jul 7 00:11:28.846255 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 7 00:11:28.846262 kernel: ftrace: allocating 37966 entries in 149 pages Jul 7 00:11:28.846267 kernel: ftrace: allocated 149 pages with 4 groups Jul 7 00:11:28.846272 kernel: Dynamic Preempt: voluntary Jul 7 00:11:28.846277 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 7 00:11:28.846283 kernel: rcu: RCU event tracing is enabled. Jul 7 00:11:28.846289 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 7 00:11:28.846294 kernel: Trampoline variant of Tasks RCU enabled. Jul 7 00:11:28.846300 kernel: Rude variant of Tasks RCU enabled. Jul 7 00:11:28.846305 kernel: Tracing variant of Tasks RCU enabled. Jul 7 00:11:28.846310 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 7 00:11:28.846317 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 7 00:11:28.846322 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jul 7 00:11:28.846327 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 7 00:11:28.846333 kernel: Console: colour VGA+ 80x25 Jul 7 00:11:28.846338 kernel: printk: console [tty0] enabled Jul 7 00:11:28.846343 kernel: printk: console [ttyS0] enabled Jul 7 00:11:28.846349 kernel: ACPI: Core revision 20230628 Jul 7 00:11:28.846354 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 7 00:11:28.846359 kernel: APIC: Switch to symmetric I/O mode setup Jul 7 00:11:28.846366 kernel: x2apic enabled Jul 7 00:11:28.846371 kernel: APIC: Switched APIC routing to: physical x2apic Jul 7 00:11:28.846376 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 7 00:11:28.846381 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jul 7 00:11:28.846387 kernel: Calibrating delay loop (skipped) preset value.. 
4890.81 BogoMIPS (lpj=2445406) Jul 7 00:11:28.846392 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 7 00:11:28.846397 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 7 00:11:28.846403 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 7 00:11:28.846413 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 7 00:11:28.846419 kernel: Spectre V2 : Mitigation: Retpolines Jul 7 00:11:28.846425 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 7 00:11:28.846430 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 7 00:11:28.846437 kernel: RETBleed: Mitigation: untrained return thunk Jul 7 00:11:28.846443 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 7 00:11:28.846449 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 7 00:11:28.846454 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 7 00:11:28.846460 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 7 00:11:28.846478 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 7 00:11:28.846483 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 7 00:11:28.846489 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 7 00:11:28.846495 kernel: Freeing SMP alternatives memory: 32K Jul 7 00:11:28.846500 kernel: pid_max: default: 32768 minimum: 301 Jul 7 00:11:28.846506 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 7 00:11:28.846512 kernel: landlock: Up and running. Jul 7 00:11:28.846517 kernel: SELinux: Initializing. Jul 7 00:11:28.846524 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 7 00:11:28.846530 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 7 00:11:28.846536 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 7 00:11:28.846541 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 00:11:28.846547 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 00:11:28.846554 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 00:11:28.846559 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 7 00:11:28.846565 kernel: ... version: 0 Jul 7 00:11:28.846570 kernel: ... bit width: 48 Jul 7 00:11:28.846577 kernel: ... generic registers: 6 Jul 7 00:11:28.846583 kernel: ... value mask: 0000ffffffffffff Jul 7 00:11:28.846589 kernel: ... max period: 00007fffffffffff Jul 7 00:11:28.846594 kernel: ... fixed-purpose events: 0 Jul 7 00:11:28.846613 kernel: ... event mask: 000000000000003f Jul 7 00:11:28.846632 kernel: signal: max sigframe size: 1776 Jul 7 00:11:28.846648 kernel: rcu: Hierarchical SRCU implementation. Jul 7 00:11:28.846663 kernel: rcu: Max phase no-delay instances is 400. Jul 7 00:11:28.846683 kernel: smp: Bringing up secondary CPUs ... Jul 7 00:11:28.846704 kernel: smpboot: x86: Booting SMP configuration: Jul 7 00:11:28.846720 kernel: .... 
node #0, CPUs: #1 Jul 7 00:11:28.846736 kernel: smp: Brought up 1 node, 2 CPUs Jul 7 00:11:28.846742 kernel: smpboot: Max logical packages: 1 Jul 7 00:11:28.846747 kernel: smpboot: Total of 2 processors activated (9781.62 BogoMIPS) Jul 7 00:11:28.846753 kernel: devtmpfs: initialized Jul 7 00:11:28.846758 kernel: x86/mm: Memory block size: 128MB Jul 7 00:11:28.846764 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 7 00:11:28.846770 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 7 00:11:28.846777 kernel: pinctrl core: initialized pinctrl subsystem Jul 7 00:11:28.846782 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 7 00:11:28.846788 kernel: audit: initializing netlink subsys (disabled) Jul 7 00:11:28.846793 kernel: audit: type=2000 audit(1751847087.840:1): state=initialized audit_enabled=0 res=1 Jul 7 00:11:28.846799 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 7 00:11:28.846805 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 7 00:11:28.846810 kernel: cpuidle: using governor menu Jul 7 00:11:28.846816 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 7 00:11:28.846822 kernel: dca service started, version 1.12.1 Jul 7 00:11:28.846827 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jul 7 00:11:28.846834 kernel: PCI: Using configuration type 1 for base access Jul 7 00:11:28.846840 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 7 00:11:28.846845 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 7 00:11:28.846851 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 7 00:11:28.846857 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 7 00:11:28.846862 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 7 00:11:28.846868 kernel: ACPI: Added _OSI(Module Device) Jul 7 00:11:28.846873 kernel: ACPI: Added _OSI(Processor Device) Jul 7 00:11:28.846890 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 7 00:11:28.846904 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 7 00:11:28.846909 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jul 7 00:11:28.846915 kernel: ACPI: Interpreter enabled Jul 7 00:11:28.846920 kernel: ACPI: PM: (supports S0 S5) Jul 7 00:11:28.846926 kernel: ACPI: Using IOAPIC for interrupt routing Jul 7 00:11:28.846932 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 7 00:11:28.846937 kernel: PCI: Using E820 reservations for host bridge windows Jul 7 00:11:28.846943 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 7 00:11:28.846950 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 7 00:11:28.847065 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 00:11:28.847630 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jul 7 00:11:28.847716 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jul 7 00:11:28.847726 kernel: PCI host bridge to bus 0000:00 Jul 7 00:11:28.847794 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 7 00:11:28.847901 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 7 00:11:28.848012 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 7 00:11:28.848070 
kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window] Jul 7 00:11:28.848124 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 7 00:11:28.848486 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jul 7 00:11:28.848550 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 7 00:11:28.848655 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jul 7 00:11:28.848738 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 Jul 7 00:11:28.848809 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref] Jul 7 00:11:28.848872 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref] Jul 7 00:11:28.848935 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff] Jul 7 00:11:28.848998 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref] Jul 7 00:11:28.849059 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 7 00:11:28.849128 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jul 7 00:11:28.849222 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff] Jul 7 00:11:28.849296 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jul 7 00:11:28.849359 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff] Jul 7 00:11:28.849426 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jul 7 00:11:28.849502 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff] Jul 7 00:11:28.849571 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jul 7 00:11:28.849638 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff] Jul 7 00:11:28.849706 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jul 7 00:11:28.849768 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff] Jul 7 00:11:28.849838 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jul 7 00:11:28.849903 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff] Jul 7 00:11:28.849973 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jul 7 00:11:28.850040 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff] Jul 7 00:11:28.850107 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jul 7 00:11:28.852183 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff] Jul 7 00:11:28.852266 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Jul 7 00:11:28.852333 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff] Jul 7 00:11:28.852404 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jul 7 00:11:28.852487 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 7 00:11:28.852557 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jul 7 00:11:28.852619 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f] Jul 7 00:11:28.852679 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff] Jul 7 00:11:28.852746 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jul 7 00:11:28.852807 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jul 7 00:11:28.852877 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Jul 7 00:11:28.852946 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff] Jul 7 00:11:28.853009 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Jul 7 00:11:28.853071 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref] Jul 7 00:11:28.853133 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jul 7 
00:11:28.853282 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Jul 7 00:11:28.853345 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Jul 7 00:11:28.853413 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Jul 7 00:11:28.853500 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit] Jul 7 00:11:28.853563 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jul 7 00:11:28.853624 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Jul 7 00:11:28.853684 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jul 7 00:11:28.853752 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Jul 7 00:11:28.853816 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff] Jul 7 00:11:28.853883 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref] Jul 7 00:11:28.853946 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jul 7 00:11:28.854005 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Jul 7 00:11:28.854066 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jul 7 00:11:28.854134 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Jul 7 00:11:28.854865 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Jul 7 00:11:28.854981 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jul 7 00:11:28.855079 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Jul 7 00:11:28.855555 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jul 7 00:11:28.855636 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Jul 7 00:11:28.855702 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff] Jul 7 00:11:28.855765 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref] Jul 7 00:11:28.855827 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jul 7 00:11:28.855888 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Jul 7 00:11:28.855947 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jul 7 00:11:28.856022 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Jul 7 00:11:28.856086 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff] Jul 7 00:11:28.856364 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref] Jul 7 00:11:28.856448 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jul 7 00:11:28.856539 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Jul 7 00:11:28.856608 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jul 7 00:11:28.856616 kernel: acpiphp: Slot [0] registered Jul 7 00:11:28.856699 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Jul 7 00:11:28.858989 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff] Jul 7 00:11:28.859061 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref] Jul 7 00:11:28.859126 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref] Jul 7 00:11:28.859209 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jul 7 00:11:28.859271 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Jul 7 00:11:28.859331 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jul 7 00:11:28.859340 kernel: acpiphp: Slot [0-2] registered Jul 7 00:11:28.859405 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jul 7 00:11:28.859474 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Jul 7 
00:11:28.859540 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jul 7 00:11:28.859549 kernel: acpiphp: Slot [0-3] registered Jul 7 00:11:28.859610 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jul 7 00:11:28.859671 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Jul 7 00:11:28.859732 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jul 7 00:11:28.859740 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 7 00:11:28.859746 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 7 00:11:28.859755 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 7 00:11:28.859761 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 7 00:11:28.859767 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 7 00:11:28.859772 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 7 00:11:28.859778 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 7 00:11:28.859784 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 7 00:11:28.859790 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 7 00:11:28.859795 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jul 7 00:11:28.859801 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 7 00:11:28.859808 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jul 7 00:11:28.859814 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 7 00:11:28.859820 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 7 00:11:28.859826 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 7 00:11:28.859831 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 7 00:11:28.859837 kernel: iommu: Default domain type: Translated Jul 7 00:11:28.859843 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 7 00:11:28.859849 kernel: PCI: Using ACPI for IRQ routing Jul 7 00:11:28.859854 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 7 00:11:28.859862 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jul 7 00:11:28.859867 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff] Jul 7 00:11:28.859930 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 7 00:11:28.859992 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 7 00:11:28.860052 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 7 00:11:28.860060 kernel: vgaarb: loaded Jul 7 00:11:28.860066 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 7 00:11:28.860072 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 7 00:11:28.860081 kernel: clocksource: Switched to clocksource kvm-clock Jul 7 00:11:28.860086 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 00:11:28.860093 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 00:11:28.860098 kernel: pnp: PnP ACPI init Jul 7 00:11:28.862204 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jul 7 00:11:28.862219 kernel: pnp: PnP ACPI: found 5 devices Jul 7 00:11:28.862226 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 7 00:11:28.862232 kernel: NET: Registered PF_INET protocol family Jul 7 00:11:28.862238 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 7 00:11:28.862248 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 7 00:11:28.862254 kernel: Table-perturb hash 
table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 00:11:28.862260 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 7 00:11:28.862266 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 7 00:11:28.862271 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 7 00:11:28.862277 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 7 00:11:28.862283 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 7 00:11:28.862289 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 00:11:28.862296 kernel: NET: Registered PF_XDP protocol family Jul 7 00:11:28.862366 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jul 7 00:11:28.862478 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jul 7 00:11:28.862548 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jul 7 00:11:28.862612 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff] Jul 7 00:11:28.862675 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff] Jul 7 00:11:28.862776 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff] Jul 7 00:11:28.862849 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jul 7 00:11:28.862937 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Jul 7 00:11:28.863006 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Jul 7 00:11:28.863069 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jul 7 00:11:28.863129 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Jul 7 00:11:28.863223 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jul 7 00:11:28.863288 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jul 7 00:11:28.863350 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Jul 7 00:11:28.863412 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jul 7 00:11:28.863491 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jul 7 00:11:28.863555 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Jul 7 00:11:28.863616 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jul 7 00:11:28.863677 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jul 7 00:11:28.863739 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Jul 7 00:11:28.863799 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jul 7 00:11:28.863866 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jul 7 00:11:28.863941 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Jul 7 00:11:28.864009 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jul 7 00:11:28.864071 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jul 7 00:11:28.864135 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Jul 7 00:11:28.864222 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Jul 7 00:11:28.864287 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jul 7 00:11:28.864349 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jul 7 00:11:28.864410 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Jul 7 00:11:28.864484 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Jul 7 00:11:28.864548 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jul 7 00:11:28.864615 kernel: pci 
0000:00:03.0: PCI bridge to [bus 09] Jul 7 00:11:28.864676 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Jul 7 00:11:28.864737 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Jul 7 00:11:28.864804 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jul 7 00:11:28.864865 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 7 00:11:28.864919 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 7 00:11:28.864974 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 7 00:11:28.865027 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window] Jul 7 00:11:28.865080 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jul 7 00:11:28.865134 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jul 7 00:11:28.866736 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff] Jul 7 00:11:28.866800 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref] Jul 7 00:11:28.866863 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff] Jul 7 00:11:28.866921 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Jul 7 00:11:28.866983 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff] Jul 7 00:11:28.867039 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jul 7 00:11:28.867105 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff] Jul 7 00:11:28.867182 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jul 7 00:11:28.867248 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff] Jul 7 00:11:28.867307 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jul 7 00:11:28.867369 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff] Jul 7 00:11:28.867426 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jul 7 00:11:28.867512 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Jul 7 00:11:28.867570 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff] Jul 7 00:11:28.867627 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jul 7 00:11:28.867688 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Jul 7 00:11:28.867744 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff] Jul 7 00:11:28.867801 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jul 7 00:11:28.867862 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Jul 7 00:11:28.867925 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff] Jul 7 00:11:28.867981 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jul 7 00:11:28.867991 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 7 00:11:28.867998 kernel: PCI: CLS 0 bytes, default 64 Jul 7 00:11:28.868004 kernel: Initialise system trusted keyrings Jul 7 00:11:28.868010 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 7 00:11:28.868016 kernel: Key type asymmetric registered Jul 7 00:11:28.868023 kernel: Asymmetric key parser 'x509' registered Jul 7 00:11:28.868031 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 7 00:11:28.868037 kernel: io scheduler mq-deadline registered Jul 7 00:11:28.868043 kernel: io scheduler kyber registered Jul 7 00:11:28.868049 kernel: io scheduler bfq registered Jul 7 00:11:28.868114 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jul 7 00:11:28.873258 kernel: pcieport 0000:00:02.0: 
AER: enabled with IRQ 24 Jul 7 00:11:28.873336 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jul 7 00:11:28.873403 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jul 7 00:11:28.873483 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jul 7 00:11:28.873555 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jul 7 00:11:28.873619 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jul 7 00:11:28.873682 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jul 7 00:11:28.873744 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jul 7 00:11:28.873804 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jul 7 00:11:28.873866 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jul 7 00:11:28.873926 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jul 7 00:11:28.873986 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jul 7 00:11:28.874054 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jul 7 00:11:28.874116 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jul 7 00:11:28.874198 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jul 7 00:11:28.874208 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 7 00:11:28.874269 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Jul 7 00:11:28.874330 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Jul 7 00:11:28.874339 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 7 00:11:28.874346 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Jul 7 00:11:28.874355 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 7 00:11:28.874362 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 7 00:11:28.874368 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 7 00:11:28.874374 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 7 00:11:28.874381 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 7 00:11:28.874387 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 7 00:11:28.874454 kernel: rtc_cmos 00:03: RTC can wake from S4 Jul 7 00:11:28.874528 kernel: rtc_cmos 00:03: registered as rtc0 Jul 7 00:11:28.874590 kernel: rtc_cmos 00:03: setting system clock to 2025-07-07T00:11:28 UTC (1751847088) Jul 7 00:11:28.874646 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jul 7 00:11:28.874654 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jul 7 00:11:28.874661 kernel: NET: Registered PF_INET6 protocol family Jul 7 00:11:28.874667 kernel: Segment Routing with IPv6 Jul 7 00:11:28.874674 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 00:11:28.874680 kernel: NET: Registered PF_PACKET protocol family Jul 7 00:11:28.874686 kernel: Key type dns_resolver registered Jul 7 00:11:28.874692 kernel: IPI shorthand broadcast: enabled Jul 7 00:11:28.874701 kernel: sched_clock: Marking stable (1082006865, 134859731)->(1223731431, -6864835) Jul 7 00:11:28.874707 kernel: registered taskstats version 1 Jul 7 00:11:28.874713 kernel: Loading compiled-in X.509 certificates Jul 7 00:11:28.874720 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b' Jul 7 00:11:28.874726 kernel: Key type .fscrypt registered Jul 7 00:11:28.874732 kernel: Key type fscrypt-provisioning registered Jul 7 00:11:28.874738 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 7 00:11:28.874744 kernel: ima: Allocated hash algorithm: sha1 Jul 7 00:11:28.874752 kernel: ima: No architecture policies found Jul 7 00:11:28.874759 kernel: clk: Disabling unused clocks Jul 7 00:11:28.874766 kernel: Freeing unused kernel image (initmem) memory: 42868K Jul 7 00:11:28.874772 kernel: Write protecting the kernel read-only data: 36864k Jul 7 00:11:28.874778 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Jul 7 00:11:28.874785 kernel: Run /init as init process Jul 7 00:11:28.874792 kernel: with arguments: Jul 7 00:11:28.874799 kernel: /init Jul 7 00:11:28.874805 kernel: with environment: Jul 7 00:11:28.874811 kernel: HOME=/ Jul 7 00:11:28.874819 kernel: TERM=linux Jul 7 00:11:28.874825 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 00:11:28.874833 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 00:11:28.874842 systemd[1]: Detected virtualization kvm. Jul 7 00:11:28.874848 systemd[1]: Detected architecture x86-64. Jul 7 00:11:28.874855 systemd[1]: Running in initrd. Jul 7 00:11:28.874861 systemd[1]: No hostname configured, using default hostname. Jul 7 00:11:28.874869 systemd[1]: Hostname set to . Jul 7 00:11:28.874875 systemd[1]: Initializing machine ID from VM UUID. Jul 7 00:11:28.874882 systemd[1]: Queued start job for default target initrd.target. Jul 7 00:11:28.874888 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:11:28.874895 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:11:28.874901 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 7 00:11:28.874908 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 00:11:28.874915 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 7 00:11:28.874923 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 7 00:11:28.874930 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 7 00:11:28.874937 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 7 00:11:28.874943 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:11:28.874950 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:11:28.874956 systemd[1]: Reached target paths.target - Path Units. Jul 7 00:11:28.874963 systemd[1]: Reached target slices.target - Slice Units. Jul 7 00:11:28.874971 systemd[1]: Reached target swap.target - Swaps. Jul 7 00:11:28.874978 systemd[1]: Reached target timers.target - Timer Units. Jul 7 00:11:28.874984 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 00:11:28.874991 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 00:11:28.874997 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 00:11:28.875003 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jul 7 00:11:28.875010 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:11:28.875017 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 00:11:28.875023 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:11:28.875031 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 00:11:28.875037 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 7 00:11:28.875044 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 00:11:28.875050 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 7 00:11:28.875056 systemd[1]: Starting systemd-fsck-usr.service... Jul 7 00:11:28.875063 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 00:11:28.875069 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 00:11:28.875076 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:11:28.875084 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 7 00:11:28.875104 systemd-journald[187]: Collecting audit messages is disabled. Jul 7 00:11:28.875122 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:11:28.875129 systemd[1]: Finished systemd-fsck-usr.service. Jul 7 00:11:28.875138 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 00:11:28.875176 systemd-journald[187]: Journal started Jul 7 00:11:28.875192 systemd-journald[187]: Runtime Journal (/run/log/journal/eaa46f796eec4546899b5a0bbb07f6e8) is 4.8M, max 38.4M, 33.6M free. Jul 7 00:11:28.846922 systemd-modules-load[188]: Inserted module 'overlay' Jul 7 00:11:28.914292 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 7 00:11:28.914315 kernel: Bridge firewalling registered Jul 7 00:11:28.914323 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 00:11:28.878403 systemd-modules-load[188]: Inserted module 'br_netfilter' Jul 7 00:11:28.915027 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 00:11:28.915841 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:11:28.916873 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 00:11:28.923275 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 00:11:28.924878 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:11:28.927268 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 00:11:28.928179 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 00:11:28.939567 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 00:11:28.947906 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 7 00:11:28.950422 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:11:28.952449 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:11:28.953723 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jul 7 00:11:28.955568 dracut-cmdline[216]: dracut-dracut-053 Jul 7 00:11:28.957924 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 7 00:11:28.960303 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 00:11:28.980667 systemd-resolved[230]: Positive Trust Anchors: Jul 7 00:11:28.980679 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 00:11:28.980704 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 00:11:28.988974 systemd-resolved[230]: Defaulting to hostname 'linux'. Jul 7 00:11:28.989873 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 00:11:28.990582 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:11:29.009183 kernel: SCSI subsystem initialized Jul 7 00:11:29.016169 kernel: Loading iSCSI transport class v2.0-870. Jul 7 00:11:29.025172 kernel: iscsi: registered transport (tcp) Jul 7 00:11:29.041616 kernel: iscsi: registered transport (qla4xxx) Jul 7 00:11:29.041647 kernel: QLogic iSCSI HBA Driver Jul 7 00:11:29.078979 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 7 00:11:29.084371 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 7 00:11:29.111488 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 7 00:11:29.111563 kernel: device-mapper: uevent: version 1.0.3 Jul 7 00:11:29.111574 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 7 00:11:29.149190 kernel: raid6: avx2x4 gen() 35017 MB/s Jul 7 00:11:29.166183 kernel: raid6: avx2x2 gen() 31538 MB/s Jul 7 00:11:29.183303 kernel: raid6: avx2x1 gen() 26449 MB/s Jul 7 00:11:29.183369 kernel: raid6: using algorithm avx2x4 gen() 35017 MB/s Jul 7 00:11:29.201428 kernel: raid6: .... xor() 4493 MB/s, rmw enabled Jul 7 00:11:29.201516 kernel: raid6: using avx2x2 recovery algorithm Jul 7 00:11:29.219220 kernel: xor: automatically using best checksumming function avx Jul 7 00:11:29.335201 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 7 00:11:29.348091 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 7 00:11:29.353321 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:11:29.387624 systemd-udevd[406]: Using default interface naming scheme 'v255'. Jul 7 00:11:29.392908 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jul 7 00:11:29.401334 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 7 00:11:29.417348 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation Jul 7 00:11:29.452248 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 00:11:29.457288 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 00:11:29.515549 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:11:29.521508 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 7 00:11:29.545552 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 7 00:11:29.546619 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 00:11:29.548000 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:11:29.549584 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 00:11:29.558354 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 7 00:11:29.572953 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 7 00:11:29.593204 kernel: scsi host0: Virtio SCSI HBA Jul 7 00:11:29.597172 kernel: cryptd: max_cpu_qlen set to 1000 Jul 7 00:11:29.607188 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jul 7 00:11:29.607752 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 00:11:29.609732 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 00:11:29.611329 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 00:11:29.612273 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 00:11:29.612374 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:11:29.638880 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:11:29.643175 kernel: libata version 3.00 loaded. Jul 7 00:11:29.649330 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:11:29.652158 kernel: ACPI: bus type USB registered Jul 7 00:11:29.656162 kernel: usbcore: registered new interface driver usbfs Jul 7 00:11:29.656202 kernel: usbcore: registered new interface driver hub Jul 7 00:11:29.657517 kernel: usbcore: registered new device driver usb Jul 7 00:11:29.688242 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jul 7 00:11:29.688505 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jul 7 00:11:29.688671 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jul 7 00:11:29.690156 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jul 7 00:11:29.690276 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jul 7 00:11:29.690362 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jul 7 00:11:29.690440 kernel: hub 1-0:1.0: USB hub found Jul 7 00:11:29.690546 kernel: hub 1-0:1.0: 4 ports detected Jul 7 00:11:29.690625 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jul 7 00:11:29.690713 kernel: hub 2-0:1.0: USB hub found Jul 7 00:11:29.690803 kernel: hub 2-0:1.0: 4 ports detected Jul 7 00:11:29.698187 kernel: AVX2 version of gcm_enc/dec engaged. 
Jul 7 00:11:29.698209 kernel: AES CTR mode by8 optimization enabled Jul 7 00:11:29.699491 kernel: ahci 0000:00:1f.2: version 3.0 Jul 7 00:11:29.699615 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 7 00:11:29.699626 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jul 7 00:11:29.699707 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 7 00:11:29.703158 kernel: scsi host1: ahci Jul 7 00:11:29.704161 kernel: scsi host2: ahci Jul 7 00:11:29.704267 kernel: scsi host3: ahci Jul 7 00:11:29.705217 kernel: scsi host4: ahci Jul 7 00:11:29.708161 kernel: scsi host5: ahci Jul 7 00:11:29.708277 kernel: scsi host6: ahci Jul 7 00:11:29.709209 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 51 Jul 7 00:11:29.709236 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 51 Jul 7 00:11:29.709245 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 51 Jul 7 00:11:29.709252 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 51 Jul 7 00:11:29.709259 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 51 Jul 7 00:11:29.709266 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 51 Jul 7 00:11:29.748960 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:11:29.755268 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 00:11:29.765175 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 00:11:29.930219 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jul 7 00:11:30.026540 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 7 00:11:30.026640 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 7 00:11:30.026664 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 7 00:11:30.026698 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 7 00:11:30.026716 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 7 00:11:30.029543 kernel: ata1.00: applying bridge limits Jul 7 00:11:30.032606 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 7 00:11:30.033158 kernel: ata1.00: configured for UDMA/100 Jul 7 00:11:30.036411 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jul 7 00:11:30.037588 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 7 00:11:30.082264 kernel: sd 0:0:0:0: Power-on or device reset occurred Jul 7 00:11:30.085648 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jul 7 00:11:30.087686 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 7 00:11:30.087872 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Jul 7 00:11:30.091171 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 7 00:11:30.091254 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 7 00:11:30.099965 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 7 00:11:30.099992 kernel: GPT:17805311 != 80003071 Jul 7 00:11:30.100001 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 7 00:11:30.103633 kernel: GPT:17805311 != 80003071 Jul 7 00:11:30.103650 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jul 7 00:11:30.106204 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 00:11:30.116186 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 7 00:11:30.120492 kernel: usbcore: registered new interface driver usbhid Jul 7 00:11:30.120522 kernel: usbhid: USB HID core driver Jul 7 00:11:30.125490 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jul 7 00:11:30.125518 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jul 7 00:11:30.125663 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 7 00:11:30.129434 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 7 00:11:30.142316 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jul 7 00:11:30.147777 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (467) Jul 7 00:11:30.156195 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (453) Jul 7 00:11:30.163888 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jul 7 00:11:30.168788 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jul 7 00:11:30.173737 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jul 7 00:11:30.177294 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jul 7 00:11:30.177844 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jul 7 00:11:30.183386 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 7 00:11:30.187988 disk-uuid[578]: Primary Header is updated. Jul 7 00:11:30.187988 disk-uuid[578]: Secondary Entries is updated. Jul 7 00:11:30.187988 disk-uuid[578]: Secondary Header is updated. Jul 7 00:11:30.201215 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 00:11:30.216165 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 00:11:30.221246 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 00:11:31.223232 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 00:11:31.223594 disk-uuid[579]: The operation has completed successfully. Jul 7 00:11:31.278566 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 7 00:11:31.278724 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 7 00:11:31.299250 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 7 00:11:31.303503 sh[599]: Success Jul 7 00:11:31.313165 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 7 00:11:31.359671 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 7 00:11:31.370550 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 7 00:11:31.371244 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 7 00:11:31.389551 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f
Jul 7 00:11:31.389614 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:11:31.391742 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 7 00:11:31.395186 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 7 00:11:31.395243 kernel: BTRFS info (device dm-0): using free space tree
Jul 7 00:11:31.405180 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jul 7 00:11:31.407586 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 00:11:31.408786 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 00:11:31.415321 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 00:11:31.418233 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 00:11:31.432376 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 00:11:31.432420 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:11:31.432436 kernel: BTRFS info (device sda6): using free space tree
Jul 7 00:11:31.441188 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 7 00:11:31.441218 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 7 00:11:31.450726 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 7 00:11:31.453193 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 00:11:31.458104 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 00:11:31.463371 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 00:11:31.514602 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 00:11:31.526804 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 00:11:31.545091 ignition[706]: Ignition 2.19.0
Jul 7 00:11:31.545105 ignition[706]: Stage: fetch-offline
Jul 7 00:11:31.545177 ignition[706]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:11:31.548618 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 00:11:31.545188 ignition[706]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 7 00:11:31.545301 ignition[706]: parsed url from cmdline: ""
Jul 7 00:11:31.545306 ignition[706]: no config URL provided
Jul 7 00:11:31.545316 ignition[706]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 00:11:31.545326 ignition[706]: no config at "/usr/lib/ignition/user.ign"
Jul 7 00:11:31.552885 systemd-networkd[780]: lo: Link UP
Jul 7 00:11:31.545332 ignition[706]: failed to fetch config: resource requires networking
Jul 7 00:11:31.552890 systemd-networkd[780]: lo: Gained carrier
Jul 7 00:11:31.545604 ignition[706]: Ignition finished successfully
Jul 7 00:11:31.555708 systemd-networkd[780]: Enumeration completed
Jul 7 00:11:31.555932 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 00:11:31.556649 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:11:31.556653 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 00:11:31.558112 systemd[1]: Reached target network.target - Network.
Jul 7 00:11:31.559306 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:11:31.559311 systemd-networkd[780]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 00:11:31.560102 systemd-networkd[780]: eth0: Link UP
Jul 7 00:11:31.560106 systemd-networkd[780]: eth0: Gained carrier
Jul 7 00:11:31.560117 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:11:31.565419 systemd-networkd[780]: eth1: Link UP
Jul 7 00:11:31.565423 systemd-networkd[780]: eth1: Gained carrier
Jul 7 00:11:31.565432 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:11:31.567334 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 7 00:11:31.579260 ignition[788]: Ignition 2.19.0
Jul 7 00:11:31.579268 ignition[788]: Stage: fetch
Jul 7 00:11:31.580011 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:11:31.580020 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 7 00:11:31.580099 ignition[788]: parsed url from cmdline: ""
Jul 7 00:11:31.580102 ignition[788]: no config URL provided
Jul 7 00:11:31.580107 ignition[788]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 00:11:31.580112 ignition[788]: no config at "/usr/lib/ignition/user.ign"
Jul 7 00:11:31.580127 ignition[788]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jul 7 00:11:31.580277 ignition[788]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jul 7 00:11:31.598208 systemd-networkd[780]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 7 00:11:31.629277 systemd-networkd[780]: eth0: DHCPv4 address 65.21.182.235/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jul 7 00:11:31.781384 ignition[788]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jul 7 00:11:31.785390 ignition[788]: GET result: OK
Jul 7 00:11:31.785484 ignition[788]: parsing config with SHA512: 7eba000e59796b504d01c13cf206ac51f8bbdd11f333bd604831e6d60efbc3facf7c7307f8ffb0e5a6c633d70b885525dd41af09dcc3c1dd9ab8b77d70035450
Jul 7 00:11:31.789109 unknown[788]: fetched base config from "system"
Jul 7 00:11:31.789120 unknown[788]: fetched base config from "system"
Jul 7 00:11:31.789643 ignition[788]: fetch: fetch complete
Jul 7 00:11:31.789125 unknown[788]: fetched user config from "hetzner"
Jul 7 00:11:31.789654 ignition[788]: fetch: fetch passed
Jul 7 00:11:31.792282 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 7 00:11:31.789694 ignition[788]: Ignition finished successfully
Jul 7 00:11:31.798437 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 00:11:31.818553 ignition[795]: Ignition 2.19.0
Jul 7 00:11:31.818576 ignition[795]: Stage: kargs
Jul 7 00:11:31.818840 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:11:31.818856 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 7 00:11:31.820437 ignition[795]: kargs: kargs passed
Jul 7 00:11:31.820569 ignition[795]: Ignition finished successfully
Jul 7 00:11:31.822341 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 00:11:31.829304 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 00:11:31.841533 ignition[801]: Ignition 2.19.0
Jul 7 00:11:31.841546 ignition[801]: Stage: disks
Jul 7 00:11:31.841710 ignition[801]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:11:31.843686 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 00:11:31.841720 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 7 00:11:31.842554 ignition[801]: disks: disks passed
Jul 7 00:11:31.842590 ignition[801]: Ignition finished successfully
Jul 7 00:11:31.846716 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 00:11:31.847237 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 00:11:31.849236 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 00:11:31.850003 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 00:11:31.851097 systemd[1]: Reached target basic.target - Basic System.
Jul 7 00:11:31.856262 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 00:11:31.871725 systemd-fsck[809]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jul 7 00:11:31.873683 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 00:11:31.878358 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 00:11:31.964171 kernel: EXT4-fs (sda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none.
Jul 7 00:11:31.964880 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 00:11:31.965817 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 00:11:31.971245 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 00:11:31.973017 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 00:11:31.976303 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 7 00:11:31.980202 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 00:11:31.981199 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 00:11:31.993801 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (817)
Jul 7 00:11:31.993825 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 00:11:31.993841 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:11:31.993849 kernel: BTRFS info (device sda6): using free space tree
Jul 7 00:11:31.993856 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 7 00:11:31.993863 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 7 00:11:31.997550 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 00:11:31.998106 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 00:11:32.006341 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 00:11:32.046995 coreos-metadata[819]: Jul 07 00:11:32.046 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jul 7 00:11:32.048754 coreos-metadata[819]: Jul 07 00:11:32.048 INFO Fetch successful
Jul 7 00:11:32.050744 coreos-metadata[819]: Jul 07 00:11:32.050 INFO wrote hostname ci-4081-3-4-f-11cbdd5b1a to /sysroot/etc/hostname
Jul 7 00:11:32.052131 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 00:11:32.051919 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 7 00:11:32.059435 initrd-setup-root[852]: cut: /sysroot/etc/group: No such file or directory
Jul 7 00:11:32.063642 initrd-setup-root[859]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 00:11:32.066974 initrd-setup-root[866]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 00:11:32.138742 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 00:11:32.145223 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 00:11:32.147868 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 00:11:32.155171 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 00:11:32.171657 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 00:11:32.176387 ignition[934]: INFO : Ignition 2.19.0
Jul 7 00:11:32.176387 ignition[934]: INFO : Stage: mount
Jul 7 00:11:32.177704 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 00:11:32.177704 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 7 00:11:32.177704 ignition[934]: INFO : mount: mount passed
Jul 7 00:11:32.177704 ignition[934]: INFO : Ignition finished successfully
Jul 7 00:11:32.178974 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 00:11:32.185268 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 00:11:32.387638 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 00:11:32.397334 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 00:11:32.411180 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (946)
Jul 7 00:11:32.414806 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 00:11:32.414851 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:11:32.419941 kernel: BTRFS info (device sda6): using free space tree
Jul 7 00:11:32.426816 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 7 00:11:32.426862 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 7 00:11:32.430485 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 00:11:32.453604 ignition[962]: INFO : Ignition 2.19.0
Jul 7 00:11:32.453604 ignition[962]: INFO : Stage: files
Jul 7 00:11:32.455094 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 00:11:32.455094 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 7 00:11:32.455094 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 00:11:32.458563 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 00:11:32.458563 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 00:11:32.460715 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 00:11:32.460715 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 00:11:32.460715 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 00:11:32.459668 unknown[962]: wrote ssh authorized keys file for user: core
Jul 7 00:11:32.463957 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 7 00:11:32.463957 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jul 7 00:11:32.676406 systemd-networkd[780]: eth0: Gained IPv6LL
Jul 7 00:11:32.723725 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 7 00:11:32.740338 systemd-networkd[780]: eth1: Gained IPv6LL
Jul 7 00:11:34.209545 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 7 00:11:34.209545 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 00:11:34.212268 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 00:11:34.212268 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 00:11:34.212268 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 00:11:34.212268 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 00:11:34.212268 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 00:11:34.212268 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 00:11:34.212268 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 00:11:34.212268 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 00:11:34.212268 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 00:11:34.212268 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 00:11:34.212268 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 00:11:34.212268 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 00:11:34.212268 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jul 7 00:11:35.021913 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 7 00:11:35.179993 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 00:11:35.179993 ignition[962]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 7 00:11:35.181898 ignition[962]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 00:11:35.181898 ignition[962]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 00:11:35.181898 ignition[962]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 7 00:11:35.181898 ignition[962]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 7 00:11:35.181898 ignition[962]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jul 7 00:11:35.181898 ignition[962]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jul 7 00:11:35.181898 ignition[962]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 7 00:11:35.181898 ignition[962]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 00:11:35.181898 ignition[962]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 00:11:35.181898 ignition[962]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 00:11:35.181898 ignition[962]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 00:11:35.181898 ignition[962]: INFO : files: files passed
Jul 7 00:11:35.181898 ignition[962]: INFO : Ignition finished successfully
Jul 7 00:11:35.182590 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 00:11:35.192362 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 00:11:35.194756 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 00:11:35.196133 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 00:11:35.196738 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 00:11:35.204505 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 00:11:35.204505 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 00:11:35.206211 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 00:11:35.207056 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 00:11:35.208028 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 00:11:35.213337 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 00:11:35.229404 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 00:11:35.229500 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 00:11:35.230291 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 00:11:35.231232 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 00:11:35.232283 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 00:11:35.241308 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 00:11:35.249569 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 00:11:35.259283 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 00:11:35.266058 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 00:11:35.266643 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 00:11:35.267735 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 00:11:35.268713 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 00:11:35.268798 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 00:11:35.269964 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 00:11:35.270624 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 00:11:35.271590 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 00:11:35.272482 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 00:11:35.273397 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 00:11:35.274457 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 00:11:35.275488 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 00:11:35.276540 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 00:11:35.277618 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 00:11:35.278625 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 00:11:35.279614 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 00:11:35.279702 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 00:11:35.280808 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 00:11:35.281477 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 00:11:35.282403 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 00:11:35.282505 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 00:11:35.283452 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 00:11:35.283534 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 00:11:35.284914 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 00:11:35.285003 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 00:11:35.285697 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 00:11:35.285805 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 00:11:35.286571 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 7 00:11:35.286646 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 7 00:11:35.293583 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 00:11:35.294039 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 00:11:35.294195 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 00:11:35.298363 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 00:11:35.298935 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 00:11:35.299115 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 00:11:35.300906 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 00:11:35.301064 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 00:11:35.310371 ignition[1016]: INFO : Ignition 2.19.0
Jul 7 00:11:35.310371 ignition[1016]: INFO : Stage: umount
Jul 7 00:11:35.310371 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 00:11:35.310371 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 7 00:11:35.310371 ignition[1016]: INFO : umount: umount passed
Jul 7 00:11:35.310371 ignition[1016]: INFO : Ignition finished successfully
Jul 7 00:11:35.311891 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 00:11:35.312003 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 00:11:35.313570 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 00:11:35.314578 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 00:11:35.322081 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 00:11:35.325808 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 00:11:35.325847 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 00:11:35.326654 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 00:11:35.326686 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 00:11:35.330043 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 7 00:11:35.330076 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 7 00:11:35.332498 systemd[1]: Stopped target network.target - Network.
Jul 7 00:11:35.335577 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 00:11:35.335617 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 00:11:35.339137 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 00:11:35.341595 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 00:11:35.346485 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 00:11:35.347217 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 00:11:35.347616 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 00:11:35.348031 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 00:11:35.348063 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 00:11:35.348662 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 00:11:35.348691 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 00:11:35.349541 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 00:11:35.349579 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 00:11:35.350628 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 00:11:35.350662 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 00:11:35.352075 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 00:11:35.353084 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 7 00:11:35.354255 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 7 00:11:35.354323 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 7 00:11:35.355362 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 7 00:11:35.355422 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 7 00:11:35.358298 systemd-networkd[780]: eth0: DHCPv6 lease lost
Jul 7 00:11:35.360396 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 7 00:11:35.360497 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 7 00:11:35.362341 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 7 00:11:35.362381 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 00:11:35.363186 systemd-networkd[780]: eth1: DHCPv6 lease lost
Jul 7 00:11:35.365732 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 7 00:11:35.365822 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 7 00:11:35.367110 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 7 00:11:35.367135 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 00:11:35.372262 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 7 00:11:35.372891 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 7 00:11:35.372932 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 00:11:35.373418 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 00:11:35.373460 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 00:11:35.374573 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 7 00:11:35.374603 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 7 00:11:35.375680 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 00:11:35.384467 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 7 00:11:35.385162 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 7 00:11:35.389523 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 7 00:11:35.389634 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 00:11:35.390475 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 7 00:11:35.390520 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 7 00:11:35.391168 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 7 00:11:35.391197 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 00:11:35.392190 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 7 00:11:35.392223 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 00:11:35.393629 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 7 00:11:35.393660 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 7 00:11:35.394712 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 00:11:35.394745 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 00:11:35.404372 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 7 00:11:35.405426 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 7 00:11:35.405497 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 00:11:35.408303 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:11:35.408349 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:11:35.409666 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 7 00:11:35.409758 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 7 00:11:35.410825 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 7 00:11:35.416269 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 7 00:11:35.423119 systemd[1]: Switching root.
Jul 7 00:11:35.456115 systemd-journald[187]: Journal stopped
Jul 7 00:11:36.190394 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Jul 7 00:11:36.190475 kernel: SELinux: policy capability network_peer_controls=1
Jul 7 00:11:36.190492 kernel: SELinux: policy capability open_perms=1
Jul 7 00:11:36.190512 kernel: SELinux: policy capability extended_socket_class=1
Jul 7 00:11:36.190524 kernel: SELinux: policy capability always_check_network=0
Jul 7 00:11:36.190537 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 7 00:11:36.190552 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 7 00:11:36.190564 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 7 00:11:36.190576 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 7 00:11:36.190590 kernel: audit: type=1403 audit(1751847095.561:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 7 00:11:36.190604 systemd[1]: Successfully loaded SELinux policy in 37.684ms.
Jul 7 00:11:36.190626 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.863ms.
Jul 7 00:11:36.190641 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 7 00:11:36.190655 systemd[1]: Detected virtualization kvm.
Jul 7 00:11:36.190671 systemd[1]: Detected architecture x86-64.
Jul 7 00:11:36.190684 systemd[1]: Detected first boot.
Jul 7 00:11:36.190698 systemd[1]: Hostname set to <ci-4081-3-4-f-11cbdd5b1a>.
Jul 7 00:11:36.190714 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 00:11:36.190728 zram_generator::config[1058]: No configuration found.
Jul 7 00:11:36.190742 systemd[1]: Populated /etc with preset unit settings.
Jul 7 00:11:36.190759 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 7 00:11:36.190773 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 7 00:11:36.190790 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 7 00:11:36.190799 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 7 00:11:36.190807 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 7 00:11:36.190815 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 7 00:11:36.190823 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 7 00:11:36.190831 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 7 00:11:36.190839 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 7 00:11:36.190846 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 7 00:11:36.190856 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 7 00:11:36.190865 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 00:11:36.190873 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 00:11:36.190881 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 7 00:11:36.190889 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 7 00:11:36.190897 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 7 00:11:36.190905 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 00:11:36.190913 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 7 00:11:36.190920 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 00:11:36.190930 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 7 00:11:36.190939 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 7 00:11:36.190950 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 7 00:11:36.190958 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 7 00:11:36.190966 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 00:11:36.190974 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 00:11:36.190981 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 00:11:36.190991 systemd[1]: Reached target swap.target - Swaps.
Jul 7 00:11:36.190999 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 7 00:11:36.191007 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 7 00:11:36.191018 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 00:11:36.191025 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 00:11:36.191034 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 00:11:36.191042 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 7 00:11:36.191050 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 7 00:11:36.191058 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 7 00:11:36.191068 systemd[1]: Mounting media.mount - External Media Directory...
Jul 7 00:11:36.191076 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:11:36.191084 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 7 00:11:36.191092 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 7 00:11:36.191100 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 7 00:11:36.191112 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 7 00:11:36.191122 systemd[1]: Reached target machines.target - Containers.
Jul 7 00:11:36.191130 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 7 00:11:36.193892 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 00:11:36.193914 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 00:11:36.193924 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 7 00:11:36.193932 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 00:11:36.193941 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 00:11:36.193950 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 00:11:36.193961 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 7 00:11:36.193969 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 00:11:36.193977 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 7 00:11:36.193986 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 7 00:11:36.193993 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 7 00:11:36.194001 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 7 00:11:36.194009 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 7 00:11:36.194017 kernel: loop: module loaded
Jul 7 00:11:36.194026 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 00:11:36.194035 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 00:11:36.194043 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 00:11:36.194051 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 7 00:11:36.194088 kernel: fuse: init (API version 7.39)
Jul 7 00:11:36.194112 systemd-journald[1142]: Collecting audit messages is disabled.
Jul 7 00:11:36.194130 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 00:11:36.194151 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 7 00:11:36.194163 systemd[1]: Stopped verity-setup.service.
Jul 7 00:11:36.194171 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:11:36.194180 systemd-journald[1142]: Journal started
Jul 7 00:11:36.194196 systemd-journald[1142]: Runtime Journal (/run/log/journal/eaa46f796eec4546899b5a0bbb07f6e8) is 4.8M, max 38.4M, 33.6M free.
Jul 7 00:11:36.201407 kernel: ACPI: bus type drm_connector registered
Jul 7 00:11:35.960284 systemd[1]: Queued start job for default target multi-user.target.
Jul 7 00:11:35.978662 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul 7 00:11:35.979005 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 7 00:11:36.204426 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 7 00:11:36.206189 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 00:11:36.206674 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 7 00:11:36.210776 systemd[1]: Mounted media.mount - External Media Directory.
Jul 7 00:11:36.211327 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 7 00:11:36.211896 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 7 00:11:36.212474 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 7 00:11:36.213092 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 7 00:11:36.213796 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 00:11:36.214553 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 7 00:11:36.214720 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 7 00:11:36.215399 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 00:11:36.215571 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 00:11:36.216255 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 00:11:36.216416 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 00:11:36.217063 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 00:11:36.217485 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 00:11:36.218222 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 7 00:11:36.218382 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 7 00:11:36.219047 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 00:11:36.219239 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 00:11:36.219920 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 00:11:36.220616 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 00:11:36.221340 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 7 00:11:36.227523 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 00:11:36.231793 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 7 00:11:36.236243 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 7 00:11:36.236745 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 7 00:11:36.236767 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 00:11:36.238266 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 7 00:11:36.243403 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 7 00:11:36.247322 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 7 00:11:36.247921 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 00:11:36.255555 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 7 00:11:36.258254 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 7 00:11:36.258772 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 00:11:36.262253 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 7 00:11:36.264208 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 00:11:36.269286 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 00:11:36.276238 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 7 00:11:36.278714 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 7 00:11:36.289654 kernel: loop0: detected capacity change from 0 to 8
Jul 7 00:11:36.282550 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 7 00:11:36.284817 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 7 00:11:36.285802 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 7 00:11:36.298274 systemd-journald[1142]: Time spent on flushing to /var/log/journal/eaa46f796eec4546899b5a0bbb07f6e8 is 32.956ms for 1132 entries.
Jul 7 00:11:36.298274 systemd-journald[1142]: System Journal (/var/log/journal/eaa46f796eec4546899b5a0bbb07f6e8) is 8.0M, max 584.8M, 576.8M free.
Jul 7 00:11:36.352727 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 7 00:11:36.352758 systemd-journald[1142]: Received client request to flush runtime journal.
Jul 7 00:11:36.352846 kernel: loop1: detected capacity change from 0 to 140768
Jul 7 00:11:36.305117 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 7 00:11:36.306287 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 7 00:11:36.315310 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 7 00:11:36.316248 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 00:11:36.322995 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 7 00:11:36.332282 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 00:11:36.349438 udevadm[1191]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 7 00:11:36.353794 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 7 00:11:36.371314 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 7 00:11:36.371765 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 7 00:11:36.374985 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 7 00:11:36.384248 kernel: loop2: detected capacity change from 0 to 142488
Jul 7 00:11:36.383307 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 00:11:36.404078 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
Jul 7 00:11:36.404632 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
Jul 7 00:11:36.410541 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 00:11:36.426490 kernel: loop3: detected capacity change from 0 to 224512
Jul 7 00:11:36.466193 kernel: loop4: detected capacity change from 0 to 8
Jul 7 00:11:36.468261 kernel: loop5: detected capacity change from 0 to 140768
Jul 7 00:11:36.487164 kernel: loop6: detected capacity change from 0 to 142488
Jul 7 00:11:36.505297 kernel: loop7: detected capacity change from 0 to 224512
Jul 7 00:11:36.526671 (sd-merge)[1205]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jul 7 00:11:36.527502 (sd-merge)[1205]: Merged extensions into '/usr'.
Jul 7 00:11:36.531033 systemd[1]: Reloading requested from client PID 1179 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 7 00:11:36.531113 systemd[1]: Reloading...
Jul 7 00:11:36.589169 zram_generator::config[1227]: No configuration found.
Jul 7 00:11:36.696188 ldconfig[1174]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 7 00:11:36.706617 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 00:11:36.743443 systemd[1]: Reloading finished in 211 ms.
Jul 7 00:11:36.765248 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 7 00:11:36.766004 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 7 00:11:36.774299 systemd[1]: Starting ensure-sysext.service...
Jul 7 00:11:36.776059 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 00:11:36.787278 systemd[1]: Reloading requested from client PID 1274 ('systemctl') (unit ensure-sysext.service)...
Jul 7 00:11:36.787291 systemd[1]: Reloading...
Jul 7 00:11:36.793356 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 7 00:11:36.793821 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 7 00:11:36.794413 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 7 00:11:36.794663 systemd-tmpfiles[1275]: ACLs are not supported, ignoring.
Jul 7 00:11:36.794714 systemd-tmpfiles[1275]: ACLs are not supported, ignoring.
Jul 7 00:11:36.798365 systemd-tmpfiles[1275]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 00:11:36.798452 systemd-tmpfiles[1275]: Skipping /boot
Jul 7 00:11:36.805759 systemd-tmpfiles[1275]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 00:11:36.805828 systemd-tmpfiles[1275]: Skipping /boot
Jul 7 00:11:36.831825 zram_generator::config[1299]: No configuration found.
Jul 7 00:11:36.906636 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 00:11:36.943346 systemd[1]: Reloading finished in 155 ms.
Jul 7 00:11:36.956941 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 7 00:11:36.961420 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 00:11:36.975287 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 7 00:11:36.978175 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 7 00:11:36.980417 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 7 00:11:36.985311 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 00:11:36.988337 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 00:11:36.991889 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 7 00:11:36.997520 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:11:36.997646 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 00:11:36.998931 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 00:11:37.002097 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 00:11:37.003390 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 00:11:37.004381 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 00:11:37.006583 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 7 00:11:37.007327 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:11:37.009032 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:11:37.009162 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 00:11:37.009270 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 00:11:37.009333 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:11:37.011530 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:11:37.011673 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 00:11:37.018333 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 00:11:37.018947 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 00:11:37.019264 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:11:37.022972 systemd[1]: Finished ensure-sysext.service.
Jul 7 00:11:37.026407 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 7 00:11:37.029302 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 7 00:11:37.036411 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 00:11:37.036627 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 00:11:37.037828 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 00:11:37.037911 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 00:11:37.039226 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 00:11:37.039889 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 00:11:37.040692 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 00:11:37.040776 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 00:11:37.044674 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 00:11:37.044714 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 00:11:37.048526 systemd-udevd[1358]: Using default interface naming scheme 'v255'.
Jul 7 00:11:37.050257 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 7 00:11:37.056318 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 7 00:11:37.061269 augenrules[1382]: No rules
Jul 7 00:11:37.062697 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 7 00:11:37.068127 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 7 00:11:37.074081 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 7 00:11:37.082368 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 00:11:37.094388 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 00:11:37.096495 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 7 00:11:37.099044 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 7 00:11:37.129494 systemd-resolved[1356]: Positive Trust Anchors:
Jul 7 00:11:37.130514 systemd-resolved[1356]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 00:11:37.130584 systemd-resolved[1356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 00:11:37.134985 systemd-resolved[1356]: Using system hostname 'ci-4081-3-4-f-11cbdd5b1a'.
Jul 7 00:11:37.138230 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 00:11:37.139248 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 00:11:37.163768 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 7 00:11:37.167875 systemd-networkd[1399]: lo: Link UP Jul 7 00:11:37.167885 systemd-networkd[1399]: lo: Gained carrier Jul 7 00:11:37.170583 systemd-networkd[1399]: Enumeration completed Jul 7 00:11:37.170661 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 00:11:37.171806 systemd[1]: Reached target network.target - Network. Jul 7 00:11:37.178300 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 00:11:37.179480 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 7 00:11:37.180586 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 00:11:37.198103 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:11:37.198254 systemd-networkd[1399]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 00:11:37.199298 systemd-networkd[1399]: eth0: Link UP Jul 7 00:11:37.199394 systemd-networkd[1399]: eth0: Gained carrier Jul 7 00:11:37.199468 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:11:37.221165 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1410) Jul 7 00:11:37.238858 systemd-networkd[1399]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:11:37.238971 systemd-networkd[1399]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 00:11:37.240385 systemd-networkd[1399]: eth1: Link UP Jul 7 00:11:37.240456 systemd-networkd[1399]: eth1: Gained carrier Jul 7 00:11:37.240504 systemd-networkd[1399]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:11:37.259290 systemd-networkd[1399]: eth0: DHCPv4 address 65.21.182.235/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jul 7 00:11:37.260977 systemd-timesyncd[1370]: Network configuration changed, trying to establish connection. Jul 7 00:11:37.273171 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 7 00:11:37.279164 kernel: mousedev: PS/2 mouse device common for all mice Jul 7 00:11:37.280242 systemd-networkd[1399]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 7 00:11:37.283224 kernel: ACPI: button: Power Button [PWRF] Jul 7 00:11:37.292988 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jul 7 00:11:37.293034 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:11:37.293107 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:11:37.299352 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:11:37.301278 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:11:37.308254 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:11:37.309104 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 7 00:11:37.309130 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 00:11:37.309154 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:11:37.309412 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:11:37.309955 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:11:37.310807 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:11:37.311215 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:11:37.311973 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:11:37.312368 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:11:37.324159 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jul 7 00:11:37.324194 kernel: EDAC MC: Ver: 3.0.0 Jul 7 00:11:37.327202 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jul 7 00:11:37.332392 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 7 00:11:37.332577 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 7 00:11:37.332695 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 7 00:11:37.334528 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 00:11:37.335550 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:11:37.335701 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:11:37.346579 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jul 7 00:11:37.346629 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jul 7 00:11:37.349189 kernel: Console: switching to colour dummy device 80x25 Jul 7 00:11:37.350499 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jul 7 00:11:37.350531 kernel: [drm] features: -context_init Jul 7 00:11:37.351038 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 00:11:37.352284 kernel: [drm] number of scanouts: 1 Jul 7 00:11:37.352336 kernel: [drm] number of cap sets: 0 Jul 7 00:11:37.353161 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jul 7 00:11:37.355171 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jul 7 00:11:37.355203 kernel: Console: switching to colour frame buffer device 160x50 Jul 7 00:11:37.365167 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jul 7 00:11:37.380100 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:11:37.393010 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 00:11:37.393173 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:11:37.404520 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:11:37.411692 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 00:11:37.411893 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 7 00:11:37.413675 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:11:37.463228 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:11:37.521566 systemd-timesyncd[1370]: Contacted time server 130.61.133.198:123 (0.flatcar.pool.ntp.org). Jul 7 00:11:37.521615 systemd-timesyncd[1370]: Initial clock synchronization to Mon 2025-07-07 00:11:37.855239 UTC. Jul 7 00:11:37.548129 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 7 00:11:37.552319 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 7 00:11:37.564113 lvm[1458]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 00:11:37.597690 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 7 00:11:37.600448 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:11:37.600542 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 00:11:37.600710 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 00:11:37.600812 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 00:11:37.601042 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 00:11:37.601233 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 00:11:37.601301 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 00:11:37.601358 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 00:11:37.601378 systemd[1]: Reached target paths.target - Path Units. Jul 7 00:11:37.601439 systemd[1]: Reached target timers.target - Timer Units. Jul 7 00:11:37.604610 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 00:11:37.605981 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 00:11:37.611527 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 00:11:37.612647 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 7 00:11:37.613071 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 00:11:37.613713 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 00:11:37.614126 systemd[1]: Reached target basic.target - Basic System. Jul 7 00:11:37.617405 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:11:37.617454 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:11:37.621259 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 00:11:37.622299 lvm[1462]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 00:11:37.624944 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 7 00:11:37.635888 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 00:11:37.639944 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 00:11:37.649622 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jul 7 00:11:37.651015 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 00:11:37.652831 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 00:11:37.654557 jq[1468]: false Jul 7 00:11:37.657272 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 00:11:37.659354 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jul 7 00:11:37.663201 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 00:11:37.668357 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 00:11:37.671572 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 00:11:37.673113 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 00:11:37.674562 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 00:11:37.676957 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 00:11:37.687769 dbus-daemon[1465]: [system] SELinux support is enabled Jul 7 00:11:37.688266 extend-filesystems[1469]: Found loop4 Jul 7 00:11:37.688266 extend-filesystems[1469]: Found loop5 Jul 7 00:11:37.688266 extend-filesystems[1469]: Found loop6 Jul 7 00:11:37.688266 extend-filesystems[1469]: Found loop7 Jul 7 00:11:37.688266 extend-filesystems[1469]: Found sda Jul 7 00:11:37.688266 extend-filesystems[1469]: Found sda1 Jul 7 00:11:37.688266 extend-filesystems[1469]: Found sda2 Jul 7 00:11:37.688266 extend-filesystems[1469]: Found sda3 Jul 7 00:11:37.688266 extend-filesystems[1469]: Found usr Jul 7 00:11:37.688266 extend-filesystems[1469]: Found sda4 Jul 7 00:11:37.688266 extend-filesystems[1469]: Found sda6 Jul 7 00:11:37.688266 extend-filesystems[1469]: Found sda7 Jul 7 00:11:37.688266 extend-filesystems[1469]: Found sda9 Jul 7 00:11:37.688266 extend-filesystems[1469]: Checking size of /dev/sda9 Jul 7 00:11:37.738104 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jul 7 00:11:37.738127 coreos-metadata[1464]: Jul 07 00:11:37.682 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jul 7 00:11:37.738127 coreos-metadata[1464]: Jul 07 00:11:37.684 INFO Fetch successful Jul 7 00:11:37.738127 coreos-metadata[1464]: Jul 07 00:11:37.684 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jul 7 00:11:37.738127 coreos-metadata[1464]: Jul 07 00:11:37.685 INFO Fetch successful Jul 7 00:11:37.685257 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 00:11:37.739376 extend-filesystems[1469]: Resized partition /dev/sda9 Jul 7 00:11:37.686315 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 7 00:11:37.745338 jq[1478]: true Jul 7 00:11:37.745485 extend-filesystems[1492]: resize2fs 1.47.1 (20-May-2024) Jul 7 00:11:37.693335 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jul 7 00:11:37.751512 update_engine[1477]: I20250707 00:11:37.715709 1477 main.cc:92] Flatcar Update Engine starting Jul 7 00:11:37.751512 update_engine[1477]: I20250707 00:11:37.723649 1477 update_check_scheduler.cc:74] Next update check in 7m28s Jul 7 00:11:37.707462 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 00:11:37.707590 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 00:11:37.711001 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 00:11:37.711269 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 00:11:37.735535 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 00:11:37.735570 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 00:11:37.741277 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 00:11:37.741297 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 00:11:37.743774 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 00:11:37.743900 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 00:11:37.747948 systemd[1]: Started update-engine.service - Update Engine. Jul 7 00:11:37.756482 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 00:11:37.763084 jq[1495]: true Jul 7 00:11:37.773106 tar[1487]: linux-amd64/LICENSE Jul 7 00:11:37.773106 tar[1487]: linux-amd64/helm Jul 7 00:11:37.773694 (ntainerd)[1507]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 00:11:37.784245 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1404) Jul 7 00:11:37.809895 systemd-logind[1476]: New seat seat0. Jul 7 00:11:37.816480 systemd-logind[1476]: Watching system buttons on /dev/input/event2 (Power Button) Jul 7 00:11:37.816497 systemd-logind[1476]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 7 00:11:37.816758 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 00:11:37.885027 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 7 00:11:37.894364 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 00:11:37.896282 bash[1531]: Updated "/home/core/.ssh/authorized_keys" Jul 7 00:11:37.897591 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 00:11:37.909155 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jul 7 00:11:37.911458 systemd[1]: Starting sshkeys.service... Jul 7 00:11:37.927483 extend-filesystems[1492]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jul 7 00:11:37.927483 extend-filesystems[1492]: old_desc_blocks = 1, new_desc_blocks = 5 Jul 7 00:11:37.927483 extend-filesystems[1492]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. 
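[Editor's note] The resize figures above are easier to read in bytes; a quick check using only numbers that appear in the log (the kernel's EXT4 resize line and the 4 KiB block size resize2fs reports):

# Translate the /dev/sda9 block counts above into sizes.
old_blocks, new_blocks, block_size = 1_617_920, 9_393_147, 4096
print(old_blocks * block_size / 2**30)  # ~6.17 GiB before the online resize
print(new_blocks * block_size / 2**30)  # ~35.83 GiB after growing into /dev/sda9

So the root filesystem grew from roughly 6.2 GiB to roughly 35.8 GiB while mounted.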
Jul 7 00:11:37.934737 extend-filesystems[1469]: Resized filesystem in /dev/sda9 Jul 7 00:11:37.934737 extend-filesystems[1469]: Found sr0 Jul 7 00:11:37.932881 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 00:11:37.933022 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 00:11:37.940390 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 7 00:11:37.946727 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 7 00:11:37.979201 locksmithd[1508]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 00:11:37.985719 coreos-metadata[1546]: Jul 07 00:11:37.985 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jul 7 00:11:37.988378 coreos-metadata[1546]: Jul 07 00:11:37.988 INFO Fetch successful Jul 7 00:11:37.990801 unknown[1546]: wrote ssh authorized keys file for user: core Jul 7 00:11:38.019378 update-ssh-keys[1552]: Updated "/home/core/.ssh/authorized_keys" Jul 7 00:11:38.020433 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 7 00:11:38.023237 systemd[1]: Finished sshkeys.service. Jul 7 00:11:38.091242 containerd[1507]: time="2025-07-07T00:11:38.090131875Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 7 00:11:38.143830 containerd[1507]: time="2025-07-07T00:11:38.143455672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 7 00:11:38.147074 containerd[1507]: time="2025-07-07T00:11:38.146857369Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 7 00:11:38.147074 containerd[1507]: time="2025-07-07T00:11:38.146917746Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 7 00:11:38.147074 containerd[1507]: time="2025-07-07T00:11:38.146931773Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 7 00:11:38.147145 containerd[1507]: time="2025-07-07T00:11:38.147085142Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 7 00:11:38.147145 containerd[1507]: time="2025-07-07T00:11:38.147100588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 7 00:11:38.147379 containerd[1507]: time="2025-07-07T00:11:38.147154641Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 00:11:38.147379 containerd[1507]: time="2025-07-07T00:11:38.147165694Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 7 00:11:38.147379 containerd[1507]: time="2025-07-07T00:11:38.147367853Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 00:11:38.147430 containerd[1507]: time="2025-07-07T00:11:38.147382413Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 7 00:11:38.147430 containerd[1507]: time="2025-07-07T00:11:38.147393934Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 00:11:38.147430 containerd[1507]: time="2025-07-07T00:11:38.147401867Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 7 00:11:38.147477 containerd[1507]: time="2025-07-07T00:11:38.147468631Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 7 00:11:38.147649 containerd[1507]: time="2025-07-07T00:11:38.147628418Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 7 00:11:38.149336 containerd[1507]: time="2025-07-07T00:11:38.149261793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 00:11:38.149336 containerd[1507]: time="2025-07-07T00:11:38.149278314Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 7 00:11:38.149386 containerd[1507]: time="2025-07-07T00:11:38.149347625Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 7 00:11:38.149413 containerd[1507]: time="2025-07-07T00:11:38.149387891Z" level=info msg="metadata content store policy set" policy=shared Jul 7 00:11:38.153932 containerd[1507]: time="2025-07-07T00:11:38.153215711Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 7 00:11:38.153932 containerd[1507]: time="2025-07-07T00:11:38.153260245Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 7 00:11:38.153932 containerd[1507]: time="2025-07-07T00:11:38.153275055Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 7 00:11:38.153932 containerd[1507]: time="2025-07-07T00:11:38.153291973Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 7 00:11:38.153932 containerd[1507]: time="2025-07-07T00:11:38.153303255Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 7 00:11:38.153932 containerd[1507]: time="2025-07-07T00:11:38.153425856Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 7 00:11:38.154315 containerd[1507]: time="2025-07-07T00:11:38.154297107Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 7 00:11:38.154420 containerd[1507]: time="2025-07-07T00:11:38.154399481Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jul 7 00:11:38.154442 containerd[1507]: time="2025-07-07T00:11:38.154420521Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 7 00:11:38.154456 containerd[1507]: time="2025-07-07T00:11:38.154444495Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 7 00:11:38.154477 containerd[1507]: time="2025-07-07T00:11:38.154457165Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 7 00:11:38.154477 containerd[1507]: time="2025-07-07T00:11:38.154467268Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 7 00:11:38.154504 containerd[1507]: time="2025-07-07T00:11:38.154476181Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 7 00:11:38.154504 containerd[1507]: time="2025-07-07T00:11:38.154486524Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 7 00:11:38.154504 containerd[1507]: time="2025-07-07T00:11:38.154497242Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 7 00:11:38.154541 containerd[1507]: time="2025-07-07T00:11:38.154506876Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 7 00:11:38.154541 containerd[1507]: time="2025-07-07T00:11:38.154516112Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 7 00:11:38.154541 containerd[1507]: time="2025-07-07T00:11:38.154524910Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 7 00:11:38.154581 containerd[1507]: time="2025-07-07T00:11:38.154540879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 7 00:11:38.154581 containerd[1507]: time="2025-07-07T00:11:38.154551325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 7 00:11:38.154581 containerd[1507]: time="2025-07-07T00:11:38.154560792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 7 00:11:38.154581 containerd[1507]: time="2025-07-07T00:11:38.154570175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 7 00:11:38.154581 containerd[1507]: time="2025-07-07T00:11:38.154579808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 7 00:11:38.154670 containerd[1507]: time="2025-07-07T00:11:38.154590098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 7 00:11:38.154670 containerd[1507]: time="2025-07-07T00:11:38.154598887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 7 00:11:38.154670 containerd[1507]: time="2025-07-07T00:11:38.154621305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 7 00:11:38.154670 containerd[1507]: time="2025-07-07T00:11:38.154631595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jul 7 00:11:38.154670 containerd[1507]: time="2025-07-07T00:11:38.154643138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 7 00:11:38.154670 containerd[1507]: time="2025-07-07T00:11:38.154652072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 7 00:11:38.154670 containerd[1507]: time="2025-07-07T00:11:38.154666944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 7 00:11:38.154765 containerd[1507]: time="2025-07-07T00:11:38.154676171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 7 00:11:38.154765 containerd[1507]: time="2025-07-07T00:11:38.154692567Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 7 00:11:38.154765 containerd[1507]: time="2025-07-07T00:11:38.154709359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 7 00:11:38.154765 containerd[1507]: time="2025-07-07T00:11:38.154719661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 7 00:11:38.154765 containerd[1507]: time="2025-07-07T00:11:38.154727885Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 7 00:11:38.155228 containerd[1507]: time="2025-07-07T00:11:38.155212235Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 7 00:11:38.155260 containerd[1507]: time="2025-07-07T00:11:38.155234496Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 7 00:11:38.155260 containerd[1507]: time="2025-07-07T00:11:38.155243805Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 7 00:11:38.155452 containerd[1507]: time="2025-07-07T00:11:38.155253272Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 7 00:11:38.155452 containerd[1507]: time="2025-07-07T00:11:38.155307293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 7 00:11:38.155452 containerd[1507]: time="2025-07-07T00:11:38.155319117Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 7 00:11:38.155452 containerd[1507]: time="2025-07-07T00:11:38.155327070Z" level=info msg="NRI interface is disabled by configuration." Jul 7 00:11:38.155452 containerd[1507]: time="2025-07-07T00:11:38.155335013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 7 00:11:38.155581 containerd[1507]: time="2025-07-07T00:11:38.155533801Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 7 00:11:38.155693 containerd[1507]: time="2025-07-07T00:11:38.155585704Z" level=info msg="Connect containerd service" Jul 7 00:11:38.155693 containerd[1507]: time="2025-07-07T00:11:38.155608779Z" level=info msg="using legacy CRI server" Jul 7 00:11:38.155693 containerd[1507]: time="2025-07-07T00:11:38.155614186Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 00:11:38.155737 containerd[1507]: time="2025-07-07T00:11:38.155695937Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 7 00:11:38.158087 containerd[1507]: time="2025-07-07T00:11:38.157522048Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:11:38.158230 
containerd[1507]: time="2025-07-07T00:11:38.158200145Z" level=info msg="Start subscribing containerd event" Jul 7 00:11:38.158548 containerd[1507]: time="2025-07-07T00:11:38.158285206Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 00:11:38.158607 containerd[1507]: time="2025-07-07T00:11:38.158581099Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 00:11:38.158607 containerd[1507]: time="2025-07-07T00:11:38.158535928Z" level=info msg="Start recovering state" Jul 7 00:11:38.158667 containerd[1507]: time="2025-07-07T00:11:38.158652924Z" level=info msg="Start event monitor" Jul 7 00:11:38.158687 containerd[1507]: time="2025-07-07T00:11:38.158670082Z" level=info msg="Start snapshots syncer" Jul 7 00:11:38.158708 containerd[1507]: time="2025-07-07T00:11:38.158687783Z" level=info msg="Start cni network conf syncer for default" Jul 7 00:11:38.158708 containerd[1507]: time="2025-07-07T00:11:38.158693888Z" level=info msg="Start streaming server" Jul 7 00:11:38.158814 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 00:11:38.163906 containerd[1507]: time="2025-07-07T00:11:38.162611393Z" level=info msg="containerd successfully booted in 0.074970s" Jul 7 00:11:38.166559 sshd_keygen[1515]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 00:11:38.186285 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 00:11:38.196464 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 00:11:38.201720 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 00:11:38.202117 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 00:11:38.210786 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 00:11:38.221112 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 00:11:38.230563 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 00:11:38.233437 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 00:11:38.234707 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 00:11:38.390117 tar[1487]: linux-amd64/README.md Jul 7 00:11:38.404593 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 00:11:38.631906 systemd-networkd[1399]: eth1: Gained IPv6LL Jul 7 00:11:38.634600 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 00:11:38.636423 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 00:11:38.647494 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:11:38.649862 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 00:11:38.668311 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 00:11:39.012328 systemd-networkd[1399]: eth0: Gained IPv6LL Jul 7 00:11:39.437779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:11:39.439099 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 00:11:39.443515 systemd[1]: Startup finished in 1.192s (kernel) + 6.893s (initrd) + 3.919s (userspace) = 12.005s. 
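[Editor's note] The "Startup finished" record above almost, but not quite, adds up; re-adding the per-phase figures makes the 1 ms gap visible (values copied from the log; reading the discrepancy as independent rounding of each phase is an inference, not something the log states):

# Re-add the boot phases from the "Startup finished" record above.
kernel, initrd, userspace = 1.192, 6.893, 3.919
print(round(kernel + initrd + userspace, 3))  # 12.004 s, vs. the logged total of 12.005 s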
Jul 7 00:11:39.455735 (kubelet)[1595]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:11:39.948433 kubelet[1595]: E0707 00:11:39.948374 1595 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:11:39.950802 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:11:39.950931 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:11:50.201570 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 00:11:50.206556 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:11:50.295635 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:11:50.299329 (kubelet)[1614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:11:50.332968 kubelet[1614]: E0707 00:11:50.332916 1614 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:11:50.335880 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:11:50.336008 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:12:00.587237 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 7 00:12:00.600475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:12:00.708480 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:12:00.722545 (kubelet)[1630]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:12:00.777932 kubelet[1630]: E0707 00:12:00.777856 1630 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:12:00.782128 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:12:00.782299 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:12:11.032754 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 7 00:12:11.037306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:12:11.118945 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 00:12:11.121723 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:12:11.154640 kubelet[1645]: E0707 00:12:11.154542 1645 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:12:11.156624 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:12:11.156750 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:12:21.246093 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 7 00:12:21.251295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:12:21.336971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:12:21.348353 (kubelet)[1660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:12:21.376532 kubelet[1660]: E0707 00:12:21.376465 1660 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:12:21.378440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:12:21.378553 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:12:23.461905 update_engine[1477]: I20250707 00:12:23.461808 1477 update_attempter.cc:509] Updating boot flags... Jul 7 00:12:23.492234 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1677) Jul 7 00:12:23.528523 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1678) Jul 7 00:12:23.567164 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1678) Jul 7 00:12:31.495986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 7 00:12:31.501462 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:12:31.730888 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:12:31.734285 (kubelet)[1697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:12:31.764342 kubelet[1697]: E0707 00:12:31.764222 1697 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:12:31.766395 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:12:31.766526 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:12:41.995978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jul 7 00:12:42.001530 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
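[Editor's note] The kubelet crash loop that dominates this stretch of the log is self-consistent: /var/lib/kubelet/config.yaml is the file kubeadm normally writes during init/join, so until this node actually joins a cluster the service fails at startup and systemd reschedules it. The cadence can be read straight off the "Scheduled restart job" timestamps; a minimal sketch (timestamps copied from the log; inferring a RestartSec of about 10 s is an assumption, since the unit file never appears here):

from datetime import datetime

# Spacing between the first six kubelet restart jobs logged above.
stamps = ["00:11:50.201570", "00:12:00.587237", "00:12:11.032754",
          "00:12:21.246093", "00:12:31.495986", "00:12:41.995978"]
ts = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
print([round((b - a).total_seconds(), 2) for a, b in zip(ts, ts[1:])])
# [10.39, 10.45, 10.21, 10.25, 10.5] -- steady ~10 s restart intervals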
Jul 7 00:12:42.088417 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:12:42.095390 (kubelet)[1712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:12:42.136122 kubelet[1712]: E0707 00:12:42.136059 1712 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:12:42.138300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:12:42.138500 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:12:52.246114 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jul 7 00:12:52.251298 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:12:52.348429 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:12:52.351174 (kubelet)[1727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:12:52.378712 kubelet[1727]: E0707 00:12:52.378658 1727 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:12:52.380506 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:12:52.380657 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:13:02.496017 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jul 7 00:13:02.501298 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:13:02.600894 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:13:02.603439 (kubelet)[1742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:13:02.632006 kubelet[1742]: E0707 00:13:02.631960 1742 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:13:02.633706 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:13:02.633830 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:13:12.746925 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jul 7 00:13:12.753825 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:13:12.885169 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 00:13:12.888012 (kubelet)[1757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:13:12.920483 kubelet[1757]: E0707 00:13:12.920424 1757 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:13:12.922735 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:13:12.922880 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:13:22.995970 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jul 7 00:13:23.001431 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:13:23.089740 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:13:23.092465 (kubelet)[1772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:13:23.119559 kubelet[1772]: E0707 00:13:23.119521 1772 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:13:23.121718 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:13:23.121850 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:13:26.254268 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 00:13:26.263428 systemd[1]: Started sshd@0-65.21.182.235:22-147.75.109.163:37562.service - OpenSSH per-connection server daemon (147.75.109.163:37562). Jul 7 00:13:27.272883 sshd[1780]: Accepted publickey for core from 147.75.109.163 port 37562 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:13:27.275630 sshd[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:13:27.285931 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 00:13:27.296558 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 00:13:27.300486 systemd-logind[1476]: New session 1 of user core. Jul 7 00:13:27.314227 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 00:13:27.319565 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 00:13:27.325657 (systemd)[1784]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 00:13:27.414924 systemd[1784]: Queued start job for default target default.target. Jul 7 00:13:27.423887 systemd[1784]: Created slice app.slice - User Application Slice. Jul 7 00:13:27.423918 systemd[1784]: Reached target paths.target - Paths. Jul 7 00:13:27.423929 systemd[1784]: Reached target timers.target - Timers. Jul 7 00:13:27.424939 systemd[1784]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 00:13:27.434497 systemd[1784]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 00:13:27.434535 systemd[1784]: Reached target sockets.target - Sockets. Jul 7 00:13:27.434546 systemd[1784]: Reached target basic.target - Basic System. 
Jul 7 00:13:27.434573 systemd[1784]: Reached target default.target - Main User Target. Jul 7 00:13:27.434593 systemd[1784]: Startup finished in 101ms. Jul 7 00:13:27.434867 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 00:13:27.442304 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 00:13:28.148489 systemd[1]: Started sshd@1-65.21.182.235:22-147.75.109.163:37570.service - OpenSSH per-connection server daemon (147.75.109.163:37570). Jul 7 00:13:29.149766 sshd[1795]: Accepted publickey for core from 147.75.109.163 port 37570 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:13:29.151267 sshd[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:13:29.155171 systemd-logind[1476]: New session 2 of user core. Jul 7 00:13:29.161272 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 00:13:29.844061 sshd[1795]: pam_unix(sshd:session): session closed for user core Jul 7 00:13:29.847280 systemd-logind[1476]: Session 2 logged out. Waiting for processes to exit. Jul 7 00:13:29.847924 systemd[1]: sshd@1-65.21.182.235:22-147.75.109.163:37570.service: Deactivated successfully. Jul 7 00:13:29.849574 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 00:13:29.850415 systemd-logind[1476]: Removed session 2. Jul 7 00:13:30.027668 systemd[1]: Started sshd@2-65.21.182.235:22-147.75.109.163:37586.service - OpenSSH per-connection server daemon (147.75.109.163:37586). Jul 7 00:13:31.046206 sshd[1802]: Accepted publickey for core from 147.75.109.163 port 37586 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:13:31.048202 sshd[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:13:31.054726 systemd-logind[1476]: New session 3 of user core. Jul 7 00:13:31.061444 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 00:13:31.746364 sshd[1802]: pam_unix(sshd:session): session closed for user core Jul 7 00:13:31.750174 systemd-logind[1476]: Session 3 logged out. Waiting for processes to exit. Jul 7 00:13:31.750258 systemd[1]: sshd@2-65.21.182.235:22-147.75.109.163:37586.service: Deactivated successfully. Jul 7 00:13:31.752242 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 00:13:31.753320 systemd-logind[1476]: Removed session 3. Jul 7 00:13:31.919526 systemd[1]: Started sshd@3-65.21.182.235:22-147.75.109.163:37602.service - OpenSSH per-connection server daemon (147.75.109.163:37602). Jul 7 00:13:32.925694 sshd[1809]: Accepted publickey for core from 147.75.109.163 port 37602 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:13:32.927711 sshd[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:13:32.935133 systemd-logind[1476]: New session 4 of user core. Jul 7 00:13:32.941403 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 00:13:33.245900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jul 7 00:13:33.253475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:13:33.359628 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 00:13:33.362931 (kubelet)[1819]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:13:33.402605 kubelet[1819]: E0707 00:13:33.402428 1819 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:13:33.404456 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:13:33.404667 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:13:33.608740 sshd[1809]: pam_unix(sshd:session): session closed for user core Jul 7 00:13:33.612905 systemd[1]: sshd@3-65.21.182.235:22-147.75.109.163:37602.service: Deactivated successfully. Jul 7 00:13:33.615133 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 00:13:33.616981 systemd-logind[1476]: Session 4 logged out. Waiting for processes to exit. Jul 7 00:13:33.618518 systemd-logind[1476]: Removed session 4. Jul 7 00:13:33.777475 systemd[1]: Started sshd@4-65.21.182.235:22-147.75.109.163:37612.service - OpenSSH per-connection server daemon (147.75.109.163:37612). Jul 7 00:13:34.763974 sshd[1831]: Accepted publickey for core from 147.75.109.163 port 37612 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:13:34.765100 sshd[1831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:13:34.769283 systemd-logind[1476]: New session 5 of user core. Jul 7 00:13:34.775315 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 00:13:35.297601 sudo[1834]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 00:13:35.297972 sudo[1834]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:13:35.313056 sudo[1834]: pam_unix(sudo:session): session closed for user root Jul 7 00:13:35.474394 sshd[1831]: pam_unix(sshd:session): session closed for user core Jul 7 00:13:35.478267 systemd[1]: sshd@4-65.21.182.235:22-147.75.109.163:37612.service: Deactivated successfully. Jul 7 00:13:35.479930 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 00:13:35.480673 systemd-logind[1476]: Session 5 logged out. Waiting for processes to exit. Jul 7 00:13:35.481766 systemd-logind[1476]: Removed session 5. Jul 7 00:13:35.648796 systemd[1]: Started sshd@5-65.21.182.235:22-147.75.109.163:37618.service - OpenSSH per-connection server daemon (147.75.109.163:37618). Jul 7 00:13:36.657855 sshd[1839]: Accepted publickey for core from 147.75.109.163 port 37618 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:13:36.659109 sshd[1839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:13:36.664934 systemd-logind[1476]: New session 6 of user core. Jul 7 00:13:36.671451 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 7 00:13:37.198362 sudo[1843]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 00:13:37.198837 sudo[1843]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:13:37.203814 sudo[1843]: pam_unix(sudo:session): session closed for user root Jul 7 00:13:37.211434 sudo[1842]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 7 00:13:37.211854 sudo[1842]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:13:37.233396 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 7 00:13:37.234922 auditctl[1846]: No rules Jul 7 00:13:37.235392 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:13:37.235640 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 7 00:13:37.237509 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 00:13:37.258118 augenrules[1864]: No rules Jul 7 00:13:37.258609 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 00:13:37.259978 sudo[1842]: pam_unix(sudo:session): session closed for user root Jul 7 00:13:37.424478 sshd[1839]: pam_unix(sshd:session): session closed for user core Jul 7 00:13:37.427504 systemd[1]: sshd@5-65.21.182.235:22-147.75.109.163:37618.service: Deactivated successfully. Jul 7 00:13:37.429313 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 00:13:37.430665 systemd-logind[1476]: Session 6 logged out. Waiting for processes to exit. Jul 7 00:13:37.432026 systemd-logind[1476]: Removed session 6. Jul 7 00:13:37.596457 systemd[1]: Started sshd@6-65.21.182.235:22-147.75.109.163:37504.service - OpenSSH per-connection server daemon (147.75.109.163:37504). Jul 7 00:13:38.599664 sshd[1872]: Accepted publickey for core from 147.75.109.163 port 37504 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:13:38.600948 sshd[1872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:13:38.605037 systemd-logind[1476]: New session 7 of user core. Jul 7 00:13:38.614370 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 00:13:39.131295 sudo[1875]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 00:13:39.131580 sudo[1875]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:13:39.387512 (dockerd)[1891]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 00:13:39.388133 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 00:13:39.631368 dockerd[1891]: time="2025-07-07T00:13:39.631162108Z" level=info msg="Starting up" Jul 7 00:13:39.718975 dockerd[1891]: time="2025-07-07T00:13:39.718871299Z" level=info msg="Loading containers: start." Jul 7 00:13:39.806190 kernel: Initializing XFRM netlink socket Jul 7 00:13:39.889246 systemd-networkd[1399]: docker0: Link UP Jul 7 00:13:39.903679 dockerd[1891]: time="2025-07-07T00:13:39.903617303Z" level=info msg="Loading containers: done." 
Jul 7 00:13:39.918667 dockerd[1891]: time="2025-07-07T00:13:39.918608707Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 00:13:39.918784 dockerd[1891]: time="2025-07-07T00:13:39.918718921Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 7 00:13:39.918875 dockerd[1891]: time="2025-07-07T00:13:39.918842997Z" level=info msg="Daemon has completed initialization" Jul 7 00:13:39.948328 dockerd[1891]: time="2025-07-07T00:13:39.948239830Z" level=info msg="API listen on /run/docker.sock" Jul 7 00:13:39.948744 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 00:13:41.086451 containerd[1507]: time="2025-07-07T00:13:41.086384654Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 7 00:13:41.693691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1892878985.mount: Deactivated successfully. Jul 7 00:13:42.591499 containerd[1507]: time="2025-07-07T00:13:42.591433570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:42.594342 containerd[1507]: time="2025-07-07T00:13:42.594290570Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799139" Jul 7 00:13:42.596022 containerd[1507]: time="2025-07-07T00:13:42.595675387Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:42.597796 containerd[1507]: time="2025-07-07T00:13:42.597765639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:42.598746 containerd[1507]: time="2025-07-07T00:13:42.598715459Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 1.512276403s" Jul 7 00:13:42.598787 containerd[1507]: time="2025-07-07T00:13:42.598750373Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 7 00:13:42.599780 containerd[1507]: time="2025-07-07T00:13:42.599743989Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 7 00:13:43.496273 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jul 7 00:13:43.504329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:13:43.586298 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 00:13:43.588645 (kubelet)[2094]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:13:43.623767 kubelet[2094]: E0707 00:13:43.623640 2094 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:13:43.625690 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:13:43.625811 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:13:43.858733 containerd[1507]: time="2025-07-07T00:13:43.858588066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:43.860112 containerd[1507]: time="2025-07-07T00:13:43.860062692Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783934" Jul 7 00:13:43.861230 containerd[1507]: time="2025-07-07T00:13:43.861194206Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:43.864381 containerd[1507]: time="2025-07-07T00:13:43.864318921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:43.865179 containerd[1507]: time="2025-07-07T00:13:43.865004124Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.265230989s" Jul 7 00:13:43.865179 containerd[1507]: time="2025-07-07T00:13:43.865031526Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 7 00:13:43.865506 containerd[1507]: time="2025-07-07T00:13:43.865465659Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 7 00:13:44.842989 containerd[1507]: time="2025-07-07T00:13:44.842933663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:44.844226 containerd[1507]: time="2025-07-07T00:13:44.844189377Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176938" Jul 7 00:13:44.845165 containerd[1507]: time="2025-07-07T00:13:44.845121253Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:44.847738 containerd[1507]: time="2025-07-07T00:13:44.847708430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 
00:13:44.848915 containerd[1507]: time="2025-07-07T00:13:44.848877779Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 983.382102ms" Jul 7 00:13:44.848959 containerd[1507]: time="2025-07-07T00:13:44.848920014Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 7 00:13:44.849371 containerd[1507]: time="2025-07-07T00:13:44.849344979Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 7 00:13:45.841425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1875707323.mount: Deactivated successfully. Jul 7 00:13:46.127477 containerd[1507]: time="2025-07-07T00:13:46.127357938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:46.128497 containerd[1507]: time="2025-07-07T00:13:46.128433417Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895391" Jul 7 00:13:46.129353 containerd[1507]: time="2025-07-07T00:13:46.129303239Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:46.130885 containerd[1507]: time="2025-07-07T00:13:46.130860133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:46.131640 containerd[1507]: time="2025-07-07T00:13:46.131334327Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.281962546s" Jul 7 00:13:46.131640 containerd[1507]: time="2025-07-07T00:13:46.131367860Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 7 00:13:46.131829 containerd[1507]: time="2025-07-07T00:13:46.131787178Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 00:13:46.643581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1442244348.mount: Deactivated successfully. 
Jul 7 00:13:47.416199 containerd[1507]: time="2025-07-07T00:13:47.416131093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:47.416986 containerd[1507]: time="2025-07-07T00:13:47.416947300Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335" Jul 7 00:13:47.417595 containerd[1507]: time="2025-07-07T00:13:47.417545147Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:47.421157 containerd[1507]: time="2025-07-07T00:13:47.419962599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:47.423110 containerd[1507]: time="2025-07-07T00:13:47.423087513Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.291268957s" Jul 7 00:13:47.423212 containerd[1507]: time="2025-07-07T00:13:47.423193682Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 7 00:13:47.424092 containerd[1507]: time="2025-07-07T00:13:47.424060331Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 00:13:47.872564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3272376118.mount: Deactivated successfully. 
Jul 7 00:13:47.878417 containerd[1507]: time="2025-07-07T00:13:47.878351719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:47.879129 containerd[1507]: time="2025-07-07T00:13:47.879082710Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Jul 7 00:13:47.880117 containerd[1507]: time="2025-07-07T00:13:47.880029416Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:47.882657 containerd[1507]: time="2025-07-07T00:13:47.882616869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:47.883406 containerd[1507]: time="2025-07-07T00:13:47.883255812Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 459.087098ms" Jul 7 00:13:47.883406 containerd[1507]: time="2025-07-07T00:13:47.883290669Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 00:13:47.883989 containerd[1507]: time="2025-07-07T00:13:47.883957937Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 7 00:13:48.431231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount516075934.mount: Deactivated successfully. Jul 7 00:13:49.765332 containerd[1507]: time="2025-07-07T00:13:49.765270827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:49.766215 containerd[1507]: time="2025-07-07T00:13:49.766168937Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551430" Jul 7 00:13:49.767088 containerd[1507]: time="2025-07-07T00:13:49.767051711Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:49.772078 containerd[1507]: time="2025-07-07T00:13:49.771979718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:49.773705 containerd[1507]: time="2025-07-07T00:13:49.773680781Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 1.889692737s" Jul 7 00:13:49.773759 containerd[1507]: time="2025-07-07T00:13:49.773705842Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 7 00:13:52.027464 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 00:13:52.034396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:13:52.056321 systemd[1]: Reloading requested from client PID 2251 ('systemctl') (unit session-7.scope)... Jul 7 00:13:52.056339 systemd[1]: Reloading... Jul 7 00:13:52.139179 zram_generator::config[2287]: No configuration found. Jul 7 00:13:52.228061 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:13:52.287513 systemd[1]: Reloading finished in 230 ms. Jul 7 00:13:52.330211 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 00:13:52.330276 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 00:13:52.330484 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:13:52.334362 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:13:52.416435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:13:52.420473 (kubelet)[2345]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:13:52.453935 kubelet[2345]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:13:52.453935 kubelet[2345]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 00:13:52.453935 kubelet[2345]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 00:13:52.454281 kubelet[2345]: I0707 00:13:52.453998 2345 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:13:52.678732 kubelet[2345]: I0707 00:13:52.678618 2345 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 00:13:52.678732 kubelet[2345]: I0707 00:13:52.678647 2345 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:13:52.678895 kubelet[2345]: I0707 00:13:52.678869 2345 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 00:13:52.710620 kubelet[2345]: I0707 00:13:52.710382 2345 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:13:52.713411 kubelet[2345]: E0707 00:13:52.713233 2345 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://65.21.182.235:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 65.21.182.235:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:13:52.721728 kubelet[2345]: E0707 00:13:52.721689 2345 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 00:13:52.721728 kubelet[2345]: I0707 00:13:52.721721 2345 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 00:13:52.725213 kubelet[2345]: I0707 00:13:52.725166 2345 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 00:13:52.727828 kubelet[2345]: I0707 00:13:52.727786 2345 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:13:52.727989 kubelet[2345]: I0707 00:13:52.727824 2345 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-4-f-11cbdd5b1a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:13:52.729377 kubelet[2345]: I0707 00:13:52.729345 2345 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:13:52.729377 kubelet[2345]: I0707 00:13:52.729363 2345 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 00:13:52.730420 kubelet[2345]: I0707 00:13:52.730389 2345 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:13:52.733475 kubelet[2345]: I0707 00:13:52.733433 2345 kubelet.go:446] "Attempting to sync node with API server" Jul 7 00:13:52.733475 kubelet[2345]: I0707 00:13:52.733478 2345 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:13:52.733629 kubelet[2345]: I0707 00:13:52.733514 2345 kubelet.go:352] "Adding apiserver pod source" Jul 7 00:13:52.733629 kubelet[2345]: I0707 00:13:52.733524 2345 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:13:52.739809 kubelet[2345]: I0707 00:13:52.739725 2345 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 00:13:52.743751 kubelet[2345]: I0707 00:13:52.743656 2345 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 00:13:52.744557 kubelet[2345]: W0707 00:13:52.744307 2345 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 7 00:13:52.745052 kubelet[2345]: I0707 00:13:52.745028 2345 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 00:13:52.745089 kubelet[2345]: I0707 00:13:52.745072 2345 server.go:1287] "Started kubelet" Jul 7 00:13:52.745331 kubelet[2345]: W0707 00:13:52.745280 2345 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://65.21.182.235:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-f-11cbdd5b1a&limit=500&resourceVersion=0": dial tcp 65.21.182.235:6443: connect: connection refused Jul 7 00:13:52.745383 kubelet[2345]: E0707 00:13:52.745353 2345 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://65.21.182.235:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-f-11cbdd5b1a&limit=500&resourceVersion=0\": dial tcp 65.21.182.235:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:13:52.746766 kubelet[2345]: W0707 00:13:52.746576 2345 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://65.21.182.235:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 65.21.182.235:6443: connect: connection refused Jul 7 00:13:52.746766 kubelet[2345]: E0707 00:13:52.746624 2345 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://65.21.182.235:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 65.21.182.235:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:13:52.746766 kubelet[2345]: I0707 00:13:52.746658 2345 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:13:52.747730 kubelet[2345]: I0707 00:13:52.747407 2345 server.go:479] "Adding debug handlers to kubelet server" Jul 7 00:13:52.751260 kubelet[2345]: I0707 00:13:52.751231 2345 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:13:52.751770 kubelet[2345]: I0707 00:13:52.751716 2345 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:13:52.751937 kubelet[2345]: I0707 00:13:52.751881 2345 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:13:52.754716 kubelet[2345]: E0707 00:13:52.753074 2345 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://65.21.182.235:6443/api/v1/namespaces/default/events\": dial tcp 65.21.182.235:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-4-f-11cbdd5b1a.184fcfc6c8850b1e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-4-f-11cbdd5b1a,UID:ci-4081-3-4-f-11cbdd5b1a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-4-f-11cbdd5b1a,},FirstTimestamp:2025-07-07 00:13:52.745048862 +0000 UTC m=+0.321666352,LastTimestamp:2025-07-07 00:13:52.745048862 +0000 UTC m=+0.321666352,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-4-f-11cbdd5b1a,}" Jul 7 00:13:52.754716 kubelet[2345]: I0707 00:13:52.754458 2345 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:13:52.758209 kubelet[2345]: I0707 00:13:52.758009 2345 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 00:13:52.758209 kubelet[2345]: E0707 00:13:52.758202 2345 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-f-11cbdd5b1a\" not found" Jul 7 00:13:52.759490 kubelet[2345]: I0707 00:13:52.758900 2345 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 00:13:52.759490 kubelet[2345]: I0707 00:13:52.758947 2345 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:13:52.759490 kubelet[2345]: E0707 00:13:52.759013 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.21.182.235:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-f-11cbdd5b1a?timeout=10s\": dial tcp 65.21.182.235:6443: connect: connection refused" interval="200ms" Jul 7 00:13:52.759490 kubelet[2345]: W0707 00:13:52.759247 2345 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://65.21.182.235:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 65.21.182.235:6443: connect: connection refused Jul 7 00:13:52.759490 kubelet[2345]: E0707 00:13:52.759271 2345 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://65.21.182.235:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 65.21.182.235:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:13:52.760806 kubelet[2345]: I0707 00:13:52.760790 2345 factory.go:221] Registration of the systemd container factory successfully Jul 7 00:13:52.762031 kubelet[2345]: I0707 00:13:52.761775 2345 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:13:52.764463 kubelet[2345]: I0707 00:13:52.764440 2345 factory.go:221] Registration of the containerd container factory successfully Jul 7 00:13:52.767100 kubelet[2345]: E0707 00:13:52.767085 2345 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:13:52.774993 kubelet[2345]: I0707 00:13:52.774952 2345 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 00:13:52.776764 kubelet[2345]: I0707 00:13:52.776100 2345 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 00:13:52.776764 kubelet[2345]: I0707 00:13:52.776122 2345 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 00:13:52.776764 kubelet[2345]: I0707 00:13:52.776153 2345 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 00:13:52.776764 kubelet[2345]: I0707 00:13:52.776159 2345 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 00:13:52.776764 kubelet[2345]: E0707 00:13:52.776203 2345 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:13:52.782902 kubelet[2345]: W0707 00:13:52.782797 2345 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://65.21.182.235:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 65.21.182.235:6443: connect: connection refused Jul 7 00:13:52.782902 kubelet[2345]: E0707 00:13:52.782850 2345 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://65.21.182.235:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 65.21.182.235:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:13:52.795447 kubelet[2345]: I0707 00:13:52.795338 2345 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 00:13:52.795447 kubelet[2345]: I0707 00:13:52.795350 2345 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 00:13:52.795447 kubelet[2345]: I0707 00:13:52.795385 2345 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:13:52.797461 kubelet[2345]: I0707 00:13:52.797279 2345 policy_none.go:49] "None policy: Start" Jul 7 00:13:52.797461 kubelet[2345]: I0707 00:13:52.797293 2345 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 00:13:52.797461 kubelet[2345]: I0707 00:13:52.797301 2345 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:13:52.802202 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 00:13:52.811515 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 00:13:52.822367 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 00:13:52.824105 kubelet[2345]: I0707 00:13:52.823620 2345 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 00:13:52.824105 kubelet[2345]: I0707 00:13:52.823785 2345 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:13:52.824105 kubelet[2345]: I0707 00:13:52.823795 2345 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:13:52.825640 kubelet[2345]: E0707 00:13:52.825476 2345 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 7 00:13:52.825640 kubelet[2345]: I0707 00:13:52.825514 2345 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:13:52.825640 kubelet[2345]: E0707 00:13:52.825561 2345 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-4-f-11cbdd5b1a\" not found" Jul 7 00:13:52.887685 systemd[1]: Created slice kubepods-burstable-podac5a9b223cda65d790a781a52e7c5776.slice - libcontainer container kubepods-burstable-podac5a9b223cda65d790a781a52e7c5776.slice. 
Jul 7 00:13:52.896329 kubelet[2345]: E0707 00:13:52.896286 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-f-11cbdd5b1a\" not found" node="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:52.898520 systemd[1]: Created slice kubepods-burstable-pod9b0e163aa2dae6a84d6e3c88c60b82a8.slice - libcontainer container kubepods-burstable-pod9b0e163aa2dae6a84d6e3c88c60b82a8.slice. Jul 7 00:13:52.906780 kubelet[2345]: E0707 00:13:52.906592 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-f-11cbdd5b1a\" not found" node="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:52.909959 systemd[1]: Created slice kubepods-burstable-pod399e014781f9ac11876aa3fae904887e.slice - libcontainer container kubepods-burstable-pod399e014781f9ac11876aa3fae904887e.slice. Jul 7 00:13:52.911729 kubelet[2345]: E0707 00:13:52.911696 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-f-11cbdd5b1a\" not found" node="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:52.926284 kubelet[2345]: I0707 00:13:52.926234 2345 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:52.926673 kubelet[2345]: E0707 00:13:52.926640 2345 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://65.21.182.235:6443/api/v1/nodes\": dial tcp 65.21.182.235:6443: connect: connection refused" node="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:52.959582 kubelet[2345]: E0707 00:13:52.959485 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.21.182.235:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-f-11cbdd5b1a?timeout=10s\": dial tcp 65.21.182.235:6443: connect: connection refused" interval="400ms" Jul 7 00:13:52.960580 kubelet[2345]: I0707 00:13:52.960204 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac5a9b223cda65d790a781a52e7c5776-ca-certs\") pod \"kube-apiserver-ci-4081-3-4-f-11cbdd5b1a\" (UID: \"ac5a9b223cda65d790a781a52e7c5776\") " pod="kube-system/kube-apiserver-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:52.960580 kubelet[2345]: I0707 00:13:52.960246 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac5a9b223cda65d790a781a52e7c5776-k8s-certs\") pod \"kube-apiserver-ci-4081-3-4-f-11cbdd5b1a\" (UID: \"ac5a9b223cda65d790a781a52e7c5776\") " pod="kube-system/kube-apiserver-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:52.960580 kubelet[2345]: I0707 00:13:52.960281 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac5a9b223cda65d790a781a52e7c5776-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-4-f-11cbdd5b1a\" (UID: \"ac5a9b223cda65d790a781a52e7c5776\") " pod="kube-system/kube-apiserver-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:52.960580 kubelet[2345]: I0707 00:13:52.960312 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b0e163aa2dae6a84d6e3c88c60b82a8-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-4-f-11cbdd5b1a\" (UID: \"9b0e163aa2dae6a84d6e3c88c60b82a8\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:52.960580 kubelet[2345]: I0707 00:13:52.960340 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/399e014781f9ac11876aa3fae904887e-kubeconfig\") pod \"kube-scheduler-ci-4081-3-4-f-11cbdd5b1a\" (UID: \"399e014781f9ac11876aa3fae904887e\") " pod="kube-system/kube-scheduler-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:52.960866 kubelet[2345]: I0707 00:13:52.960372 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b0e163aa2dae6a84d6e3c88c60b82a8-ca-certs\") pod \"kube-controller-manager-ci-4081-3-4-f-11cbdd5b1a\" (UID: \"9b0e163aa2dae6a84d6e3c88c60b82a8\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:52.960866 kubelet[2345]: I0707 00:13:52.960398 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b0e163aa2dae6a84d6e3c88c60b82a8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-4-f-11cbdd5b1a\" (UID: \"9b0e163aa2dae6a84d6e3c88c60b82a8\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:52.960866 kubelet[2345]: I0707 00:13:52.960424 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b0e163aa2dae6a84d6e3c88c60b82a8-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-4-f-11cbdd5b1a\" (UID: \"9b0e163aa2dae6a84d6e3c88c60b82a8\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:52.960866 kubelet[2345]: I0707 00:13:52.960463 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b0e163aa2dae6a84d6e3c88c60b82a8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-4-f-11cbdd5b1a\" (UID: \"9b0e163aa2dae6a84d6e3c88c60b82a8\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:53.129111 kubelet[2345]: I0707 00:13:53.129049 2345 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:53.129510 kubelet[2345]: E0707 00:13:53.129459 2345 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://65.21.182.235:6443/api/v1/nodes\": dial tcp 65.21.182.235:6443: connect: connection refused" node="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:53.198032 containerd[1507]: time="2025-07-07T00:13:53.197966208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-4-f-11cbdd5b1a,Uid:ac5a9b223cda65d790a781a52e7c5776,Namespace:kube-system,Attempt:0,}" Jul 7 00:13:53.213514 containerd[1507]: time="2025-07-07T00:13:53.212998756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-4-f-11cbdd5b1a,Uid:9b0e163aa2dae6a84d6e3c88c60b82a8,Namespace:kube-system,Attempt:0,}" Jul 7 00:13:53.214057 containerd[1507]: time="2025-07-07T00:13:53.213630480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-4-f-11cbdd5b1a,Uid:399e014781f9ac11876aa3fae904887e,Namespace:kube-system,Attempt:0,}" Jul 7 00:13:53.360428 kubelet[2345]: E0707 00:13:53.360346 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://65.21.182.235:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-f-11cbdd5b1a?timeout=10s\": dial tcp 65.21.182.235:6443: connect: connection refused" interval="800ms" Jul 7 00:13:53.531761 kubelet[2345]: I0707 00:13:53.531714 2345 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:53.532249 kubelet[2345]: E0707 00:13:53.532217 2345 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://65.21.182.235:6443/api/v1/nodes\": dial tcp 65.21.182.235:6443: connect: connection refused" node="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:53.653627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1403425334.mount: Deactivated successfully. Jul 7 00:13:53.658906 containerd[1507]: time="2025-07-07T00:13:53.658825990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:13:53.660574 containerd[1507]: time="2025-07-07T00:13:53.660510748Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:13:53.662391 containerd[1507]: time="2025-07-07T00:13:53.662334890Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Jul 7 00:13:53.663045 containerd[1507]: time="2025-07-07T00:13:53.662999968Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 00:13:53.665172 containerd[1507]: time="2025-07-07T00:13:53.663898346Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:13:53.665473 containerd[1507]: time="2025-07-07T00:13:53.665415634Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 00:13:53.667913 containerd[1507]: time="2025-07-07T00:13:53.667877858Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:13:53.670735 containerd[1507]: time="2025-07-07T00:13:53.670686140Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 456.861548ms" Jul 7 00:13:53.672230 containerd[1507]: time="2025-07-07T00:13:53.672196767Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 474.143023ms" Jul 7 00:13:53.672358 containerd[1507]: time="2025-07-07T00:13:53.672319332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" 
value:\"pinned\"}" Jul 7 00:13:53.673813 containerd[1507]: time="2025-07-07T00:13:53.673783512Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 459.747237ms" Jul 7 00:13:53.728650 kubelet[2345]: W0707 00:13:53.728598 2345 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://65.21.182.235:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 65.21.182.235:6443: connect: connection refused Jul 7 00:13:53.729516 kubelet[2345]: E0707 00:13:53.729459 2345 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://65.21.182.235:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 65.21.182.235:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:13:53.782614 containerd[1507]: time="2025-07-07T00:13:53.782314239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:13:53.782614 containerd[1507]: time="2025-07-07T00:13:53.782365554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:13:53.782614 containerd[1507]: time="2025-07-07T00:13:53.782378605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:13:53.782614 containerd[1507]: time="2025-07-07T00:13:53.782450276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:13:53.784958 containerd[1507]: time="2025-07-07T00:13:53.784876218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:13:53.784958 containerd[1507]: time="2025-07-07T00:13:53.784933925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:13:53.785113 containerd[1507]: time="2025-07-07T00:13:53.785072596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:13:53.785299 containerd[1507]: time="2025-07-07T00:13:53.785239144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:13:53.791215 containerd[1507]: time="2025-07-07T00:13:53.790812955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:13:53.791215 containerd[1507]: time="2025-07-07T00:13:53.790882621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:13:53.791215 containerd[1507]: time="2025-07-07T00:13:53.790897916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:13:53.791215 containerd[1507]: time="2025-07-07T00:13:53.790979262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:13:53.806318 systemd[1]: Started cri-containerd-9a6c69518be24bba30563b918c1d564e0e1698c445010b5e10bac9818586af6e.scope - libcontainer container 9a6c69518be24bba30563b918c1d564e0e1698c445010b5e10bac9818586af6e. Jul 7 00:13:53.809750 systemd[1]: Started cri-containerd-37b77814aa021b29d9d51522e416a07a90bb8a261e24b2d383e7095706acdf80.scope - libcontainer container 37b77814aa021b29d9d51522e416a07a90bb8a261e24b2d383e7095706acdf80. Jul 7 00:13:53.812511 systemd[1]: Started cri-containerd-553c03aa015ce238d520f4e499c02cbc002ad6897d415c71cf6931b82d036739.scope - libcontainer container 553c03aa015ce238d520f4e499c02cbc002ad6897d415c71cf6931b82d036739. Jul 7 00:13:53.867576 containerd[1507]: time="2025-07-07T00:13:53.867385511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-4-f-11cbdd5b1a,Uid:ac5a9b223cda65d790a781a52e7c5776,Namespace:kube-system,Attempt:0,} returns sandbox id \"37b77814aa021b29d9d51522e416a07a90bb8a261e24b2d383e7095706acdf80\"" Jul 7 00:13:53.874998 containerd[1507]: time="2025-07-07T00:13:53.874953015Z" level=info msg="CreateContainer within sandbox \"37b77814aa021b29d9d51522e416a07a90bb8a261e24b2d383e7095706acdf80\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 00:13:53.892158 containerd[1507]: time="2025-07-07T00:13:53.892040487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-4-f-11cbdd5b1a,Uid:399e014781f9ac11876aa3fae904887e,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a6c69518be24bba30563b918c1d564e0e1698c445010b5e10bac9818586af6e\"" Jul 7 00:13:53.895416 containerd[1507]: time="2025-07-07T00:13:53.895368405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-4-f-11cbdd5b1a,Uid:9b0e163aa2dae6a84d6e3c88c60b82a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"553c03aa015ce238d520f4e499c02cbc002ad6897d415c71cf6931b82d036739\"" Jul 7 00:13:53.897188 containerd[1507]: time="2025-07-07T00:13:53.897168495Z" level=info msg="CreateContainer within sandbox \"9a6c69518be24bba30563b918c1d564e0e1698c445010b5e10bac9818586af6e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 00:13:53.899339 containerd[1507]: time="2025-07-07T00:13:53.899188012Z" level=info msg="CreateContainer within sandbox \"37b77814aa021b29d9d51522e416a07a90bb8a261e24b2d383e7095706acdf80\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f0b703aa813a85d117dda933a3e662da7c43e45d3bdfd0ad1d9b11973d096ccf\"" Jul 7 00:13:53.899508 containerd[1507]: time="2025-07-07T00:13:53.899492378Z" level=info msg="CreateContainer within sandbox \"553c03aa015ce238d520f4e499c02cbc002ad6897d415c71cf6931b82d036739\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 00:13:53.899867 containerd[1507]: time="2025-07-07T00:13:53.899849515Z" level=info msg="StartContainer for \"f0b703aa813a85d117dda933a3e662da7c43e45d3bdfd0ad1d9b11973d096ccf\"" Jul 7 00:13:53.915081 containerd[1507]: time="2025-07-07T00:13:53.914978944Z" level=info msg="CreateContainer within sandbox \"9a6c69518be24bba30563b918c1d564e0e1698c445010b5e10bac9818586af6e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"04f5e6477f39559542ed5a1f3e0ec00e8ef671017f566d48049e2713b0d3e343\"" Jul 7 00:13:53.915743 containerd[1507]: time="2025-07-07T00:13:53.915704552Z" level=info msg="StartContainer for \"04f5e6477f39559542ed5a1f3e0ec00e8ef671017f566d48049e2713b0d3e343\"" Jul 7 00:13:53.918379 containerd[1507]: time="2025-07-07T00:13:53.918321545Z" level=info msg="CreateContainer within sandbox \"553c03aa015ce238d520f4e499c02cbc002ad6897d415c71cf6931b82d036739\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0f28d8019a311914bedc047989794d17521c09162fa0be6a56573c655eb86860\"" Jul 7 00:13:53.919204 containerd[1507]: time="2025-07-07T00:13:53.919056399Z" level=info msg="StartContainer for \"0f28d8019a311914bedc047989794d17521c09162fa0be6a56573c655eb86860\"" Jul 7 00:13:53.927855 systemd[1]: Started cri-containerd-f0b703aa813a85d117dda933a3e662da7c43e45d3bdfd0ad1d9b11973d096ccf.scope - libcontainer container f0b703aa813a85d117dda933a3e662da7c43e45d3bdfd0ad1d9b11973d096ccf. Jul 7 00:13:53.930067 kubelet[2345]: W0707 00:13:53.929921 2345 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://65.21.182.235:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-f-11cbdd5b1a&limit=500&resourceVersion=0": dial tcp 65.21.182.235:6443: connect: connection refused Jul 7 00:13:53.930067 kubelet[2345]: E0707 00:13:53.929993 2345 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://65.21.182.235:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-f-11cbdd5b1a&limit=500&resourceVersion=0\": dial tcp 65.21.182.235:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:13:53.952580 systemd[1]: Started cri-containerd-0f28d8019a311914bedc047989794d17521c09162fa0be6a56573c655eb86860.scope - libcontainer container 0f28d8019a311914bedc047989794d17521c09162fa0be6a56573c655eb86860. Jul 7 00:13:53.955791 systemd[1]: Started cri-containerd-04f5e6477f39559542ed5a1f3e0ec00e8ef671017f566d48049e2713b0d3e343.scope - libcontainer container 04f5e6477f39559542ed5a1f3e0ec00e8ef671017f566d48049e2713b0d3e343. 
Jul 7 00:13:53.990448 kubelet[2345]: W0707 00:13:53.989569 2345 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://65.21.182.235:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 65.21.182.235:6443: connect: connection refused Jul 7 00:13:53.990448 kubelet[2345]: E0707 00:13:53.989628 2345 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://65.21.182.235:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 65.21.182.235:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:13:54.006067 containerd[1507]: time="2025-07-07T00:13:54.006021927Z" level=info msg="StartContainer for \"f0b703aa813a85d117dda933a3e662da7c43e45d3bdfd0ad1d9b11973d096ccf\" returns successfully" Jul 7 00:13:54.006252 containerd[1507]: time="2025-07-07T00:13:54.006125391Z" level=info msg="StartContainer for \"04f5e6477f39559542ed5a1f3e0ec00e8ef671017f566d48049e2713b0d3e343\" returns successfully" Jul 7 00:13:54.022962 containerd[1507]: time="2025-07-07T00:13:54.022930967Z" level=info msg="StartContainer for \"0f28d8019a311914bedc047989794d17521c09162fa0be6a56573c655eb86860\" returns successfully" Jul 7 00:13:54.162033 kubelet[2345]: E0707 00:13:54.161688 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.21.182.235:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-f-11cbdd5b1a?timeout=10s\": dial tcp 65.21.182.235:6443: connect: connection refused" interval="1.6s" Jul 7 00:13:54.334491 kubelet[2345]: I0707 00:13:54.334461 2345 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:54.802994 kubelet[2345]: E0707 00:13:54.802952 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-f-11cbdd5b1a\" not found" node="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:54.805593 kubelet[2345]: E0707 00:13:54.805518 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-f-11cbdd5b1a\" not found" node="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:54.806325 kubelet[2345]: E0707 00:13:54.806301 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-f-11cbdd5b1a\" not found" node="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:55.453274 kubelet[2345]: I0707 00:13:55.453226 2345 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:55.453274 kubelet[2345]: E0707 00:13:55.453257 2345 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-4-f-11cbdd5b1a\": node \"ci-4081-3-4-f-11cbdd5b1a\" not found" Jul 7 00:13:55.467527 kubelet[2345]: E0707 00:13:55.467491 2345 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-f-11cbdd5b1a\" not found" Jul 7 00:13:55.569194 kubelet[2345]: E0707 00:13:55.569138 2345 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-f-11cbdd5b1a\" not found" Jul 7 00:13:55.670040 kubelet[2345]: E0707 00:13:55.669977 2345 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-f-11cbdd5b1a\" not found" Jul 7 00:13:55.770609 kubelet[2345]: E0707 00:13:55.770548 2345 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-f-11cbdd5b1a\" not found" Jul 7 00:13:55.809339 kubelet[2345]: E0707 00:13:55.809047 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-f-11cbdd5b1a\" not found" node="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:55.809339 kubelet[2345]: E0707 00:13:55.809171 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-f-11cbdd5b1a\" not found" node="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:55.810265 kubelet[2345]: E0707 00:13:55.810231 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-f-11cbdd5b1a\" not found" node="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:55.870958 kubelet[2345]: E0707 00:13:55.870902 2345 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-f-11cbdd5b1a\" not found" Jul 7 00:13:55.971525 kubelet[2345]: E0707 00:13:55.971469 2345 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-f-11cbdd5b1a\" not found" Jul 7 00:13:56.072257 kubelet[2345]: E0707 00:13:56.072091 2345 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-f-11cbdd5b1a\" not found" Jul 7 00:13:56.173070 kubelet[2345]: E0707 00:13:56.173018 2345 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-f-11cbdd5b1a\" not found" Jul 7 00:13:56.273904 kubelet[2345]: E0707 00:13:56.273857 2345 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-f-11cbdd5b1a\" not found" Jul 7 00:13:56.374734 kubelet[2345]: E0707 00:13:56.374525 2345 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-f-11cbdd5b1a\" not found" Jul 7 00:13:56.475302 kubelet[2345]: E0707 00:13:56.475226 2345 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-f-11cbdd5b1a\" not found" Jul 7 00:13:56.575940 kubelet[2345]: E0707 00:13:56.575900 2345 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-f-11cbdd5b1a\" not found" Jul 7 00:13:56.676599 kubelet[2345]: E0707 00:13:56.676484 2345 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-f-11cbdd5b1a\" not found" Jul 7 00:13:56.758805 kubelet[2345]: I0707 00:13:56.758762 2345 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:56.770037 kubelet[2345]: I0707 00:13:56.770002 2345 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:56.780868 kubelet[2345]: I0707 00:13:56.780759 2345 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:56.808742 kubelet[2345]: I0707 00:13:56.808701 2345 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:56.813179 kubelet[2345]: E0707 00:13:56.813090 2345 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-4-f-11cbdd5b1a\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:57.366187 systemd[1]: Reloading requested 
from client PID 2621 ('systemctl') (unit session-7.scope)... Jul 7 00:13:57.366204 systemd[1]: Reloading... Jul 7 00:13:57.423175 zram_generator::config[2661]: No configuration found. Jul 7 00:13:57.511066 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:13:57.583091 systemd[1]: Reloading finished in 216 ms. Jul 7 00:13:57.616178 kubelet[2345]: I0707 00:13:57.616115 2345 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:13:57.616289 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:13:57.640241 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 00:13:57.640411 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:13:57.644401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:13:57.769124 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:13:57.772268 (kubelet)[2712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:13:57.811474 kubelet[2712]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:13:57.811850 kubelet[2712]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 00:13:57.811889 kubelet[2712]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:13:57.812110 kubelet[2712]: I0707 00:13:57.812084 2712 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:13:57.819025 kubelet[2712]: I0707 00:13:57.818984 2712 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 00:13:57.819025 kubelet[2712]: I0707 00:13:57.819005 2712 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:13:57.819231 kubelet[2712]: I0707 00:13:57.819210 2712 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 00:13:57.821102 kubelet[2712]: I0707 00:13:57.821078 2712 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 7 00:13:57.822983 kubelet[2712]: I0707 00:13:57.822695 2712 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:13:57.827230 kubelet[2712]: E0707 00:13:57.827181 2712 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 00:13:57.827230 kubelet[2712]: I0707 00:13:57.827221 2712 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
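The restarted kubelet above logs that client rotation is on and that it loads its pair from /var/lib/kubelet/pki/kubelet-client-current.pem. A minimal Go sketch (stdlib only; illustrative, not tooling from this host) that reads that bundle and prints the certificate's validity window:

// certcheck.go - inspect the kubelet's rotated client certificate.
// The path comes from the log above; everything else is illustrative.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// kubelet-client-current.pem bundles the client cert and key;
	// we only decode the first CERTIFICATE block here.
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		log.Fatal(err)
	}
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("subject=%s notAfter=%s expiresIn=%s\n",
			cert.Subject, cert.NotAfter.Format(time.RFC3339),
			time.Until(cert.NotAfter).Round(time.Minute))
		return
	}
	log.Fatal("no CERTIFICATE block found")
}

Run on the node itself, this shows how long the rotated credential has left before the kubelet's background bootstrap must renew it.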
Jul 7 00:13:57.830095 kubelet[2712]: I0707 00:13:57.830072 2712 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 7 00:13:57.830372 kubelet[2712]: I0707 00:13:57.830337 2712 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:13:57.830572 kubelet[2712]: I0707 00:13:57.830372 2712 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-4-f-11cbdd5b1a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:13:57.830656 kubelet[2712]: I0707 00:13:57.830580 2712 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:13:57.830656 kubelet[2712]: I0707 00:13:57.830613 2712 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 00:13:57.830697 kubelet[2712]: I0707 00:13:57.830657 2712 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:13:57.832269 kubelet[2712]: I0707 00:13:57.831740 2712 kubelet.go:446] "Attempting to sync node with API server" Jul 7 00:13:57.832269 kubelet[2712]: I0707 00:13:57.831771 2712 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:13:57.832269 kubelet[2712]: I0707 00:13:57.831815 2712 kubelet.go:352] "Adding apiserver pod source" Jul 7 00:13:57.832269 kubelet[2712]: I0707 00:13:57.831827 2712 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:13:57.834679 kubelet[2712]: I0707 00:13:57.834648 2712 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 00:13:57.835162 kubelet[2712]: I0707 00:13:57.834992 2712 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 00:13:57.835562 kubelet[2712]: I0707 00:13:57.835541 2712 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 00:13:57.835599 kubelet[2712]: I0707 00:13:57.835584 2712 server.go:1287] "Started kubelet" Jul 7 00:13:57.847431 kubelet[2712]: I0707 00:13:57.847405 2712 
server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:13:57.854166 kubelet[2712]: I0707 00:13:57.852813 2712 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:13:57.854166 kubelet[2712]: I0707 00:13:57.853133 2712 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:13:57.854166 kubelet[2712]: I0707 00:13:57.853794 2712 server.go:479] "Adding debug handlers to kubelet server" Jul 7 00:13:57.856598 kubelet[2712]: E0707 00:13:57.856584 2712 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:13:57.856819 kubelet[2712]: I0707 00:13:57.856800 2712 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:13:57.857511 kubelet[2712]: I0707 00:13:57.857319 2712 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:13:57.860394 kubelet[2712]: I0707 00:13:57.859843 2712 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 00:13:57.860394 kubelet[2712]: I0707 00:13:57.859945 2712 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 00:13:57.860394 kubelet[2712]: I0707 00:13:57.860045 2712 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:13:57.861460 kubelet[2712]: I0707 00:13:57.861443 2712 factory.go:221] Registration of the systemd container factory successfully Jul 7 00:13:57.861564 kubelet[2712]: I0707 00:13:57.861541 2712 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:13:57.862710 kubelet[2712]: I0707 00:13:57.862686 2712 factory.go:221] Registration of the containerd container factory successfully Jul 7 00:13:57.868776 kubelet[2712]: I0707 00:13:57.867911 2712 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 00:13:57.869298 kubelet[2712]: I0707 00:13:57.869275 2712 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 00:13:57.869337 kubelet[2712]: I0707 00:13:57.869301 2712 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 00:13:57.869337 kubelet[2712]: I0707 00:13:57.869315 2712 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
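Alongside the main server on 0.0.0.0:10250, the kubelet above starts the podresources gRPC API on unix:/var/lib/kubelet/pod-resources/kubelet.sock with rate limiting (qps=100, burstTokens=10). A minimal client sketch, assuming the upstream k8s.io/kubelet API package; the wiring is illustrative, not this host's tooling:

// podres.go - list pod resource assignments over the kubelet's
// podresources socket shown in the log above.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	podresourcesv1 "k8s.io/kubelet/pkg/apis/podresources/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// The socket is node-local and unauthenticated; the kubelet
	// rate limits it (qps=100, burstTokens=10 per the log).
	conn, err := grpc.DialContext(ctx, "unix:///var/lib/kubelet/pod-resources/kubelet.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := podresourcesv1.NewPodResourcesListerClient(conn)
	resp, err := client.List(ctx, &podresourcesv1.ListPodResourcesRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, pod := range resp.GetPodResources() {
		fmt.Printf("%s/%s: %d containers\n",
			pod.GetNamespace(), pod.GetName(), len(pod.GetContainers()))
	}
}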
Jul 7 00:13:57.869337 kubelet[2712]: I0707 00:13:57.869322 2712 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 00:13:57.870651 kubelet[2712]: E0707 00:13:57.870619 2712 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:13:57.901628 kubelet[2712]: I0707 00:13:57.901609 2712 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 00:13:57.901628 kubelet[2712]: I0707 00:13:57.901621 2712 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 00:13:57.901628 kubelet[2712]: I0707 00:13:57.901634 2712 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:13:57.901785 kubelet[2712]: I0707 00:13:57.901742 2712 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 00:13:57.901785 kubelet[2712]: I0707 00:13:57.901751 2712 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 00:13:57.901785 kubelet[2712]: I0707 00:13:57.901764 2712 policy_none.go:49] "None policy: Start" Jul 7 00:13:57.901785 kubelet[2712]: I0707 00:13:57.901771 2712 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 00:13:57.901785 kubelet[2712]: I0707 00:13:57.901778 2712 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:13:57.901865 kubelet[2712]: I0707 00:13:57.901848 2712 state_mem.go:75] "Updated machine memory state" Jul 7 00:13:57.905422 kubelet[2712]: I0707 00:13:57.904901 2712 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 00:13:57.905422 kubelet[2712]: I0707 00:13:57.905011 2712 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:13:57.905422 kubelet[2712]: I0707 00:13:57.905019 2712 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:13:57.905422 kubelet[2712]: I0707 00:13:57.905209 2712 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:13:57.907055 kubelet[2712]: E0707 00:13:57.906862 2712 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 00:13:57.971360 kubelet[2712]: I0707 00:13:57.971314 2712 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:57.972335 kubelet[2712]: I0707 00:13:57.972299 2712 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:57.972604 kubelet[2712]: I0707 00:13:57.972589 2712 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:57.977520 kubelet[2712]: E0707 00:13:57.977486 2712 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-4-f-11cbdd5b1a\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:57.978771 kubelet[2712]: E0707 00:13:57.978657 2712 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-4-f-11cbdd5b1a\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:57.978771 kubelet[2712]: E0707 00:13:57.978766 2712 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-4-f-11cbdd5b1a\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:58.012496 kubelet[2712]: I0707 00:13:58.012435 2712 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:58.019702 kubelet[2712]: I0707 00:13:58.019661 2712 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:58.019812 kubelet[2712]: I0707 00:13:58.019728 2712 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:58.061571 kubelet[2712]: I0707 00:13:58.061499 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b0e163aa2dae6a84d6e3c88c60b82a8-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-4-f-11cbdd5b1a\" (UID: \"9b0e163aa2dae6a84d6e3c88c60b82a8\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:58.061571 kubelet[2712]: I0707 00:13:58.061538 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b0e163aa2dae6a84d6e3c88c60b82a8-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-4-f-11cbdd5b1a\" (UID: \"9b0e163aa2dae6a84d6e3c88c60b82a8\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:58.061571 kubelet[2712]: I0707 00:13:58.061556 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b0e163aa2dae6a84d6e3c88c60b82a8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-4-f-11cbdd5b1a\" (UID: \"9b0e163aa2dae6a84d6e3c88c60b82a8\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:58.061571 kubelet[2712]: I0707 00:13:58.061572 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac5a9b223cda65d790a781a52e7c5776-ca-certs\") pod \"kube-apiserver-ci-4081-3-4-f-11cbdd5b1a\" (UID: \"ac5a9b223cda65d790a781a52e7c5776\") " pod="kube-system/kube-apiserver-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:58.061571 kubelet[2712]: 
I0707 00:13:58.061585 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac5a9b223cda65d790a781a52e7c5776-k8s-certs\") pod \"kube-apiserver-ci-4081-3-4-f-11cbdd5b1a\" (UID: \"ac5a9b223cda65d790a781a52e7c5776\") " pod="kube-system/kube-apiserver-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:58.061779 kubelet[2712]: I0707 00:13:58.061597 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac5a9b223cda65d790a781a52e7c5776-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-4-f-11cbdd5b1a\" (UID: \"ac5a9b223cda65d790a781a52e7c5776\") " pod="kube-system/kube-apiserver-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:58.061779 kubelet[2712]: I0707 00:13:58.061609 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b0e163aa2dae6a84d6e3c88c60b82a8-ca-certs\") pod \"kube-controller-manager-ci-4081-3-4-f-11cbdd5b1a\" (UID: \"9b0e163aa2dae6a84d6e3c88c60b82a8\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:58.061779 kubelet[2712]: I0707 00:13:58.061622 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b0e163aa2dae6a84d6e3c88c60b82a8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-4-f-11cbdd5b1a\" (UID: \"9b0e163aa2dae6a84d6e3c88c60b82a8\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:58.061779 kubelet[2712]: I0707 00:13:58.061635 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/399e014781f9ac11876aa3fae904887e-kubeconfig\") pod \"kube-scheduler-ci-4081-3-4-f-11cbdd5b1a\" (UID: \"399e014781f9ac11876aa3fae904887e\") " pod="kube-system/kube-scheduler-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:58.834522 kubelet[2712]: I0707 00:13:58.834289 2712 apiserver.go:52] "Watching apiserver" Jul 7 00:13:58.860129 kubelet[2712]: I0707 00:13:58.860045 2712 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 00:13:58.888230 kubelet[2712]: I0707 00:13:58.886417 2712 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:58.893229 kubelet[2712]: E0707 00:13:58.893196 2712 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-4-f-11cbdd5b1a\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:13:58.924289 kubelet[2712]: I0707 00:13:58.924219 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-4-f-11cbdd5b1a" podStartSLOduration=2.924206861 podStartE2EDuration="2.924206861s" podCreationTimestamp="2025-07-07 00:13:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:13:58.912200769 +0000 UTC m=+1.135513797" watchObservedRunningTime="2025-07-07 00:13:58.924206861 +0000 UTC m=+1.147519888" Jul 7 00:13:58.937040 kubelet[2712]: I0707 00:13:58.936997 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-4-f-11cbdd5b1a" 
podStartSLOduration=2.9369850250000002 podStartE2EDuration="2.936985025s" podCreationTimestamp="2025-07-07 00:13:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:13:58.924500415 +0000 UTC m=+1.147813441" watchObservedRunningTime="2025-07-07 00:13:58.936985025 +0000 UTC m=+1.160298052" Jul 7 00:13:58.944697 kubelet[2712]: I0707 00:13:58.944650 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-4-f-11cbdd5b1a" podStartSLOduration=2.944639444 podStartE2EDuration="2.944639444s" podCreationTimestamp="2025-07-07 00:13:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:13:58.937130794 +0000 UTC m=+1.160443821" watchObservedRunningTime="2025-07-07 00:13:58.944639444 +0000 UTC m=+1.167952471" Jul 7 00:14:02.677881 kubelet[2712]: I0707 00:14:02.677831 2712 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 00:14:02.678877 containerd[1507]: time="2025-07-07T00:14:02.678613260Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 00:14:02.679080 kubelet[2712]: I0707 00:14:02.678816 2712 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 00:14:03.319845 systemd[1]: Created slice kubepods-besteffort-pod2c63eb45_81cb_4ac6_939c_a406c05fac32.slice - libcontainer container kubepods-besteffort-pod2c63eb45_81cb_4ac6_939c_a406c05fac32.slice. Jul 7 00:14:03.395577 kubelet[2712]: I0707 00:14:03.395484 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2c63eb45-81cb-4ac6-939c-a406c05fac32-kube-proxy\") pod \"kube-proxy-tgp5x\" (UID: \"2c63eb45-81cb-4ac6-939c-a406c05fac32\") " pod="kube-system/kube-proxy-tgp5x" Jul 7 00:14:03.395577 kubelet[2712]: I0707 00:14:03.395535 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c63eb45-81cb-4ac6-939c-a406c05fac32-xtables-lock\") pod \"kube-proxy-tgp5x\" (UID: \"2c63eb45-81cb-4ac6-939c-a406c05fac32\") " pod="kube-system/kube-proxy-tgp5x" Jul 7 00:14:03.395851 kubelet[2712]: I0707 00:14:03.395781 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c63eb45-81cb-4ac6-939c-a406c05fac32-lib-modules\") pod \"kube-proxy-tgp5x\" (UID: \"2c63eb45-81cb-4ac6-939c-a406c05fac32\") " pod="kube-system/kube-proxy-tgp5x" Jul 7 00:14:03.395851 kubelet[2712]: I0707 00:14:03.395805 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrw58\" (UniqueName: \"kubernetes.io/projected/2c63eb45-81cb-4ac6-939c-a406c05fac32-kube-api-access-jrw58\") pod \"kube-proxy-tgp5x\" (UID: \"2c63eb45-81cb-4ac6-939c-a406c05fac32\") " pod="kube-system/kube-proxy-tgp5x" Jul 7 00:14:03.627876 containerd[1507]: time="2025-07-07T00:14:03.627772463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tgp5x,Uid:2c63eb45-81cb-4ac6-939c-a406c05fac32,Namespace:kube-system,Attempt:0,}" Jul 7 00:14:03.653705 containerd[1507]: time="2025-07-07T00:14:03.653320994Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:14:03.653705 containerd[1507]: time="2025-07-07T00:14:03.653403038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:14:03.653705 containerd[1507]: time="2025-07-07T00:14:03.653419026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:14:03.654116 containerd[1507]: time="2025-07-07T00:14:03.653685352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:14:03.678290 systemd[1]: Started cri-containerd-71ac7ae86b3fc234ed9677f3ce0ff842b6767b4efe3beefc8013458bbcf6ddb0.scope - libcontainer container 71ac7ae86b3fc234ed9677f3ce0ff842b6767b4efe3beefc8013458bbcf6ddb0. Jul 7 00:14:03.706025 containerd[1507]: time="2025-07-07T00:14:03.705971327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tgp5x,Uid:2c63eb45-81cb-4ac6-939c-a406c05fac32,Namespace:kube-system,Attempt:0,} returns sandbox id \"71ac7ae86b3fc234ed9677f3ce0ff842b6767b4efe3beefc8013458bbcf6ddb0\"" Jul 7 00:14:03.709668 containerd[1507]: time="2025-07-07T00:14:03.709547111Z" level=info msg="CreateContainer within sandbox \"71ac7ae86b3fc234ed9677f3ce0ff842b6767b4efe3beefc8013458bbcf6ddb0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 00:14:03.727888 containerd[1507]: time="2025-07-07T00:14:03.727795595Z" level=info msg="CreateContainer within sandbox \"71ac7ae86b3fc234ed9677f3ce0ff842b6767b4efe3beefc8013458bbcf6ddb0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f8e7b109bcb076c3f0629adc23f909cac8dd0bc8534186c54e9465e47f819545\"" Jul 7 00:14:03.728660 containerd[1507]: time="2025-07-07T00:14:03.728555376Z" level=info msg="StartContainer for \"f8e7b109bcb076c3f0629adc23f909cac8dd0bc8534186c54e9465e47f819545\"" Jul 7 00:14:03.754688 systemd[1]: Started cri-containerd-f8e7b109bcb076c3f0629adc23f909cac8dd0bc8534186c54e9465e47f819545.scope - libcontainer container f8e7b109bcb076c3f0629adc23f909cac8dd0bc8534186c54e9465e47f819545. 
Jul 7 00:14:03.786697 containerd[1507]: time="2025-07-07T00:14:03.786628657Z" level=info msg="StartContainer for \"f8e7b109bcb076c3f0629adc23f909cac8dd0bc8534186c54e9465e47f819545\" returns successfully" Jul 7 00:14:03.800990 kubelet[2712]: I0707 00:14:03.800392 2712 status_manager.go:890] "Failed to get status for pod" podUID="974b455e-12d9-4fc6-a2c4-eed4d51f9e72" pod="tigera-operator/tigera-operator-747864d56d-k9sjm" err="pods \"tigera-operator-747864d56d-k9sjm\" is forbidden: User \"system:node:ci-4081-3-4-f-11cbdd5b1a\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4081-3-4-f-11cbdd5b1a' and this object" Jul 7 00:14:03.802293 kubelet[2712]: W0707 00:14:03.802091 2712 reflector.go:569] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-4-f-11cbdd5b1a" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081-3-4-f-11cbdd5b1a' and this object Jul 7 00:14:03.802293 kubelet[2712]: W0707 00:14:03.802109 2712 reflector.go:569] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4081-3-4-f-11cbdd5b1a" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081-3-4-f-11cbdd5b1a' and this object Jul 7 00:14:03.802293 kubelet[2712]: E0707 00:14:03.802160 2712 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:ci-4081-3-4-f-11cbdd5b1a\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4081-3-4-f-11cbdd5b1a' and this object" logger="UnhandledError" Jul 7 00:14:03.802293 kubelet[2712]: E0707 00:14:03.802160 2712 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081-3-4-f-11cbdd5b1a\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4081-3-4-f-11cbdd5b1a' and this object" logger="UnhandledError" Jul 7 00:14:03.804659 systemd[1]: Created slice kubepods-besteffort-pod974b455e_12d9_4fc6_a2c4_eed4d51f9e72.slice - libcontainer container kubepods-besteffort-pod974b455e_12d9_4fc6_a2c4_eed4d51f9e72.slice. 
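The "forbidden ... no relationship found between node ... and this object" errors above come from the node authorizer: a kubelet credential may only read secrets and configmaps referenced by pods already bound to its node, and the tigera-operator pod is not yet bound. A hedged client-go sketch of the same denied request; the kubeconfig path /etc/kubernetes/kubelet.conf is an assumed kubeadm-style location, not confirmed by this log:

// nodeauthz.go - issue the configmap GET that the reflectors above
// retried, using the kubelet's own credentials.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Until a tigera-operator pod is scheduled onto this node, the
	// NodeAuthorizer rejects this exactly as in the log above.
	_, err = cs.CoreV1().ConfigMaps("tigera-operator").
		Get(context.Background(), "kube-root-ca.crt", metav1.GetOptions{})
	fmt.Println("get configmap:", err)
}

Once the pod is bound (and the "Finished populating initial desired state of world" sync completes), the same request succeeds, which is why the later mounts eventually proceed.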
Jul 7 00:14:03.899919 kubelet[2712]: I0707 00:14:03.899805 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/974b455e-12d9-4fc6-a2c4-eed4d51f9e72-var-lib-calico\") pod \"tigera-operator-747864d56d-k9sjm\" (UID: \"974b455e-12d9-4fc6-a2c4-eed4d51f9e72\") " pod="tigera-operator/tigera-operator-747864d56d-k9sjm" Jul 7 00:14:03.899919 kubelet[2712]: I0707 00:14:03.899845 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbmzm\" (UniqueName: \"kubernetes.io/projected/974b455e-12d9-4fc6-a2c4-eed4d51f9e72-kube-api-access-tbmzm\") pod \"tigera-operator-747864d56d-k9sjm\" (UID: \"974b455e-12d9-4fc6-a2c4-eed4d51f9e72\") " pod="tigera-operator/tigera-operator-747864d56d-k9sjm" Jul 7 00:14:03.906830 kubelet[2712]: I0707 00:14:03.906777 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tgp5x" podStartSLOduration=0.906762807 podStartE2EDuration="906.762807ms" podCreationTimestamp="2025-07-07 00:14:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:14:03.906542842 +0000 UTC m=+6.129855868" watchObservedRunningTime="2025-07-07 00:14:03.906762807 +0000 UTC m=+6.130075834" Jul 7 00:14:05.008110 kubelet[2712]: E0707 00:14:05.008048 2712 projected.go:288] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 7 00:14:05.008110 kubelet[2712]: E0707 00:14:05.008104 2712 projected.go:194] Error preparing data for projected volume kube-api-access-tbmzm for pod tigera-operator/tigera-operator-747864d56d-k9sjm: failed to sync configmap cache: timed out waiting for the condition Jul 7 00:14:05.008544 kubelet[2712]: E0707 00:14:05.008212 2712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/974b455e-12d9-4fc6-a2c4-eed4d51f9e72-kube-api-access-tbmzm podName:974b455e-12d9-4fc6-a2c4-eed4d51f9e72 nodeName:}" failed. No retries permitted until 2025-07-07 00:14:05.508182027 +0000 UTC m=+7.731495064 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tbmzm" (UniqueName: "kubernetes.io/projected/974b455e-12d9-4fc6-a2c4-eed4d51f9e72-kube-api-access-tbmzm") pod "tigera-operator-747864d56d-k9sjm" (UID: "974b455e-12d9-4fc6-a2c4-eed4d51f9e72") : failed to sync configmap cache: timed out waiting for the condition Jul 7 00:14:05.607260 containerd[1507]: time="2025-07-07T00:14:05.607133296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-k9sjm,Uid:974b455e-12d9-4fc6-a2c4-eed4d51f9e72,Namespace:tigera-operator,Attempt:0,}" Jul 7 00:14:05.637930 containerd[1507]: time="2025-07-07T00:14:05.637582607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:14:05.637930 containerd[1507]: time="2025-07-07T00:14:05.637699112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:14:05.637930 containerd[1507]: time="2025-07-07T00:14:05.637716594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:14:05.637930 containerd[1507]: time="2025-07-07T00:14:05.637798258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:14:05.667262 systemd[1]: Started cri-containerd-72034c9e1e75c55c5d01556db98e35295b52c9f2d438bce5dba4aa892b54ff34.scope - libcontainer container 72034c9e1e75c55c5d01556db98e35295b52c9f2d438bce5dba4aa892b54ff34. Jul 7 00:14:05.697628 containerd[1507]: time="2025-07-07T00:14:05.697582260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-k9sjm,Uid:974b455e-12d9-4fc6-a2c4-eed4d51f9e72,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"72034c9e1e75c55c5d01556db98e35295b52c9f2d438bce5dba4aa892b54ff34\"" Jul 7 00:14:05.699343 containerd[1507]: time="2025-07-07T00:14:05.699319858Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 00:14:07.389086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3893708422.mount: Deactivated successfully. Jul 7 00:14:07.771333 containerd[1507]: time="2025-07-07T00:14:07.771285477Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:07.772347 containerd[1507]: time="2025-07-07T00:14:07.772300211Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 7 00:14:07.772784 containerd[1507]: time="2025-07-07T00:14:07.772736618Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:07.774524 containerd[1507]: time="2025-07-07T00:14:07.774505465Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:07.775406 containerd[1507]: time="2025-07-07T00:14:07.775028835Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.075680387s" Jul 7 00:14:07.775406 containerd[1507]: time="2025-07-07T00:14:07.775058217Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 7 00:14:07.777260 containerd[1507]: time="2025-07-07T00:14:07.777228858Z" level=info msg="CreateContainer within sandbox \"72034c9e1e75c55c5d01556db98e35295b52c9f2d438bce5dba4aa892b54ff34\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 00:14:07.791661 containerd[1507]: time="2025-07-07T00:14:07.791492128Z" level=info msg="CreateContainer within sandbox \"72034c9e1e75c55c5d01556db98e35295b52c9f2d438bce5dba4aa892b54ff34\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"99a438ec58114eaf0ff02ba0899f0edf1b723c1693b794625f40bc10b6749e10\"" Jul 7 00:14:07.792345 containerd[1507]: time="2025-07-07T00:14:07.792276113Z" level=info msg="StartContainer for \"99a438ec58114eaf0ff02ba0899f0edf1b723c1693b794625f40bc10b6749e10\"" Jul 7 00:14:07.812285 systemd[1]: Started 
cri-containerd-99a438ec58114eaf0ff02ba0899f0edf1b723c1693b794625f40bc10b6749e10.scope - libcontainer container 99a438ec58114eaf0ff02ba0899f0edf1b723c1693b794625f40bc10b6749e10. Jul 7 00:14:07.831125 containerd[1507]: time="2025-07-07T00:14:07.831080121Z" level=info msg="StartContainer for \"99a438ec58114eaf0ff02ba0899f0edf1b723c1693b794625f40bc10b6749e10\" returns successfully" Jul 7 00:14:13.553100 sudo[1875]: pam_unix(sudo:session): session closed for user root Jul 7 00:14:13.718009 sshd[1872]: pam_unix(sshd:session): session closed for user core Jul 7 00:14:13.720787 systemd[1]: sshd@6-65.21.182.235:22-147.75.109.163:37504.service: Deactivated successfully. Jul 7 00:14:13.722965 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 00:14:13.723526 systemd[1]: session-7.scope: Consumed 3.740s CPU time, 141.9M memory peak, 0B memory swap peak. Jul 7 00:14:13.724941 systemd-logind[1476]: Session 7 logged out. Waiting for processes to exit. Jul 7 00:14:13.726700 systemd-logind[1476]: Removed session 7. Jul 7 00:14:16.052245 kubelet[2712]: I0707 00:14:16.052176 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-k9sjm" podStartSLOduration=10.974491925 podStartE2EDuration="13.051457689s" podCreationTimestamp="2025-07-07 00:14:03 +0000 UTC" firstStartedPulling="2025-07-07 00:14:05.698718617 +0000 UTC m=+7.922031644" lastFinishedPulling="2025-07-07 00:14:07.775684381 +0000 UTC m=+9.998997408" observedRunningTime="2025-07-07 00:14:07.935799713 +0000 UTC m=+10.159112760" watchObservedRunningTime="2025-07-07 00:14:16.051457689 +0000 UTC m=+18.274770716" Jul 7 00:14:16.055904 kubelet[2712]: I0707 00:14:16.055870 2712 status_manager.go:890] "Failed to get status for pod" podUID="a71f84a6-9346-4643-9e48-b8820dceeb54" pod="calico-system/calico-typha-554c46f976-v4pqw" err="pods \"calico-typha-554c46f976-v4pqw\" is forbidden: User \"system:node:ci-4081-3-4-f-11cbdd5b1a\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-4-f-11cbdd5b1a' and this object" Jul 7 00:14:16.057165 kubelet[2712]: W0707 00:14:16.056726 2712 reflector.go:569] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-4081-3-4-f-11cbdd5b1a" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-4-f-11cbdd5b1a' and this object Jul 7 00:14:16.057165 kubelet[2712]: E0707 00:14:16.056756 2712 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:ci-4081-3-4-f-11cbdd5b1a\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-4-f-11cbdd5b1a' and this object" logger="UnhandledError" Jul 7 00:14:16.057165 kubelet[2712]: W0707 00:14:16.056794 2712 reflector.go:569] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-4-f-11cbdd5b1a" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-4-f-11cbdd5b1a' and this object Jul 7 00:14:16.057165 kubelet[2712]: E0707 00:14:16.056802 2712 reflector.go:166] "Unhandled Error" 
err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081-3-4-f-11cbdd5b1a\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-4-f-11cbdd5b1a' and this object" logger="UnhandledError" Jul 7 00:14:16.061540 systemd[1]: Created slice kubepods-besteffort-poda71f84a6_9346_4643_9e48_b8820dceeb54.slice - libcontainer container kubepods-besteffort-poda71f84a6_9346_4643_9e48_b8820dceeb54.slice. Jul 7 00:14:16.081587 kubelet[2712]: I0707 00:14:16.081452 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a71f84a6-9346-4643-9e48-b8820dceeb54-typha-certs\") pod \"calico-typha-554c46f976-v4pqw\" (UID: \"a71f84a6-9346-4643-9e48-b8820dceeb54\") " pod="calico-system/calico-typha-554c46f976-v4pqw" Jul 7 00:14:16.081587 kubelet[2712]: I0707 00:14:16.081489 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m6m8\" (UniqueName: \"kubernetes.io/projected/a71f84a6-9346-4643-9e48-b8820dceeb54-kube-api-access-9m6m8\") pod \"calico-typha-554c46f976-v4pqw\" (UID: \"a71f84a6-9346-4643-9e48-b8820dceeb54\") " pod="calico-system/calico-typha-554c46f976-v4pqw" Jul 7 00:14:16.081587 kubelet[2712]: I0707 00:14:16.081507 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a71f84a6-9346-4643-9e48-b8820dceeb54-tigera-ca-bundle\") pod \"calico-typha-554c46f976-v4pqw\" (UID: \"a71f84a6-9346-4643-9e48-b8820dceeb54\") " pod="calico-system/calico-typha-554c46f976-v4pqw" Jul 7 00:14:16.147340 systemd[1]: Created slice kubepods-besteffort-podc7476baf_b6ee_47d6_999f_3139f751746b.slice - libcontainer container kubepods-besteffort-podc7476baf_b6ee_47d6_999f_3139f751746b.slice. 
Jul 7 00:14:16.181919 kubelet[2712]: I0707 00:14:16.181885 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c7476baf-b6ee-47d6-999f-3139f751746b-cni-bin-dir\") pod \"calico-node-q8gfm\" (UID: \"c7476baf-b6ee-47d6-999f-3139f751746b\") " pod="calico-system/calico-node-q8gfm" Jul 7 00:14:16.182174 kubelet[2712]: I0707 00:14:16.182158 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c7476baf-b6ee-47d6-999f-3139f751746b-var-lib-calico\") pod \"calico-node-q8gfm\" (UID: \"c7476baf-b6ee-47d6-999f-3139f751746b\") " pod="calico-system/calico-node-q8gfm" Jul 7 00:14:16.182389 kubelet[2712]: I0707 00:14:16.182270 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7476baf-b6ee-47d6-999f-3139f751746b-lib-modules\") pod \"calico-node-q8gfm\" (UID: \"c7476baf-b6ee-47d6-999f-3139f751746b\") " pod="calico-system/calico-node-q8gfm" Jul 7 00:14:16.182389 kubelet[2712]: I0707 00:14:16.182300 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c7476baf-b6ee-47d6-999f-3139f751746b-var-run-calico\") pod \"calico-node-q8gfm\" (UID: \"c7476baf-b6ee-47d6-999f-3139f751746b\") " pod="calico-system/calico-node-q8gfm" Jul 7 00:14:16.182389 kubelet[2712]: I0707 00:14:16.182326 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c7476baf-b6ee-47d6-999f-3139f751746b-cni-log-dir\") pod \"calico-node-q8gfm\" (UID: \"c7476baf-b6ee-47d6-999f-3139f751746b\") " pod="calico-system/calico-node-q8gfm" Jul 7 00:14:16.182389 kubelet[2712]: I0707 00:14:16.182340 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7476baf-b6ee-47d6-999f-3139f751746b-xtables-lock\") pod \"calico-node-q8gfm\" (UID: \"c7476baf-b6ee-47d6-999f-3139f751746b\") " pod="calico-system/calico-node-q8gfm" Jul 7 00:14:16.182389 kubelet[2712]: I0707 00:14:16.182362 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c7476baf-b6ee-47d6-999f-3139f751746b-node-certs\") pod \"calico-node-q8gfm\" (UID: \"c7476baf-b6ee-47d6-999f-3139f751746b\") " pod="calico-system/calico-node-q8gfm" Jul 7 00:14:16.182498 kubelet[2712]: I0707 00:14:16.182375 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c7476baf-b6ee-47d6-999f-3139f751746b-cni-net-dir\") pod \"calico-node-q8gfm\" (UID: \"c7476baf-b6ee-47d6-999f-3139f751746b\") " pod="calico-system/calico-node-q8gfm" Jul 7 00:14:16.182762 kubelet[2712]: I0707 00:14:16.182558 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c7476baf-b6ee-47d6-999f-3139f751746b-policysync\") pod \"calico-node-q8gfm\" (UID: \"c7476baf-b6ee-47d6-999f-3139f751746b\") " pod="calico-system/calico-node-q8gfm" Jul 7 00:14:16.182762 kubelet[2712]: I0707 00:14:16.182576 2712 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c7476baf-b6ee-47d6-999f-3139f751746b-flexvol-driver-host\") pod \"calico-node-q8gfm\" (UID: \"c7476baf-b6ee-47d6-999f-3139f751746b\") " pod="calico-system/calico-node-q8gfm" Jul 7 00:14:16.182762 kubelet[2712]: I0707 00:14:16.182590 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7476baf-b6ee-47d6-999f-3139f751746b-tigera-ca-bundle\") pod \"calico-node-q8gfm\" (UID: \"c7476baf-b6ee-47d6-999f-3139f751746b\") " pod="calico-system/calico-node-q8gfm" Jul 7 00:14:16.182762 kubelet[2712]: I0707 00:14:16.182605 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tnks\" (UniqueName: \"kubernetes.io/projected/c7476baf-b6ee-47d6-999f-3139f751746b-kube-api-access-8tnks\") pod \"calico-node-q8gfm\" (UID: \"c7476baf-b6ee-47d6-999f-3139f751746b\") " pod="calico-system/calico-node-q8gfm" Jul 7 00:14:16.270776 kubelet[2712]: E0707 00:14:16.270687 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5qbjk" podUID="2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6" Jul 7 00:14:16.282827 kubelet[2712]: I0707 00:14:16.282785 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6-socket-dir\") pod \"csi-node-driver-5qbjk\" (UID: \"2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6\") " pod="calico-system/csi-node-driver-5qbjk" Jul 7 00:14:16.282827 kubelet[2712]: I0707 00:14:16.282838 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6-registration-dir\") pod \"csi-node-driver-5qbjk\" (UID: \"2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6\") " pod="calico-system/csi-node-driver-5qbjk" Jul 7 00:14:16.283104 kubelet[2712]: I0707 00:14:16.282851 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6-kubelet-dir\") pod \"csi-node-driver-5qbjk\" (UID: \"2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6\") " pod="calico-system/csi-node-driver-5qbjk" Jul 7 00:14:16.283104 kubelet[2712]: I0707 00:14:16.282867 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5z44\" (UniqueName: \"kubernetes.io/projected/2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6-kube-api-access-t5z44\") pod \"csi-node-driver-5qbjk\" (UID: \"2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6\") " pod="calico-system/csi-node-driver-5qbjk" Jul 7 00:14:16.283104 kubelet[2712]: I0707 00:14:16.282926 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6-varrun\") pod \"csi-node-driver-5qbjk\" (UID: \"2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6\") " pod="calico-system/csi-node-driver-5qbjk" Jul 7 00:14:16.292913 kubelet[2712]: E0707 00:14:16.292872 2712 driver-call.go:262] Failed to unmarshal output for 
command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:16.292913 kubelet[2712]: W0707 00:14:16.292889 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:16.293450 kubelet[2712]: E0707 00:14:16.293430 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:16.383653 kubelet[2712]: E0707 00:14:16.383556 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:16.383653 kubelet[2712]: W0707 00:14:16.383577 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:16.383653 kubelet[2712]: E0707 00:14:16.383597 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:16.384455 kubelet[2712]: E0707 00:14:16.383827 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:16.384455 kubelet[2712]: W0707 00:14:16.383838 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:16.384455 kubelet[2712]: E0707 00:14:16.383858 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:16.384455 kubelet[2712]: E0707 00:14:16.384053 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:16.384455 kubelet[2712]: W0707 00:14:16.384059 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:16.384455 kubelet[2712]: E0707 00:14:16.384077 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:16.384455 kubelet[2712]: E0707 00:14:16.384280 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:16.384455 kubelet[2712]: W0707 00:14:16.384287 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:16.384455 kubelet[2712]: E0707 00:14:16.384307 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:14:16.384661 kubelet[2712]: E0707 00:14:16.384515 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:16.384661 kubelet[2712]: W0707 00:14:16.384522 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:16.384661 kubelet[2712]: E0707 00:14:16.384539 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:16.384713 kubelet[2712]: E0707 00:14:16.384698 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:16.384713 kubelet[2712]: W0707 00:14:16.384705 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:16.384748 kubelet[2712]: E0707 00:14:16.384714 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:16.384872 kubelet[2712]: E0707 00:14:16.384854 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:16.384872 kubelet[2712]: W0707 00:14:16.384866 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:16.384953 kubelet[2712]: E0707 00:14:16.384925 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:16.385035 kubelet[2712]: E0707 00:14:16.385007 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:16.385035 kubelet[2712]: W0707 00:14:16.385015 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:16.385127 kubelet[2712]: E0707 00:14:16.385105 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:16.385221 kubelet[2712]: E0707 00:14:16.385202 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:16.385221 kubelet[2712]: W0707 00:14:16.385213 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:16.385331 kubelet[2712]: E0707 00:14:16.385275 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:14:16.388662 kubelet[2712]: E0707 00:14:16.388643 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:16.388662 kubelet[2712]: W0707 00:14:16.388655 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:16.388737 kubelet[2712]: E0707 00:14:16.388668 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:14:16.388934 kubelet[2712]: E0707 00:14:16.388902 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:16.388934 kubelet[2712]: W0707 00:14:16.388924 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:16.388934 kubelet[2712]: E0707 00:14:16.388932 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:16.890187 kubelet[2712]: E0707 00:14:16.890121 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:16.890509 kubelet[2712]: W0707 00:14:16.890302 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:16.890771 kubelet[2712]: E0707 00:14:16.890331 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:16.892705 kubelet[2712]: E0707 00:14:16.892614 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:16.892705 kubelet[2712]: W0707 00:14:16.892633 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:16.892705 kubelet[2712]: E0707 00:14:16.892653 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:16.899810 kubelet[2712]: E0707 00:14:16.899725 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:16.899810 kubelet[2712]: W0707 00:14:16.899747 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:16.899810 kubelet[2712]: E0707 00:14:16.899768 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:17.184442 kubelet[2712]: E0707 00:14:17.184308 2712 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 7 00:14:17.184442 kubelet[2712]: E0707 00:14:17.184426 2712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a71f84a6-9346-4643-9e48-b8820dceeb54-tigera-ca-bundle podName:a71f84a6-9346-4643-9e48-b8820dceeb54 nodeName:}" failed. No retries permitted until 2025-07-07 00:14:17.684396462 +0000 UTC m=+19.907709499 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/a71f84a6-9346-4643-9e48-b8820dceeb54-tigera-ca-bundle") pod "calico-typha-554c46f976-v4pqw" (UID: "a71f84a6-9346-4643-9e48-b8820dceeb54") : failed to sync configmap cache: timed out waiting for the condition Jul 7 00:14:17.191594 kubelet[2712]: E0707 00:14:17.191563 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:17.191594 kubelet[2712]: W0707 00:14:17.191587 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:17.191682 kubelet[2712]: E0707 00:14:17.191611 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:17.283644 kubelet[2712]: E0707 00:14:17.283580 2712 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 7 00:14:17.283811 kubelet[2712]: E0707 00:14:17.283668 2712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c7476baf-b6ee-47d6-999f-3139f751746b-tigera-ca-bundle podName:c7476baf-b6ee-47d6-999f-3139f751746b nodeName:}" failed. No retries permitted until 2025-07-07 00:14:17.783643471 +0000 UTC m=+20.006956498 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/c7476baf-b6ee-47d6-999f-3139f751746b-tigera-ca-bundle") pod "calico-node-q8gfm" (UID: "c7476baf-b6ee-47d6-999f-3139f751746b") : failed to sync configmap cache: timed out waiting for the condition Jul 7 00:14:17.293203 kubelet[2712]: E0707 00:14:17.293165 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:17.293203 kubelet[2712]: W0707 00:14:17.293188 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:17.293203 kubelet[2712]: E0707 00:14:17.293210 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:17.293497 kubelet[2712]: E0707 00:14:17.293485 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:17.293497 kubelet[2712]: W0707 00:14:17.293495 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:17.293582 kubelet[2712]: E0707 00:14:17.293506 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:14:17.394387 kubelet[2712]: E0707 00:14:17.394352 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:17.394519 kubelet[2712]: W0707 00:14:17.394393 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:17.394519 kubelet[2712]: E0707 00:14:17.394412 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:17.395094 kubelet[2712]: E0707 00:14:17.394762 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:17.395094 kubelet[2712]: W0707 00:14:17.394804 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:17.395094 kubelet[2712]: E0707 00:14:17.394815 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:17.496697 kubelet[2712]: E0707 00:14:17.496650 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:17.496845 kubelet[2712]: W0707 00:14:17.496688 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:17.496888 kubelet[2712]: E0707 00:14:17.496856 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:17.497469 kubelet[2712]: E0707 00:14:17.497421 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:17.497531 kubelet[2712]: W0707 00:14:17.497465 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:17.497531 kubelet[2712]: E0707 00:14:17.497518 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:17.598853 kubelet[2712]: E0707 00:14:17.598815 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:17.598853 kubelet[2712]: W0707 00:14:17.598838 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:17.598853 kubelet[2712]: E0707 00:14:17.598858 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:14:17.599124 kubelet[2712]: E0707 00:14:17.599097 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:17.599124 kubelet[2712]: W0707 00:14:17.599114 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:17.599124 kubelet[2712]: E0707 00:14:17.599124 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:17.700186 kubelet[2712]: E0707 00:14:17.700133 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:17.700377 kubelet[2712]: W0707 00:14:17.700340 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:17.700377 kubelet[2712]: E0707 00:14:17.700374 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:17.700771 kubelet[2712]: E0707 00:14:17.700711 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:17.700771 kubelet[2712]: W0707 00:14:17.700737 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:17.700771 kubelet[2712]: E0707 00:14:17.700763 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:17.701167 kubelet[2712]: E0707 00:14:17.701110 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:17.701167 kubelet[2712]: W0707 00:14:17.701137 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:17.701267 kubelet[2712]: E0707 00:14:17.701193 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:17.701519 kubelet[2712]: E0707 00:14:17.701484 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:17.701519 kubelet[2712]: W0707 00:14:17.701506 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:17.701628 kubelet[2712]: E0707 00:14:17.701573 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:14:17.701931 kubelet[2712]: E0707 00:14:17.701873 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:17.701931 kubelet[2712]: W0707 00:14:17.701900 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:17.701931 kubelet[2712]: E0707 00:14:17.701921 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:17.702385 kubelet[2712]: E0707 00:14:17.702359 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:17.702426 kubelet[2712]: W0707 00:14:17.702385 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:17.702426 kubelet[2712]: E0707 00:14:17.702405 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:17.703799 kubelet[2712]: E0707 00:14:17.703767 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:17.703863 kubelet[2712]: W0707 00:14:17.703796 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:17.703863 kubelet[2712]: E0707 00:14:17.703817 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:17.801640 kubelet[2712]: E0707 00:14:17.801546 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:17.801640 kubelet[2712]: W0707 00:14:17.801566 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:17.801640 kubelet[2712]: E0707 00:14:17.801583 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:17.802704 kubelet[2712]: E0707 00:14:17.802531 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:17.802704 kubelet[2712]: W0707 00:14:17.802550 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:17.802704 kubelet[2712]: E0707 00:14:17.802574 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:14:17.802916 kubelet[2712]: E0707 00:14:17.802888 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:17.802916 kubelet[2712]: W0707 00:14:17.802904 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:17.802977 kubelet[2712]: E0707 00:14:17.802917 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:17.803247 kubelet[2712]: E0707 00:14:17.803093 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:17.803247 kubelet[2712]: W0707 00:14:17.803107 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:17.803247 kubelet[2712]: E0707 00:14:17.803119 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:17.803398 kubelet[2712]: E0707 00:14:17.803386 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:17.803489 kubelet[2712]: W0707 00:14:17.803460 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:17.803489 kubelet[2712]: E0707 00:14:17.803480 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:17.804339 kubelet[2712]: E0707 00:14:17.804316 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:17.804339 kubelet[2712]: W0707 00:14:17.804332 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:17.804424 kubelet[2712]: E0707 00:14:17.804343 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:17.866269 containerd[1507]: time="2025-07-07T00:14:17.866216753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-554c46f976-v4pqw,Uid:a71f84a6-9346-4643-9e48-b8820dceeb54,Namespace:calico-system,Attempt:0,}" Jul 7 00:14:17.871073 kubelet[2712]: E0707 00:14:17.870834 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5qbjk" podUID="2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6" Jul 7 00:14:17.895641 containerd[1507]: time="2025-07-07T00:14:17.895456500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:14:17.895859 containerd[1507]: time="2025-07-07T00:14:17.895510848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:14:17.895859 containerd[1507]: time="2025-07-07T00:14:17.895635206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:14:17.895859 containerd[1507]: time="2025-07-07T00:14:17.895810809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:14:17.923283 systemd[1]: Started cri-containerd-9489d620255446c5aa08937e172ad0dc87a6fa301c59459b6426697142dce149.scope - libcontainer container 9489d620255446c5aa08937e172ad0dc87a6fa301c59459b6426697142dce149. Jul 7 00:14:17.954965 containerd[1507]: time="2025-07-07T00:14:17.954920186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q8gfm,Uid:c7476baf-b6ee-47d6-999f-3139f751746b,Namespace:calico-system,Attempt:0,}" Jul 7 00:14:17.969778 containerd[1507]: time="2025-07-07T00:14:17.969309101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-554c46f976-v4pqw,Uid:a71f84a6-9346-4643-9e48-b8820dceeb54,Namespace:calico-system,Attempt:0,} returns sandbox id \"9489d620255446c5aa08937e172ad0dc87a6fa301c59459b6426697142dce149\"" Jul 7 00:14:17.978292 containerd[1507]: time="2025-07-07T00:14:17.977361971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 7 00:14:17.988077 containerd[1507]: time="2025-07-07T00:14:17.987838765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:14:17.988077 containerd[1507]: time="2025-07-07T00:14:17.987956981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:14:17.988077 containerd[1507]: time="2025-07-07T00:14:17.988016030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:14:17.988868 containerd[1507]: time="2025-07-07T00:14:17.988714791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:14:18.010567 systemd[1]: Started cri-containerd-61b9f6102690293e70d205c424dee04192e16ccde78c3aef75a91a40a0fdaae5.scope - libcontainer container 61b9f6102690293e70d205c424dee04192e16ccde78c3aef75a91a40a0fdaae5. Jul 7 00:14:18.032799 containerd[1507]: time="2025-07-07T00:14:18.032751343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q8gfm,Uid:c7476baf-b6ee-47d6-999f-3139f751746b,Namespace:calico-system,Attempt:0,} returns sandbox id \"61b9f6102690293e70d205c424dee04192e16ccde78c3aef75a91a40a0fdaae5\"" Jul 7 00:14:19.610235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount746958676.mount: Deactivated successfully. 
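The "Started cri-containerd-<id>.scope" lines above show how sandboxes are tracked on this host: for each RunPodSandbox, containerd's runc v2 shim loads its event/ttrpc plugins and the resulting container is registered with systemd as a transient scope unit named after the container ID. As a rough sketch (the system.slice location assumes containerd is using the systemd cgroup driver on a unified cgroup v2 hierarchy, which the unit names here suggest but the log does not state), an ID copied from the journal can be mapped back to its unit and cgroup path:

#!/usr/bin/env python3
# Sketch: derive the transient systemd unit and (assumed) cgroup v2 path for
# a CRI container/sandbox ID as printed in the journal above. Illustrative
# only; verify on a live node with "systemctl status <unit>".
import sys

# The calico-typha sandbox ID returned by RunPodSandbox above.
SANDBOX_ID = "9489d620255446c5aa08937e172ad0dc87a6fa301c59459b6426697142dce149"

def scope_unit(container_id: str) -> str:
    # containerd's CRI plugin names the unit cri-containerd-<id>.scope.
    return "cri-containerd-" + container_id + ".scope"

def cgroup_path(container_id: str, slice_name: str = "system.slice") -> str:
    # Assumes the systemd cgroup driver; the scope lives under its slice.
    return "/sys/fs/cgroup/{}/{}".format(slice_name, scope_unit(container_id))

if __name__ == "__main__":
    cid = sys.argv[1] if len(sys.argv) > 1 else SANDBOX_ID
    print(scope_unit(cid))
    print(cgroup_path(cid))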
Jul 7 00:14:19.883598 kubelet[2712]: E0707 00:14:19.883239 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5qbjk" podUID="2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6" Jul 7 00:14:19.956796 containerd[1507]: time="2025-07-07T00:14:19.956690916Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:19.957713 containerd[1507]: time="2025-07-07T00:14:19.957677272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 7 00:14:19.958539 containerd[1507]: time="2025-07-07T00:14:19.958505387Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:19.960774 containerd[1507]: time="2025-07-07T00:14:19.960724756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:19.987532 containerd[1507]: time="2025-07-07T00:14:19.987391924Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.008932913s" Jul 7 00:14:19.987532 containerd[1507]: time="2025-07-07T00:14:19.987424895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 7 00:14:20.001368 containerd[1507]: time="2025-07-07T00:14:20.001313759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 00:14:20.025263 containerd[1507]: time="2025-07-07T00:14:20.025212891Z" level=info msg="CreateContainer within sandbox \"9489d620255446c5aa08937e172ad0dc87a6fa301c59459b6426697142dce149\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 00:14:20.044433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2562449517.mount: Deactivated successfully. Jul 7 00:14:20.048011 containerd[1507]: time="2025-07-07T00:14:20.047972158Z" level=info msg="CreateContainer within sandbox \"9489d620255446c5aa08937e172ad0dc87a6fa301c59459b6426697142dce149\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ac09911b4032ed795f34291aa76f6231dd929b97e17eb745ed175868f4309b85\"" Jul 7 00:14:20.048817 containerd[1507]: time="2025-07-07T00:14:20.048790480Z" level=info msg="StartContainer for \"ac09911b4032ed795f34291aa76f6231dd929b97e17eb745ed175868f4309b85\"" Jul 7 00:14:20.086789 systemd[1]: Started cri-containerd-ac09911b4032ed795f34291aa76f6231dd929b97e17eb745ed175868f4309b85.scope - libcontainer container ac09911b4032ed795f34291aa76f6231dd929b97e17eb745ed175868f4309b85. 
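The PullImage record above is detailed enough to read an effective registry throughput straight off the log: containerd reports the typha image's repo size and the wall-clock pull duration in the same entry. A quick check of the numbers (both constants copied verbatim from that entry):

#!/usr/bin/env python3
# Arithmetic on the pull record above: 35,233,218 bytes in 2.008932913 s.
size_bytes = 35_233_218      # repo size from the "Pulled image" entry
pull_seconds = 2.008932913   # duration reported in the same entry

rate = size_bytes / pull_seconds
print("{:.1f} MiB/s".format(rate / (1024 * 1024)))  # ~16.7 MiB/s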
Jul 7 00:14:20.133266 containerd[1507]: time="2025-07-07T00:14:20.133137130Z" level=info msg="StartContainer for \"ac09911b4032ed795f34291aa76f6231dd929b97e17eb745ed175868f4309b85\" returns successfully" Jul 7 00:14:21.014798 kubelet[2712]: E0707 00:14:21.014749 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:21.014798 kubelet[2712]: W0707 00:14:21.014787 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:21.021876 kubelet[2712]: E0707 00:14:21.021826 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:21.022723 kubelet[2712]: E0707 00:14:21.022581 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:21.022723 kubelet[2712]: W0707 00:14:21.022600 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:21.022723 kubelet[2712]: E0707 00:14:21.022622 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:21.023638 kubelet[2712]: E0707 00:14:21.023434 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:21.023638 kubelet[2712]: W0707 00:14:21.023461 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:21.023638 kubelet[2712]: E0707 00:14:21.023470 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:21.023816 kubelet[2712]: E0707 00:14:21.023801 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:21.023816 kubelet[2712]: W0707 00:14:21.023813 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:21.023874 kubelet[2712]: E0707 00:14:21.023822 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:21.024260 kubelet[2712]: E0707 00:14:21.024204 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:21.024260 kubelet[2712]: W0707 00:14:21.024215 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:21.024260 kubelet[2712]: E0707 00:14:21.024223 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:14:21.034758 kubelet[2712]: E0707 00:14:21.034711 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:21.034758 kubelet[2712]: W0707 00:14:21.034721 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:21.034758 kubelet[2712]: E0707 00:14:21.034739 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:14:21.035538 kubelet[2712]: E0707 00:14:21.034883 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:21.035538 kubelet[2712]: W0707 00:14:21.034893 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:21.035538 kubelet[2712]: E0707 00:14:21.034901 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:21.035538 kubelet[2712]: E0707 00:14:21.035097 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:21.035538 kubelet[2712]: W0707 00:14:21.035104 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:21.035538 kubelet[2712]: E0707 00:14:21.035112 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:14:21.037703 kubelet[2712]: E0707 00:14:21.037629 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:14:21.037703 kubelet[2712]: W0707 00:14:21.037668 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:14:21.037703 kubelet[2712]: E0707 00:14:21.037680 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:14:21.583082 containerd[1507]: time="2025-07-07T00:14:21.583005559Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:21.585015 containerd[1507]: time="2025-07-07T00:14:21.584970355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 7 00:14:21.587626 containerd[1507]: time="2025-07-07T00:14:21.587577469Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:21.589810 containerd[1507]: time="2025-07-07T00:14:21.589767271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:21.590452 containerd[1507]: time="2025-07-07T00:14:21.590307530Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.588948569s" Jul 7 00:14:21.590452 containerd[1507]: time="2025-07-07T00:14:21.590357222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 7 00:14:21.592786 containerd[1507]: time="2025-07-07T00:14:21.592722669Z" level=info msg="CreateContainer within sandbox \"61b9f6102690293e70d205c424dee04192e16ccde78c3aef75a91a40a0fdaae5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 00:14:21.652210 containerd[1507]: time="2025-07-07T00:14:21.652120855Z" level=info msg="CreateContainer within sandbox \"61b9f6102690293e70d205c424dee04192e16ccde78c3aef75a91a40a0fdaae5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7ee6111d8afb05d5f6984e67dde241b913170b2f65b54b7afc66f59e7f5d7fbf\"" Jul 7 00:14:21.654618 containerd[1507]: time="2025-07-07T00:14:21.652591736Z" level=info msg="StartContainer for \"7ee6111d8afb05d5f6984e67dde241b913170b2f65b54b7afc66f59e7f5d7fbf\"" Jul 7 00:14:21.700303 systemd[1]: Started cri-containerd-7ee6111d8afb05d5f6984e67dde241b913170b2f65b54b7afc66f59e7f5d7fbf.scope - libcontainer container 7ee6111d8afb05d5f6984e67dde241b913170b2f65b54b7afc66f59e7f5d7fbf. Jul 7 00:14:21.722756 containerd[1507]: time="2025-07-07T00:14:21.722700420Z" level=info msg="StartContainer for \"7ee6111d8afb05d5f6984e67dde241b913170b2f65b54b7afc66f59e7f5d7fbf\" returns successfully" Jul 7 00:14:21.736026 systemd[1]: cri-containerd-7ee6111d8afb05d5f6984e67dde241b913170b2f65b54b7afc66f59e7f5d7fbf.scope: Deactivated successfully. Jul 7 00:14:21.758337 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ee6111d8afb05d5f6984e67dde241b913170b2f65b54b7afc66f59e7f5d7fbf-rootfs.mount: Deactivated successfully. 
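The flexvol-driver container that just ran and exited is Calico's pod2daemon-flexvol installer, and it is the missing piece behind the repeated kubelet errors earlier in this log: it copies a "uds" driver binary into the nodeagent~uds plugin directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, the exact path the kubelet has been probing since 00:14:16. Under the FlexVolume convention the kubelet execs <vendor>~<driver>/<driver> with an operation name and parses stdout as JSON, so a missing binary yields empty output and the "unexpected end of JSON input" unmarshal failure seen above. A minimal sketch of a conforming driver entry point (illustrative only, not Calico's actual uds driver, which also implements the mount operations this stub declines):

#!/usr/bin/env python3
# Minimal FlexVolume driver sketch, assuming the standard calling convention:
# the kubelet runs ".../volume/exec/<vendor>~<driver>/<driver> <op> [json]"
# and parses stdout as JSON. Empty stdout is exactly what produced the
# "unexpected end of JSON input" errors earlier in this log.
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # "attach": False tells the kubelet this driver has no separate
        # attach/detach phase, only mount/unmount.
        print(json.dumps({"status": "Success",
                          "capabilities": {"attach": False}}))
        return 0
    # Real drivers handle mount/unmount here; this stub declines them.
    print(json.dumps({"status": "Not supported",
                      "message": "operation %r not implemented" % op}))
    return 1

if __name__ == "__main__":
    sys.exit(main())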
Jul 7 00:14:21.779183 containerd[1507]: time="2025-07-07T00:14:21.771632942Z" level=info msg="shim disconnected" id=7ee6111d8afb05d5f6984e67dde241b913170b2f65b54b7afc66f59e7f5d7fbf namespace=k8s.io Jul 7 00:14:21.779183 containerd[1507]: time="2025-07-07T00:14:21.779105509Z" level=warning msg="cleaning up after shim disconnected" id=7ee6111d8afb05d5f6984e67dde241b913170b2f65b54b7afc66f59e7f5d7fbf namespace=k8s.io Jul 7 00:14:21.779183 containerd[1507]: time="2025-07-07T00:14:21.779121378Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:14:21.870826 kubelet[2712]: E0707 00:14:21.869709 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5qbjk" podUID="2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6" Jul 7 00:14:21.949920 kubelet[2712]: I0707 00:14:21.949895 2712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:14:21.951066 containerd[1507]: time="2025-07-07T00:14:21.951031097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 00:14:21.970851 kubelet[2712]: I0707 00:14:21.970165 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-554c46f976-v4pqw" podStartSLOduration=3.945635498 podStartE2EDuration="5.970127522s" podCreationTimestamp="2025-07-07 00:14:16 +0000 UTC" firstStartedPulling="2025-07-07 00:14:17.976594855 +0000 UTC m=+20.199907892" lastFinishedPulling="2025-07-07 00:14:20.001086879 +0000 UTC m=+22.224399916" observedRunningTime="2025-07-07 00:14:20.968967229 +0000 UTC m=+23.192280256" watchObservedRunningTime="2025-07-07 00:14:21.970127522 +0000 UTC m=+24.193440559" Jul 7 00:14:23.870522 kubelet[2712]: E0707 00:14:23.870486 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5qbjk" podUID="2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6" Jul 7 00:14:24.436938 containerd[1507]: time="2025-07-07T00:14:24.436872829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:24.438035 containerd[1507]: time="2025-07-07T00:14:24.437989078Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 7 00:14:24.439235 containerd[1507]: time="2025-07-07T00:14:24.439033393Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:24.441535 containerd[1507]: time="2025-07-07T00:14:24.441492792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:24.442571 containerd[1507]: time="2025-07-07T00:14:24.442104581Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", 
size \"71928924\" in 2.4910345s" Jul 7 00:14:24.442818 containerd[1507]: time="2025-07-07T00:14:24.442136090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 7 00:14:24.445649 containerd[1507]: time="2025-07-07T00:14:24.445585883Z" level=info msg="CreateContainer within sandbox \"61b9f6102690293e70d205c424dee04192e16ccde78c3aef75a91a40a0fdaae5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 00:14:24.461019 containerd[1507]: time="2025-07-07T00:14:24.460985286Z" level=info msg="CreateContainer within sandbox \"61b9f6102690293e70d205c424dee04192e16ccde78c3aef75a91a40a0fdaae5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6fc8fd57229c5a28c07e949103acf60d749c8cf129f082858643cbb7ec2d2e3c\"" Jul 7 00:14:24.461375 containerd[1507]: time="2025-07-07T00:14:24.461354444Z" level=info msg="StartContainer for \"6fc8fd57229c5a28c07e949103acf60d749c8cf129f082858643cbb7ec2d2e3c\"" Jul 7 00:14:24.498268 systemd[1]: Started cri-containerd-6fc8fd57229c5a28c07e949103acf60d749c8cf129f082858643cbb7ec2d2e3c.scope - libcontainer container 6fc8fd57229c5a28c07e949103acf60d749c8cf129f082858643cbb7ec2d2e3c. Jul 7 00:14:24.531060 containerd[1507]: time="2025-07-07T00:14:24.531021903Z" level=info msg="StartContainer for \"6fc8fd57229c5a28c07e949103acf60d749c8cf129f082858643cbb7ec2d2e3c\" returns successfully" Jul 7 00:14:24.888946 systemd[1]: cri-containerd-6fc8fd57229c5a28c07e949103acf60d749c8cf129f082858643cbb7ec2d2e3c.scope: Deactivated successfully. Jul 7 00:14:24.910085 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fc8fd57229c5a28c07e949103acf60d749c8cf129f082858643cbb7ec2d2e3c-rootfs.mount: Deactivated successfully. Jul 7 00:14:24.914105 containerd[1507]: time="2025-07-07T00:14:24.913944410Z" level=info msg="shim disconnected" id=6fc8fd57229c5a28c07e949103acf60d749c8cf129f082858643cbb7ec2d2e3c namespace=k8s.io Jul 7 00:14:24.914105 containerd[1507]: time="2025-07-07T00:14:24.914099457Z" level=warning msg="cleaning up after shim disconnected" id=6fc8fd57229c5a28c07e949103acf60d749c8cf129f082858643cbb7ec2d2e3c namespace=k8s.io Jul 7 00:14:24.914105 containerd[1507]: time="2025-07-07T00:14:24.914109156Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:14:24.958552 containerd[1507]: time="2025-07-07T00:14:24.958361031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 00:14:24.959340 kubelet[2712]: I0707 00:14:24.959314 2712 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 7 00:14:25.010856 systemd[1]: Created slice kubepods-besteffort-pod411d6daa_0c1d_48e1_908e_bd61e12d7879.slice - libcontainer container kubepods-besteffort-pod411d6daa_0c1d_48e1_908e_bd61e12d7879.slice. Jul 7 00:14:25.029768 systemd[1]: Created slice kubepods-besteffort-podd97012fa_4c0e_4428_b076_69d838ad32a3.slice - libcontainer container kubepods-besteffort-podd97012fa_4c0e_4428_b076_69d838ad32a3.slice. Jul 7 00:14:25.033648 systemd[1]: Created slice kubepods-besteffort-pod4afa15e9_a817_4cbf_8d61_a91ddd7b4568.slice - libcontainer container kubepods-besteffort-pod4afa15e9_a817_4cbf_8d61_a91ddd7b4568.slice. Jul 7 00:14:25.039998 systemd[1]: Created slice kubepods-besteffort-pod02a7d5a0_fb36_4269_a981_53d0ee8cb78e.slice - libcontainer container kubepods-besteffort-pod02a7d5a0_fb36_4269_a981_53d0ee8cb78e.slice. 
Jul 7 00:14:25.046279 systemd[1]: Created slice kubepods-burstable-pod298451b0_5619_4a6f_8aad_35320d360358.slice - libcontainer container kubepods-burstable-pod298451b0_5619_4a6f_8aad_35320d360358.slice. Jul 7 00:14:25.054522 systemd[1]: Created slice kubepods-burstable-pod41c99768_274d_463a_99f4_28ba08a6a5e5.slice - libcontainer container kubepods-burstable-pod41c99768_274d_463a_99f4_28ba08a6a5e5.slice. Jul 7 00:14:25.063015 systemd[1]: Created slice kubepods-besteffort-podcadda304_ba74_494d_a646_71ac7cfc9132.slice - libcontainer container kubepods-besteffort-podcadda304_ba74_494d_a646_71ac7cfc9132.slice. Jul 7 00:14:25.157389 kubelet[2712]: I0707 00:14:25.156033 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41c99768-274d-463a-99f4-28ba08a6a5e5-config-volume\") pod \"coredns-668d6bf9bc-tskgm\" (UID: \"41c99768-274d-463a-99f4-28ba08a6a5e5\") " pod="kube-system/coredns-668d6bf9bc-tskgm" Jul 7 00:14:25.157389 kubelet[2712]: I0707 00:14:25.156071 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cadda304-ba74-494d-a646-71ac7cfc9132-whisker-backend-key-pair\") pod \"whisker-7c6d6c95d9-49rxq\" (UID: \"cadda304-ba74-494d-a646-71ac7cfc9132\") " pod="calico-system/whisker-7c6d6c95d9-49rxq" Jul 7 00:14:25.157389 kubelet[2712]: I0707 00:14:25.156088 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d97012fa-4c0e-4428-b076-69d838ad32a3-config\") pod \"goldmane-768f4c5c69-j6knh\" (UID: \"d97012fa-4c0e-4428-b076-69d838ad32a3\") " pod="calico-system/goldmane-768f4c5c69-j6knh" Jul 7 00:14:25.157389 kubelet[2712]: I0707 00:14:25.156113 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d247f\" (UniqueName: \"kubernetes.io/projected/cadda304-ba74-494d-a646-71ac7cfc9132-kube-api-access-d247f\") pod \"whisker-7c6d6c95d9-49rxq\" (UID: \"cadda304-ba74-494d-a646-71ac7cfc9132\") " pod="calico-system/whisker-7c6d6c95d9-49rxq" Jul 7 00:14:25.157389 kubelet[2712]: I0707 00:14:25.156127 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfcnz\" (UniqueName: \"kubernetes.io/projected/298451b0-5619-4a6f-8aad-35320d360358-kube-api-access-sfcnz\") pod \"coredns-668d6bf9bc-9qbxm\" (UID: \"298451b0-5619-4a6f-8aad-35320d360358\") " pod="kube-system/coredns-668d6bf9bc-9qbxm" Jul 7 00:14:25.157562 kubelet[2712]: I0707 00:14:25.156162 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d97012fa-4c0e-4428-b076-69d838ad32a3-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-j6knh\" (UID: \"d97012fa-4c0e-4428-b076-69d838ad32a3\") " pod="calico-system/goldmane-768f4c5c69-j6knh" Jul 7 00:14:25.157562 kubelet[2712]: I0707 00:14:25.156186 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4afa15e9-a817-4cbf-8d61-a91ddd7b4568-calico-apiserver-certs\") pod \"calico-apiserver-6dc4856784-rg2k9\" (UID: \"4afa15e9-a817-4cbf-8d61-a91ddd7b4568\") " pod="calico-apiserver/calico-apiserver-6dc4856784-rg2k9" Jul 7 00:14:25.157562 kubelet[2712]: I0707 00:14:25.156198 2712 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/411d6daa-0c1d-48e1-908e-bd61e12d7879-calico-apiserver-certs\") pod \"calico-apiserver-6dc4856784-l6qqr\" (UID: \"411d6daa-0c1d-48e1-908e-bd61e12d7879\") " pod="calico-apiserver/calico-apiserver-6dc4856784-l6qqr" Jul 7 00:14:25.157562 kubelet[2712]: I0707 00:14:25.156232 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2z6f\" (UniqueName: \"kubernetes.io/projected/d97012fa-4c0e-4428-b076-69d838ad32a3-kube-api-access-r2z6f\") pod \"goldmane-768f4c5c69-j6knh\" (UID: \"d97012fa-4c0e-4428-b076-69d838ad32a3\") " pod="calico-system/goldmane-768f4c5c69-j6knh" Jul 7 00:14:25.157562 kubelet[2712]: I0707 00:14:25.156248 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7pt8\" (UniqueName: \"kubernetes.io/projected/4afa15e9-a817-4cbf-8d61-a91ddd7b4568-kube-api-access-b7pt8\") pod \"calico-apiserver-6dc4856784-rg2k9\" (UID: \"4afa15e9-a817-4cbf-8d61-a91ddd7b4568\") " pod="calico-apiserver/calico-apiserver-6dc4856784-rg2k9" Jul 7 00:14:25.157653 kubelet[2712]: I0707 00:14:25.156259 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgdm7\" (UniqueName: \"kubernetes.io/projected/41c99768-274d-463a-99f4-28ba08a6a5e5-kube-api-access-zgdm7\") pod \"coredns-668d6bf9bc-tskgm\" (UID: \"41c99768-274d-463a-99f4-28ba08a6a5e5\") " pod="kube-system/coredns-668d6bf9bc-tskgm" Jul 7 00:14:25.157653 kubelet[2712]: I0707 00:14:25.156272 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d97012fa-4c0e-4428-b076-69d838ad32a3-goldmane-key-pair\") pod \"goldmane-768f4c5c69-j6knh\" (UID: \"d97012fa-4c0e-4428-b076-69d838ad32a3\") " pod="calico-system/goldmane-768f4c5c69-j6knh" Jul 7 00:14:25.157653 kubelet[2712]: I0707 00:14:25.156288 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02a7d5a0-fb36-4269-a981-53d0ee8cb78e-tigera-ca-bundle\") pod \"calico-kube-controllers-c694d7bd6-rbrgf\" (UID: \"02a7d5a0-fb36-4269-a981-53d0ee8cb78e\") " pod="calico-system/calico-kube-controllers-c694d7bd6-rbrgf" Jul 7 00:14:25.157653 kubelet[2712]: I0707 00:14:25.156301 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cadda304-ba74-494d-a646-71ac7cfc9132-whisker-ca-bundle\") pod \"whisker-7c6d6c95d9-49rxq\" (UID: \"cadda304-ba74-494d-a646-71ac7cfc9132\") " pod="calico-system/whisker-7c6d6c95d9-49rxq" Jul 7 00:14:25.157653 kubelet[2712]: I0707 00:14:25.156319 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpbbz\" (UniqueName: \"kubernetes.io/projected/02a7d5a0-fb36-4269-a981-53d0ee8cb78e-kube-api-access-fpbbz\") pod \"calico-kube-controllers-c694d7bd6-rbrgf\" (UID: \"02a7d5a0-fb36-4269-a981-53d0ee8cb78e\") " pod="calico-system/calico-kube-controllers-c694d7bd6-rbrgf" Jul 7 00:14:25.157745 kubelet[2712]: I0707 00:14:25.156334 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7dtw\" (UniqueName: 
\"kubernetes.io/projected/411d6daa-0c1d-48e1-908e-bd61e12d7879-kube-api-access-x7dtw\") pod \"calico-apiserver-6dc4856784-l6qqr\" (UID: \"411d6daa-0c1d-48e1-908e-bd61e12d7879\") " pod="calico-apiserver/calico-apiserver-6dc4856784-l6qqr" Jul 7 00:14:25.157745 kubelet[2712]: I0707 00:14:25.156348 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/298451b0-5619-4a6f-8aad-35320d360358-config-volume\") pod \"coredns-668d6bf9bc-9qbxm\" (UID: \"298451b0-5619-4a6f-8aad-35320d360358\") " pod="kube-system/coredns-668d6bf9bc-9qbxm" Jul 7 00:14:25.325608 containerd[1507]: time="2025-07-07T00:14:25.325576402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc4856784-l6qqr,Uid:411d6daa-0c1d-48e1-908e-bd61e12d7879,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:14:25.332385 containerd[1507]: time="2025-07-07T00:14:25.332347324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-j6knh,Uid:d97012fa-4c0e-4428-b076-69d838ad32a3,Namespace:calico-system,Attempt:0,}" Jul 7 00:14:25.338509 containerd[1507]: time="2025-07-07T00:14:25.338476589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc4856784-rg2k9,Uid:4afa15e9-a817-4cbf-8d61-a91ddd7b4568,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:14:25.343939 containerd[1507]: time="2025-07-07T00:14:25.343916848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c694d7bd6-rbrgf,Uid:02a7d5a0-fb36-4269-a981-53d0ee8cb78e,Namespace:calico-system,Attempt:0,}" Jul 7 00:14:25.356134 containerd[1507]: time="2025-07-07T00:14:25.356096732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9qbxm,Uid:298451b0-5619-4a6f-8aad-35320d360358,Namespace:kube-system,Attempt:0,}" Jul 7 00:14:25.371337 containerd[1507]: time="2025-07-07T00:14:25.371264318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c6d6c95d9-49rxq,Uid:cadda304-ba74-494d-a646-71ac7cfc9132,Namespace:calico-system,Attempt:0,}" Jul 7 00:14:25.372130 containerd[1507]: time="2025-07-07T00:14:25.372023273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tskgm,Uid:41c99768-274d-463a-99f4-28ba08a6a5e5,Namespace:kube-system,Attempt:0,}" Jul 7 00:14:25.622434 containerd[1507]: time="2025-07-07T00:14:25.622349733Z" level=error msg="Failed to destroy network for sandbox \"b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.626056 containerd[1507]: time="2025-07-07T00:14:25.625409562Z" level=error msg="encountered an error cleaning up failed sandbox \"b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.626056 containerd[1507]: time="2025-07-07T00:14:25.625463351Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9qbxm,Uid:298451b0-5619-4a6f-8aad-35320d360358,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.626820 kubelet[2712]: E0707 00:14:25.626772 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.626953 kubelet[2712]: E0707 00:14:25.626938 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9qbxm" Jul 7 00:14:25.627238 kubelet[2712]: E0707 00:14:25.627198 2712 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9qbxm" Jul 7 00:14:25.628002 kubelet[2712]: E0707 00:14:25.627647 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9qbxm_kube-system(298451b0-5619-4a6f-8aad-35320d360358)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9qbxm_kube-system(298451b0-5619-4a6f-8aad-35320d360358)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9qbxm" podUID="298451b0-5619-4a6f-8aad-35320d360358" Jul 7 00:14:25.627825 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7-shm.mount: Deactivated successfully. Jul 7 00:14:25.651998 containerd[1507]: time="2025-07-07T00:14:25.650535285Z" level=error msg="Failed to destroy network for sandbox \"ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.654799 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4-shm.mount: Deactivated successfully. 
Jul 7 00:14:25.658018 containerd[1507]: time="2025-07-07T00:14:25.657777375Z" level=error msg="encountered an error cleaning up failed sandbox \"ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.658729 containerd[1507]: time="2025-07-07T00:14:25.658700509Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c6d6c95d9-49rxq,Uid:cadda304-ba74-494d-a646-71ac7cfc9132,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.659415 kubelet[2712]: E0707 00:14:25.659386 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.660320 kubelet[2712]: E0707 00:14:25.659580 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7c6d6c95d9-49rxq" Jul 7 00:14:25.660320 kubelet[2712]: E0707 00:14:25.659612 2712 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7c6d6c95d9-49rxq" Jul 7 00:14:25.660320 kubelet[2712]: E0707 00:14:25.659863 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7c6d6c95d9-49rxq_calico-system(cadda304-ba74-494d-a646-71ac7cfc9132)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7c6d6c95d9-49rxq_calico-system(cadda304-ba74-494d-a646-71ac7cfc9132)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7c6d6c95d9-49rxq" podUID="cadda304-ba74-494d-a646-71ac7cfc9132" Jul 7 00:14:25.676473 containerd[1507]: time="2025-07-07T00:14:25.671980734Z" level=error msg="Failed to destroy network for sandbox \"7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.677061 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b-shm.mount: Deactivated successfully. Jul 7 00:14:25.678494 containerd[1507]: time="2025-07-07T00:14:25.678368831Z" level=error msg="Failed to destroy network for sandbox \"0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.679064 containerd[1507]: time="2025-07-07T00:14:25.679036447Z" level=error msg="encountered an error cleaning up failed sandbox \"0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.679216 containerd[1507]: time="2025-07-07T00:14:25.679179855Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc4856784-rg2k9,Uid:4afa15e9-a817-4cbf-8d61-a91ddd7b4568,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.679725 kubelet[2712]: E0707 00:14:25.679451 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.679725 kubelet[2712]: E0707 00:14:25.679497 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dc4856784-rg2k9" Jul 7 00:14:25.679725 kubelet[2712]: E0707 00:14:25.679517 2712 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dc4856784-rg2k9" Jul 7 00:14:25.680232 kubelet[2712]: E0707 00:14:25.679550 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dc4856784-rg2k9_calico-apiserver(4afa15e9-a817-4cbf-8d61-a91ddd7b4568)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dc4856784-rg2k9_calico-apiserver(4afa15e9-a817-4cbf-8d61-a91ddd7b4568)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dc4856784-rg2k9" podUID="4afa15e9-a817-4cbf-8d61-a91ddd7b4568" Jul 7 00:14:25.681853 containerd[1507]: time="2025-07-07T00:14:25.681356636Z" level=error msg="Failed to destroy network for sandbox \"f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.681918 containerd[1507]: time="2025-07-07T00:14:25.681733388Z" level=error msg="encountered an error cleaning up failed sandbox \"f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.681965 containerd[1507]: time="2025-07-07T00:14:25.681864182Z" level=error msg="encountered an error cleaning up failed sandbox \"7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.681991 containerd[1507]: time="2025-07-07T00:14:25.681974388Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-j6knh,Uid:d97012fa-4c0e-4428-b076-69d838ad32a3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.682482 containerd[1507]: time="2025-07-07T00:14:25.682080225Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc4856784-l6qqr,Uid:411d6daa-0c1d-48e1-908e-bd61e12d7879,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.682567 kubelet[2712]: E0707 00:14:25.682106 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.682567 kubelet[2712]: E0707 00:14:25.682153 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-j6knh" Jul 7 00:14:25.682567 kubelet[2712]: E0707 00:14:25.682170 2712 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-j6knh" Jul 7 00:14:25.682669 kubelet[2712]: E0707 00:14:25.682198 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-j6knh_calico-system(d97012fa-4c0e-4428-b076-69d838ad32a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-j6knh_calico-system(d97012fa-4c0e-4428-b076-69d838ad32a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-j6knh" podUID="d97012fa-4c0e-4428-b076-69d838ad32a3" Jul 7 00:14:25.682669 kubelet[2712]: E0707 00:14:25.682264 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.682669 kubelet[2712]: E0707 00:14:25.682280 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dc4856784-l6qqr" Jul 7 00:14:25.682801 kubelet[2712]: E0707 00:14:25.682293 2712 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dc4856784-l6qqr" Jul 7 00:14:25.682801 kubelet[2712]: E0707 00:14:25.682313 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dc4856784-l6qqr_calico-apiserver(411d6daa-0c1d-48e1-908e-bd61e12d7879)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dc4856784-l6qqr_calico-apiserver(411d6daa-0c1d-48e1-908e-bd61e12d7879)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dc4856784-l6qqr" podUID="411d6daa-0c1d-48e1-908e-bd61e12d7879" Jul 7 00:14:25.684301 containerd[1507]: time="2025-07-07T00:14:25.684280250Z" level=error msg="Failed to destroy network for sandbox \"77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.684525 containerd[1507]: time="2025-07-07T00:14:25.684505871Z" level=error msg="encountered an error cleaning up failed sandbox \"77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.684767 containerd[1507]: time="2025-07-07T00:14:25.684591961Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tskgm,Uid:41c99768-274d-463a-99f4-28ba08a6a5e5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.684813 kubelet[2712]: E0707 00:14:25.684691 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.684813 kubelet[2712]: E0707 00:14:25.684717 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tskgm" Jul 7 00:14:25.684813 kubelet[2712]: E0707 00:14:25.684738 2712 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tskgm" Jul 7 00:14:25.684986 kubelet[2712]: E0707 00:14:25.684907 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-tskgm_kube-system(41c99768-274d-463a-99f4-28ba08a6a5e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-tskgm_kube-system(41c99768-274d-463a-99f4-28ba08a6a5e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tskgm" podUID="41c99768-274d-463a-99f4-28ba08a6a5e5" Jul 7 00:14:25.687484 containerd[1507]: time="2025-07-07T00:14:25.687442920Z" level=error msg="Failed to destroy network for sandbox \"a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.687721 containerd[1507]: time="2025-07-07T00:14:25.687687607Z" level=error msg="encountered an error cleaning up failed sandbox \"a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.687857 containerd[1507]: time="2025-07-07T00:14:25.687720728Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c694d7bd6-rbrgf,Uid:02a7d5a0-fb36-4269-a981-53d0ee8cb78e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.687928 kubelet[2712]: E0707 00:14:25.687906 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.687974 kubelet[2712]: E0707 00:14:25.687933 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c694d7bd6-rbrgf" Jul 7 00:14:25.688028 kubelet[2712]: E0707 00:14:25.687979 2712 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c694d7bd6-rbrgf" Jul 7 00:14:25.688028 kubelet[2712]: E0707 00:14:25.688010 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c694d7bd6-rbrgf_calico-system(02a7d5a0-fb36-4269-a981-53d0ee8cb78e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c694d7bd6-rbrgf_calico-system(02a7d5a0-fb36-4269-a981-53d0ee8cb78e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c694d7bd6-rbrgf" podUID="02a7d5a0-fb36-4269-a981-53d0ee8cb78e" Jul 7 00:14:25.879699 systemd[1]: Created slice kubepods-besteffort-pod2d14ecfa_e7e4_4bf3_a0e0_11dc0ae5c7c6.slice - libcontainer container kubepods-besteffort-pod2d14ecfa_e7e4_4bf3_a0e0_11dc0ae5c7c6.slice. Jul 7 00:14:25.883677 containerd[1507]: time="2025-07-07T00:14:25.883630498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5qbjk,Uid:2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6,Namespace:calico-system,Attempt:0,}" Jul 7 00:14:25.950121 containerd[1507]: time="2025-07-07T00:14:25.950048127Z" level=error msg="Failed to destroy network for sandbox \"11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.950461 containerd[1507]: time="2025-07-07T00:14:25.950429748Z" level=error msg="encountered an error cleaning up failed sandbox \"11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.950531 containerd[1507]: time="2025-07-07T00:14:25.950492906Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5qbjk,Uid:2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.950760 kubelet[2712]: E0707 00:14:25.950723 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:25.950832 kubelet[2712]: E0707 00:14:25.950790 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5qbjk" Jul 7 00:14:25.950866 kubelet[2712]: E0707 00:14:25.950825 2712 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-5qbjk" Jul 7 00:14:25.951100 kubelet[2712]: E0707 00:14:25.950879 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5qbjk_calico-system(2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5qbjk_calico-system(2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5qbjk" podUID="2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6" Jul 7 00:14:25.958731 kubelet[2712]: I0707 00:14:25.958701 2712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" Jul 7 00:14:25.960596 kubelet[2712]: I0707 00:14:25.960560 2712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" Jul 7 00:14:25.965519 containerd[1507]: time="2025-07-07T00:14:25.965304008Z" level=info msg="StopPodSandbox for \"a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71\"" Jul 7 00:14:25.965519 containerd[1507]: time="2025-07-07T00:14:25.965329405Z" level=info msg="StopPodSandbox for \"b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7\"" Jul 7 00:14:25.969279 containerd[1507]: time="2025-07-07T00:14:25.968933920Z" level=info msg="Ensure that sandbox a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71 in task-service has been cleanup successfully" Jul 7 00:14:25.975537 kubelet[2712]: I0707 00:14:25.973289 2712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" Jul 7 00:14:25.975639 containerd[1507]: time="2025-07-07T00:14:25.973740246Z" level=info msg="StopPodSandbox for \"ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4\"" Jul 7 00:14:25.975639 containerd[1507]: time="2025-07-07T00:14:25.973855330Z" level=info msg="Ensure that sandbox ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4 in task-service has been cleanup successfully" Jul 7 00:14:25.980247 kubelet[2712]: I0707 00:14:25.980213 2712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" Jul 7 00:14:25.980710 containerd[1507]: time="2025-07-07T00:14:25.980692877Z" level=info msg="StopPodSandbox for \"7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b\"" Jul 7 00:14:25.981386 kubelet[2712]: I0707 00:14:25.981366 2712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" Jul 7 00:14:25.982055 containerd[1507]: time="2025-07-07T00:14:25.982014753Z" level=info msg="Ensure that sandbox b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7 in task-service has been cleanup successfully" Jul 7 00:14:25.982225 containerd[1507]: time="2025-07-07T00:14:25.982209116Z" level=info msg="Ensure that sandbox 7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b in task-service has been cleanup 
successfully" Jul 7 00:14:25.984802 containerd[1507]: time="2025-07-07T00:14:25.984606849Z" level=info msg="StopPodSandbox for \"77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce\"" Jul 7 00:14:25.984802 containerd[1507]: time="2025-07-07T00:14:25.984703149Z" level=info msg="Ensure that sandbox 77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce in task-service has been cleanup successfully" Jul 7 00:14:25.987555 kubelet[2712]: I0707 00:14:25.987534 2712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" Jul 7 00:14:25.989543 containerd[1507]: time="2025-07-07T00:14:25.989525666Z" level=info msg="StopPodSandbox for \"11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1\"" Jul 7 00:14:25.989606 kubelet[2712]: I0707 00:14:25.989572 2712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" Jul 7 00:14:25.990051 containerd[1507]: time="2025-07-07T00:14:25.990034995Z" level=info msg="Ensure that sandbox 11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1 in task-service has been cleanup successfully" Jul 7 00:14:25.994953 containerd[1507]: time="2025-07-07T00:14:25.990187841Z" level=info msg="StopPodSandbox for \"0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501\"" Jul 7 00:14:25.994953 containerd[1507]: time="2025-07-07T00:14:25.994790889Z" level=info msg="Ensure that sandbox 0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501 in task-service has been cleanup successfully" Jul 7 00:14:25.995742 kubelet[2712]: I0707 00:14:25.995714 2712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" Jul 7 00:14:25.996702 containerd[1507]: time="2025-07-07T00:14:25.996359976Z" level=info msg="StopPodSandbox for \"f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f\"" Jul 7 00:14:25.997276 containerd[1507]: time="2025-07-07T00:14:25.997106508Z" level=info msg="Ensure that sandbox f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f in task-service has been cleanup successfully" Jul 7 00:14:26.060930 containerd[1507]: time="2025-07-07T00:14:26.060839590Z" level=error msg="StopPodSandbox for \"a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71\" failed" error="failed to destroy network for sandbox \"a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:26.061390 kubelet[2712]: E0707 00:14:26.061191 2712 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" Jul 7 00:14:26.061390 kubelet[2712]: E0707 00:14:26.061272 2712 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71"} Jul 7 00:14:26.061390 
kubelet[2712]: E0707 00:14:26.061328 2712 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"02a7d5a0-fb36-4269-a981-53d0ee8cb78e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:14:26.061390 kubelet[2712]: E0707 00:14:26.061366 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"02a7d5a0-fb36-4269-a981-53d0ee8cb78e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c694d7bd6-rbrgf" podUID="02a7d5a0-fb36-4269-a981-53d0ee8cb78e" Jul 7 00:14:26.069336 containerd[1507]: time="2025-07-07T00:14:26.069116479Z" level=error msg="StopPodSandbox for \"b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7\" failed" error="failed to destroy network for sandbox \"b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:26.069654 kubelet[2712]: E0707 00:14:26.069374 2712 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" Jul 7 00:14:26.069654 kubelet[2712]: E0707 00:14:26.069417 2712 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7"} Jul 7 00:14:26.069654 kubelet[2712]: E0707 00:14:26.069442 2712 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"298451b0-5619-4a6f-8aad-35320d360358\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:14:26.069654 kubelet[2712]: E0707 00:14:26.069460 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"298451b0-5619-4a6f-8aad-35320d360358\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-668d6bf9bc-9qbxm" podUID="298451b0-5619-4a6f-8aad-35320d360358" Jul 7 00:14:26.083151 containerd[1507]: time="2025-07-07T00:14:26.083062938Z" level=error msg="StopPodSandbox for \"7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b\" failed" error="failed to destroy network for sandbox \"7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:26.083530 kubelet[2712]: E0707 00:14:26.083391 2712 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" Jul 7 00:14:26.083530 kubelet[2712]: E0707 00:14:26.083441 2712 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b"} Jul 7 00:14:26.083530 kubelet[2712]: E0707 00:14:26.083469 2712 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d97012fa-4c0e-4428-b076-69d838ad32a3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:14:26.083530 kubelet[2712]: E0707 00:14:26.083490 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d97012fa-4c0e-4428-b076-69d838ad32a3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-j6knh" podUID="d97012fa-4c0e-4428-b076-69d838ad32a3" Jul 7 00:14:26.084797 containerd[1507]: time="2025-07-07T00:14:26.084771020Z" level=error msg="StopPodSandbox for \"f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f\" failed" error="failed to destroy network for sandbox \"f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:26.085115 kubelet[2712]: E0707 00:14:26.085078 2712 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" Jul 7 
00:14:26.085205 kubelet[2712]: E0707 00:14:26.085130 2712 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f"} Jul 7 00:14:26.085234 kubelet[2712]: E0707 00:14:26.085190 2712 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"411d6daa-0c1d-48e1-908e-bd61e12d7879\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:14:26.085234 kubelet[2712]: E0707 00:14:26.085219 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"411d6daa-0c1d-48e1-908e-bd61e12d7879\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dc4856784-l6qqr" podUID="411d6daa-0c1d-48e1-908e-bd61e12d7879" Jul 7 00:14:26.086119 containerd[1507]: time="2025-07-07T00:14:26.086096057Z" level=error msg="StopPodSandbox for \"ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4\" failed" error="failed to destroy network for sandbox \"ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:26.086664 kubelet[2712]: E0707 00:14:26.086639 2712 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" Jul 7 00:14:26.086708 kubelet[2712]: E0707 00:14:26.086671 2712 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4"} Jul 7 00:14:26.086708 kubelet[2712]: E0707 00:14:26.086699 2712 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cadda304-ba74-494d-a646-71ac7cfc9132\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:14:26.086775 kubelet[2712]: E0707 00:14:26.086716 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cadda304-ba74-494d-a646-71ac7cfc9132\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7c6d6c95d9-49rxq" podUID="cadda304-ba74-494d-a646-71ac7cfc9132" Jul 7 00:14:26.089122 containerd[1507]: time="2025-07-07T00:14:26.089092137Z" level=error msg="StopPodSandbox for \"77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce\" failed" error="failed to destroy network for sandbox \"77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:26.089385 kubelet[2712]: E0707 00:14:26.089365 2712 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" Jul 7 00:14:26.089502 kubelet[2712]: E0707 00:14:26.089468 2712 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce"} Jul 7 00:14:26.089570 kubelet[2712]: E0707 00:14:26.089559 2712 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"41c99768-274d-463a-99f4-28ba08a6a5e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:14:26.089656 kubelet[2712]: E0707 00:14:26.089641 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"41c99768-274d-463a-99f4-28ba08a6a5e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tskgm" podUID="41c99768-274d-463a-99f4-28ba08a6a5e5" Jul 7 00:14:26.090666 containerd[1507]: time="2025-07-07T00:14:26.090629780Z" level=error msg="StopPodSandbox for \"0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501\" failed" error="failed to destroy network for sandbox \"0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:26.090749 kubelet[2712]: E0707 00:14:26.090732 2712 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" Jul 7 00:14:26.090777 kubelet[2712]: E0707 00:14:26.090754 2712 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501"} Jul 7 00:14:26.090777 kubelet[2712]: E0707 00:14:26.090772 2712 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4afa15e9-a817-4cbf-8d61-a91ddd7b4568\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:14:26.091183 kubelet[2712]: E0707 00:14:26.090788 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4afa15e9-a817-4cbf-8d61-a91ddd7b4568\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dc4856784-rg2k9" podUID="4afa15e9-a817-4cbf-8d61-a91ddd7b4568" Jul 7 00:14:26.092565 containerd[1507]: time="2025-07-07T00:14:26.092536885Z" level=error msg="StopPodSandbox for \"11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1\" failed" error="failed to destroy network for sandbox \"11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:14:26.092683 kubelet[2712]: E0707 00:14:26.092652 2712 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" Jul 7 00:14:26.092720 kubelet[2712]: E0707 00:14:26.092704 2712 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1"} Jul 7 00:14:26.092738 kubelet[2712]: E0707 00:14:26.092725 2712 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:14:26.092779 kubelet[2712]: E0707 00:14:26.092740 
2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5qbjk" podUID="2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6" Jul 7 00:14:26.456635 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce-shm.mount: Deactivated successfully. Jul 7 00:14:26.456718 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71-shm.mount: Deactivated successfully. Jul 7 00:14:26.456774 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501-shm.mount: Deactivated successfully. Jul 7 00:14:26.456823 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f-shm.mount: Deactivated successfully. Jul 7 00:14:29.037656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1152030663.mount: Deactivated successfully. Jul 7 00:14:29.073350 containerd[1507]: time="2025-07-07T00:14:29.070225673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:29.073676 containerd[1507]: time="2025-07-07T00:14:29.070751672Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 7 00:14:29.092563 containerd[1507]: time="2025-07-07T00:14:29.091982515Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:29.093312 containerd[1507]: time="2025-07-07T00:14:29.092807206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:29.094067 containerd[1507]: time="2025-07-07T00:14:29.094037387Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 4.134119401s" Jul 7 00:14:29.094579 containerd[1507]: time="2025-07-07T00:14:29.094066382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 7 00:14:29.125078 containerd[1507]: time="2025-07-07T00:14:29.125033099Z" level=info msg="CreateContainer within sandbox \"61b9f6102690293e70d205c424dee04192e16ccde78c3aef75a91a40a0fdaae5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 00:14:29.163789 containerd[1507]: time="2025-07-07T00:14:29.163742756Z" level=info msg="CreateContainer within sandbox \"61b9f6102690293e70d205c424dee04192e16ccde78c3aef75a91a40a0fdaae5\" for 
&ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bddcb7dc119b01f133257637249e1b967923a2b7916e8ef7a5a45a27a4901068\"" Jul 7 00:14:29.164264 containerd[1507]: time="2025-07-07T00:14:29.164245251Z" level=info msg="StartContainer for \"bddcb7dc119b01f133257637249e1b967923a2b7916e8ef7a5a45a27a4901068\"" Jul 7 00:14:29.209309 systemd[1]: Started cri-containerd-bddcb7dc119b01f133257637249e1b967923a2b7916e8ef7a5a45a27a4901068.scope - libcontainer container bddcb7dc119b01f133257637249e1b967923a2b7916e8ef7a5a45a27a4901068. Jul 7 00:14:29.233699 containerd[1507]: time="2025-07-07T00:14:29.233655034Z" level=info msg="StartContainer for \"bddcb7dc119b01f133257637249e1b967923a2b7916e8ef7a5a45a27a4901068\" returns successfully" Jul 7 00:14:29.319475 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 00:14:29.321355 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 7 00:14:29.442695 containerd[1507]: time="2025-07-07T00:14:29.442656950Z" level=info msg="StopPodSandbox for \"ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4\"" Jul 7 00:14:29.723479 containerd[1507]: 2025-07-07 00:14:29.526 [INFO][3876] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" Jul 7 00:14:29.723479 containerd[1507]: 2025-07-07 00:14:29.527 [INFO][3876] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" iface="eth0" netns="/var/run/netns/cni-181243a5-67cf-d093-b734-f125551aedbf" Jul 7 00:14:29.723479 containerd[1507]: 2025-07-07 00:14:29.527 [INFO][3876] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" iface="eth0" netns="/var/run/netns/cni-181243a5-67cf-d093-b734-f125551aedbf" Jul 7 00:14:29.723479 containerd[1507]: 2025-07-07 00:14:29.527 [INFO][3876] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" iface="eth0" netns="/var/run/netns/cni-181243a5-67cf-d093-b734-f125551aedbf" Jul 7 00:14:29.723479 containerd[1507]: 2025-07-07 00:14:29.527 [INFO][3876] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" Jul 7 00:14:29.723479 containerd[1507]: 2025-07-07 00:14:29.527 [INFO][3876] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" Jul 7 00:14:29.723479 containerd[1507]: 2025-07-07 00:14:29.707 [INFO][3884] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" HandleID="k8s-pod-network.ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-whisker--7c6d6c95d9--49rxq-eth0" Jul 7 00:14:29.723479 containerd[1507]: 2025-07-07 00:14:29.709 [INFO][3884] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:14:29.723479 containerd[1507]: 2025-07-07 00:14:29.711 [INFO][3884] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:14:29.723479 containerd[1507]: 2025-07-07 00:14:29.718 [WARNING][3884] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" HandleID="k8s-pod-network.ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-whisker--7c6d6c95d9--49rxq-eth0" Jul 7 00:14:29.723479 containerd[1507]: 2025-07-07 00:14:29.718 [INFO][3884] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" HandleID="k8s-pod-network.ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-whisker--7c6d6c95d9--49rxq-eth0" Jul 7 00:14:29.723479 containerd[1507]: 2025-07-07 00:14:29.720 [INFO][3884] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:14:29.723479 containerd[1507]: 2025-07-07 00:14:29.721 [INFO][3876] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" Jul 7 00:14:29.724054 containerd[1507]: time="2025-07-07T00:14:29.723596690Z" level=info msg="TearDown network for sandbox \"ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4\" successfully" Jul 7 00:14:29.724054 containerd[1507]: time="2025-07-07T00:14:29.723622538Z" level=info msg="StopPodSandbox for \"ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4\" returns successfully" Jul 7 00:14:29.801394 kubelet[2712]: I0707 00:14:29.801307 2712 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d247f\" (UniqueName: \"kubernetes.io/projected/cadda304-ba74-494d-a646-71ac7cfc9132-kube-api-access-d247f\") pod \"cadda304-ba74-494d-a646-71ac7cfc9132\" (UID: \"cadda304-ba74-494d-a646-71ac7cfc9132\") " Jul 7 00:14:29.811160 kubelet[2712]: I0707 00:14:29.809965 2712 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cadda304-ba74-494d-a646-71ac7cfc9132-whisker-ca-bundle\") pod \"cadda304-ba74-494d-a646-71ac7cfc9132\" (UID: \"cadda304-ba74-494d-a646-71ac7cfc9132\") " Jul 7 00:14:29.811160 kubelet[2712]: I0707 00:14:29.810026 2712 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cadda304-ba74-494d-a646-71ac7cfc9132-whisker-backend-key-pair\") pod \"cadda304-ba74-494d-a646-71ac7cfc9132\" (UID: \"cadda304-ba74-494d-a646-71ac7cfc9132\") " Jul 7 00:14:29.816515 kubelet[2712]: I0707 00:14:29.815463 2712 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cadda304-ba74-494d-a646-71ac7cfc9132-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "cadda304-ba74-494d-a646-71ac7cfc9132" (UID: "cadda304-ba74-494d-a646-71ac7cfc9132"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 00:14:29.816603 kubelet[2712]: I0707 00:14:29.815926 2712 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cadda304-ba74-494d-a646-71ac7cfc9132-kube-api-access-d247f" (OuterVolumeSpecName: "kube-api-access-d247f") pod "cadda304-ba74-494d-a646-71ac7cfc9132" (UID: "cadda304-ba74-494d-a646-71ac7cfc9132"). InnerVolumeSpecName "kube-api-access-d247f". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 00:14:29.816753 kubelet[2712]: I0707 00:14:29.816725 2712 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cadda304-ba74-494d-a646-71ac7cfc9132-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "cadda304-ba74-494d-a646-71ac7cfc9132" (UID: "cadda304-ba74-494d-a646-71ac7cfc9132"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 00:14:29.902523 systemd[1]: Removed slice kubepods-besteffort-podcadda304_ba74_494d_a646_71ac7cfc9132.slice - libcontainer container kubepods-besteffort-podcadda304_ba74_494d_a646_71ac7cfc9132.slice. Jul 7 00:14:29.914265 kubelet[2712]: I0707 00:14:29.914225 2712 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cadda304-ba74-494d-a646-71ac7cfc9132-whisker-backend-key-pair\") on node \"ci-4081-3-4-f-11cbdd5b1a\" DevicePath \"\"" Jul 7 00:14:29.914365 kubelet[2712]: I0707 00:14:29.914272 2712 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d247f\" (UniqueName: \"kubernetes.io/projected/cadda304-ba74-494d-a646-71ac7cfc9132-kube-api-access-d247f\") on node \"ci-4081-3-4-f-11cbdd5b1a\" DevicePath \"\"" Jul 7 00:14:29.914365 kubelet[2712]: I0707 00:14:29.914290 2712 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cadda304-ba74-494d-a646-71ac7cfc9132-whisker-ca-bundle\") on node \"ci-4081-3-4-f-11cbdd5b1a\" DevicePath \"\"" Jul 7 00:14:30.045062 systemd[1]: run-netns-cni\x2d181243a5\x2d67cf\x2dd093\x2db734\x2df125551aedbf.mount: Deactivated successfully. Jul 7 00:14:30.045182 systemd[1]: var-lib-kubelet-pods-cadda304\x2dba74\x2d494d\x2da646\x2d71ac7cfc9132-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd247f.mount: Deactivated successfully. Jul 7 00:14:30.045253 systemd[1]: var-lib-kubelet-pods-cadda304\x2dba74\x2d494d\x2da646\x2d71ac7cfc9132-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 7 00:14:30.050802 kubelet[2712]: I0707 00:14:30.043227 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-q8gfm" podStartSLOduration=2.969750577 podStartE2EDuration="14.029917149s" podCreationTimestamp="2025-07-07 00:14:16 +0000 UTC" firstStartedPulling="2025-07-07 00:14:18.034490489 +0000 UTC m=+20.257803516" lastFinishedPulling="2025-07-07 00:14:29.094657061 +0000 UTC m=+31.317970088" observedRunningTime="2025-07-07 00:14:30.029271033 +0000 UTC m=+32.252584070" watchObservedRunningTime="2025-07-07 00:14:30.029917149 +0000 UTC m=+32.253230176" Jul 7 00:14:30.128384 systemd[1]: Created slice kubepods-besteffort-pod87caa419_a341_4ba3_97a5_81b6ada7ede6.slice - libcontainer container kubepods-besteffort-pod87caa419_a341_4ba3_97a5_81b6ada7ede6.slice. 
Jul 7 00:14:30.217656 kubelet[2712]: I0707 00:14:30.217549 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87caa419-a341-4ba3-97a5-81b6ada7ede6-whisker-ca-bundle\") pod \"whisker-59659f6cf5-86nr7\" (UID: \"87caa419-a341-4ba3-97a5-81b6ada7ede6\") " pod="calico-system/whisker-59659f6cf5-86nr7" Jul 7 00:14:30.217656 kubelet[2712]: I0707 00:14:30.217612 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/87caa419-a341-4ba3-97a5-81b6ada7ede6-whisker-backend-key-pair\") pod \"whisker-59659f6cf5-86nr7\" (UID: \"87caa419-a341-4ba3-97a5-81b6ada7ede6\") " pod="calico-system/whisker-59659f6cf5-86nr7" Jul 7 00:14:30.217656 kubelet[2712]: I0707 00:14:30.217657 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9t9x\" (UniqueName: \"kubernetes.io/projected/87caa419-a341-4ba3-97a5-81b6ada7ede6-kube-api-access-x9t9x\") pod \"whisker-59659f6cf5-86nr7\" (UID: \"87caa419-a341-4ba3-97a5-81b6ada7ede6\") " pod="calico-system/whisker-59659f6cf5-86nr7" Jul 7 00:14:30.432952 containerd[1507]: time="2025-07-07T00:14:30.432773275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59659f6cf5-86nr7,Uid:87caa419-a341-4ba3-97a5-81b6ada7ede6,Namespace:calico-system,Attempt:0,}" Jul 7 00:14:30.567935 systemd-networkd[1399]: calid28733f7665: Link UP Jul 7 00:14:30.568068 systemd-networkd[1399]: calid28733f7665: Gained carrier Jul 7 00:14:30.584071 containerd[1507]: 2025-07-07 00:14:30.470 [INFO][3905] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:14:30.584071 containerd[1507]: 2025-07-07 00:14:30.481 [INFO][3905] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--f--11cbdd5b1a-k8s-whisker--59659f6cf5--86nr7-eth0 whisker-59659f6cf5- calico-system 87caa419-a341-4ba3-97a5-81b6ada7ede6 868 0 2025-07-07 00:14:30 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:59659f6cf5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-4-f-11cbdd5b1a whisker-59659f6cf5-86nr7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid28733f7665 [] [] }} ContainerID="42afc3880358c09726070ebb7111d0acc1d3b5806d66b24c59e4698409986de3" Namespace="calico-system" Pod="whisker-59659f6cf5-86nr7" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-whisker--59659f6cf5--86nr7-" Jul 7 00:14:30.584071 containerd[1507]: 2025-07-07 00:14:30.481 [INFO][3905] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="42afc3880358c09726070ebb7111d0acc1d3b5806d66b24c59e4698409986de3" Namespace="calico-system" Pod="whisker-59659f6cf5-86nr7" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-whisker--59659f6cf5--86nr7-eth0" Jul 7 00:14:30.584071 containerd[1507]: 2025-07-07 00:14:30.511 [INFO][3917] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="42afc3880358c09726070ebb7111d0acc1d3b5806d66b24c59e4698409986de3" HandleID="k8s-pod-network.42afc3880358c09726070ebb7111d0acc1d3b5806d66b24c59e4698409986de3" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-whisker--59659f6cf5--86nr7-eth0" Jul 7 00:14:30.584071 containerd[1507]: 2025-07-07 00:14:30.511 [INFO][3917] ipam/ipam_plugin.go 265: 
Auto assigning IP ContainerID="42afc3880358c09726070ebb7111d0acc1d3b5806d66b24c59e4698409986de3" HandleID="k8s-pod-network.42afc3880358c09726070ebb7111d0acc1d3b5806d66b24c59e4698409986de3" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-whisker--59659f6cf5--86nr7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f210), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-f-11cbdd5b1a", "pod":"whisker-59659f6cf5-86nr7", "timestamp":"2025-07-07 00:14:30.511075832 +0000 UTC"}, Hostname:"ci-4081-3-4-f-11cbdd5b1a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:14:30.584071 containerd[1507]: 2025-07-07 00:14:30.511 [INFO][3917] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:14:30.584071 containerd[1507]: 2025-07-07 00:14:30.511 [INFO][3917] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:14:30.584071 containerd[1507]: 2025-07-07 00:14:30.511 [INFO][3917] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-f-11cbdd5b1a' Jul 7 00:14:30.584071 containerd[1507]: 2025-07-07 00:14:30.521 [INFO][3917] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.42afc3880358c09726070ebb7111d0acc1d3b5806d66b24c59e4698409986de3" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:30.584071 containerd[1507]: 2025-07-07 00:14:30.532 [INFO][3917] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:30.584071 containerd[1507]: 2025-07-07 00:14:30.537 [INFO][3917] ipam/ipam.go 511: Trying affinity for 192.168.57.0/26 host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:30.584071 containerd[1507]: 2025-07-07 00:14:30.539 [INFO][3917] ipam/ipam.go 158: Attempting to load block cidr=192.168.57.0/26 host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:30.584071 containerd[1507]: 2025-07-07 00:14:30.542 [INFO][3917] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.57.0/26 host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:30.584071 containerd[1507]: 2025-07-07 00:14:30.542 [INFO][3917] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.57.0/26 handle="k8s-pod-network.42afc3880358c09726070ebb7111d0acc1d3b5806d66b24c59e4698409986de3" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:30.584071 containerd[1507]: 2025-07-07 00:14:30.544 [INFO][3917] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.42afc3880358c09726070ebb7111d0acc1d3b5806d66b24c59e4698409986de3 Jul 7 00:14:30.584071 containerd[1507]: 2025-07-07 00:14:30.547 [INFO][3917] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.57.0/26 handle="k8s-pod-network.42afc3880358c09726070ebb7111d0acc1d3b5806d66b24c59e4698409986de3" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:30.584071 containerd[1507]: 2025-07-07 00:14:30.552 [INFO][3917] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.57.1/26] block=192.168.57.0/26 handle="k8s-pod-network.42afc3880358c09726070ebb7111d0acc1d3b5806d66b24c59e4698409986de3" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:30.584071 containerd[1507]: 2025-07-07 00:14:30.552 [INFO][3917] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.57.1/26] handle="k8s-pod-network.42afc3880358c09726070ebb7111d0acc1d3b5806d66b24c59e4698409986de3" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:30.584071 containerd[1507]: 2025-07-07 00:14:30.552 [INFO][3917] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:14:30.584071 containerd[1507]: 2025-07-07 00:14:30.552 [INFO][3917] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.57.1/26] IPv6=[] ContainerID="42afc3880358c09726070ebb7111d0acc1d3b5806d66b24c59e4698409986de3" HandleID="k8s-pod-network.42afc3880358c09726070ebb7111d0acc1d3b5806d66b24c59e4698409986de3" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-whisker--59659f6cf5--86nr7-eth0" Jul 7 00:14:30.584539 containerd[1507]: 2025-07-07 00:14:30.555 [INFO][3905] cni-plugin/k8s.go 418: Populated endpoint ContainerID="42afc3880358c09726070ebb7111d0acc1d3b5806d66b24c59e4698409986de3" Namespace="calico-system" Pod="whisker-59659f6cf5-86nr7" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-whisker--59659f6cf5--86nr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-whisker--59659f6cf5--86nr7-eth0", GenerateName:"whisker-59659f6cf5-", Namespace:"calico-system", SelfLink:"", UID:"87caa419-a341-4ba3-97a5-81b6ada7ede6", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59659f6cf5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"", Pod:"whisker-59659f6cf5-86nr7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.57.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid28733f7665", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:30.584539 containerd[1507]: 2025-07-07 00:14:30.555 [INFO][3905] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.1/32] ContainerID="42afc3880358c09726070ebb7111d0acc1d3b5806d66b24c59e4698409986de3" Namespace="calico-system" Pod="whisker-59659f6cf5-86nr7" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-whisker--59659f6cf5--86nr7-eth0" Jul 7 00:14:30.584539 containerd[1507]: 2025-07-07 00:14:30.555 [INFO][3905] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid28733f7665 ContainerID="42afc3880358c09726070ebb7111d0acc1d3b5806d66b24c59e4698409986de3" Namespace="calico-system" Pod="whisker-59659f6cf5-86nr7" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-whisker--59659f6cf5--86nr7-eth0" Jul 7 00:14:30.584539 containerd[1507]: 2025-07-07 00:14:30.565 [INFO][3905] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="42afc3880358c09726070ebb7111d0acc1d3b5806d66b24c59e4698409986de3" Namespace="calico-system" Pod="whisker-59659f6cf5-86nr7" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-whisker--59659f6cf5--86nr7-eth0" Jul 7 00:14:30.584539 containerd[1507]: 2025-07-07 00:14:30.566 [INFO][3905] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="42afc3880358c09726070ebb7111d0acc1d3b5806d66b24c59e4698409986de3" Namespace="calico-system" Pod="whisker-59659f6cf5-86nr7" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-whisker--59659f6cf5--86nr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-whisker--59659f6cf5--86nr7-eth0", GenerateName:"whisker-59659f6cf5-", Namespace:"calico-system", SelfLink:"", UID:"87caa419-a341-4ba3-97a5-81b6ada7ede6", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59659f6cf5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"42afc3880358c09726070ebb7111d0acc1d3b5806d66b24c59e4698409986de3", Pod:"whisker-59659f6cf5-86nr7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.57.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid28733f7665", MAC:"0a:0d:5c:1d:e3:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:30.584539 containerd[1507]: 2025-07-07 00:14:30.580 [INFO][3905] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="42afc3880358c09726070ebb7111d0acc1d3b5806d66b24c59e4698409986de3" Namespace="calico-system" Pod="whisker-59659f6cf5-86nr7" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-whisker--59659f6cf5--86nr7-eth0" Jul 7 00:14:30.624348 containerd[1507]: time="2025-07-07T00:14:30.624014420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:14:30.624348 containerd[1507]: time="2025-07-07T00:14:30.624138073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:14:30.624348 containerd[1507]: time="2025-07-07T00:14:30.624247048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:14:30.624596 containerd[1507]: time="2025-07-07T00:14:30.624489304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:14:30.648475 systemd[1]: Started cri-containerd-42afc3880358c09726070ebb7111d0acc1d3b5806d66b24c59e4698409986de3.scope - libcontainer container 42afc3880358c09726070ebb7111d0acc1d3b5806d66b24c59e4698409986de3. 
Jul 7 00:14:30.749641 containerd[1507]: time="2025-07-07T00:14:30.749573201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59659f6cf5-86nr7,Uid:87caa419-a341-4ba3-97a5-81b6ada7ede6,Namespace:calico-system,Attempt:0,} returns sandbox id \"42afc3880358c09726070ebb7111d0acc1d3b5806d66b24c59e4698409986de3\"" Jul 7 00:14:30.753165 containerd[1507]: time="2025-07-07T00:14:30.752398879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 00:14:31.034738 kernel: bpftool[4075]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 7 00:14:31.357448 systemd-networkd[1399]: vxlan.calico: Link UP Jul 7 00:14:31.357692 systemd-networkd[1399]: vxlan.calico: Gained carrier Jul 7 00:14:31.872630 kubelet[2712]: I0707 00:14:31.872560 2712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cadda304-ba74-494d-a646-71ac7cfc9132" path="/var/lib/kubelet/pods/cadda304-ba74-494d-a646-71ac7cfc9132/volumes" Jul 7 00:14:32.070019 systemd-networkd[1399]: calid28733f7665: Gained IPv6LL Jul 7 00:14:32.291899 containerd[1507]: time="2025-07-07T00:14:32.291830400Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:32.292916 containerd[1507]: time="2025-07-07T00:14:32.292876164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 7 00:14:32.293818 containerd[1507]: time="2025-07-07T00:14:32.293778968Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:32.295597 containerd[1507]: time="2025-07-07T00:14:32.295551195Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:32.296172 containerd[1507]: time="2025-07-07T00:14:32.296102055Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.543677386s" Jul 7 00:14:32.296172 containerd[1507]: time="2025-07-07T00:14:32.296129947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 7 00:14:32.298717 containerd[1507]: time="2025-07-07T00:14:32.298686936Z" level=info msg="CreateContainer within sandbox \"42afc3880358c09726070ebb7111d0acc1d3b5806d66b24c59e4698409986de3\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 00:14:32.323054 containerd[1507]: time="2025-07-07T00:14:32.323008977Z" level=info msg="CreateContainer within sandbox \"42afc3880358c09726070ebb7111d0acc1d3b5806d66b24c59e4698409986de3\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"323089ca908b54fda6b69a2fd45237a38ee76da3bf576d777039a216c7132f27\"" Jul 7 00:14:32.324607 containerd[1507]: time="2025-07-07T00:14:32.323738063Z" level=info msg="StartContainer for \"323089ca908b54fda6b69a2fd45237a38ee76da3bf576d777039a216c7132f27\"" Jul 7 00:14:32.353300 systemd[1]: Started 
cri-containerd-323089ca908b54fda6b69a2fd45237a38ee76da3bf576d777039a216c7132f27.scope - libcontainer container 323089ca908b54fda6b69a2fd45237a38ee76da3bf576d777039a216c7132f27. Jul 7 00:14:32.389462 containerd[1507]: time="2025-07-07T00:14:32.389400659Z" level=info msg="StartContainer for \"323089ca908b54fda6b69a2fd45237a38ee76da3bf576d777039a216c7132f27\" returns successfully" Jul 7 00:14:32.391517 containerd[1507]: time="2025-07-07T00:14:32.391367403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 00:14:32.964291 systemd-networkd[1399]: vxlan.calico: Gained IPv6LL Jul 7 00:14:34.370888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount215187401.mount: Deactivated successfully. Jul 7 00:14:34.395549 containerd[1507]: time="2025-07-07T00:14:34.394735848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:34.406674 containerd[1507]: time="2025-07-07T00:14:34.406600345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 7 00:14:34.407621 containerd[1507]: time="2025-07-07T00:14:34.407578126Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:34.409422 containerd[1507]: time="2025-07-07T00:14:34.409376763Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:34.410383 containerd[1507]: time="2025-07-07T00:14:34.410351870Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.018956836s" Jul 7 00:14:34.410428 containerd[1507]: time="2025-07-07T00:14:34.410385824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 7 00:14:34.412326 containerd[1507]: time="2025-07-07T00:14:34.412190481Z" level=info msg="CreateContainer within sandbox \"42afc3880358c09726070ebb7111d0acc1d3b5806d66b24c59e4698409986de3\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 00:14:34.432615 containerd[1507]: time="2025-07-07T00:14:34.432536369Z" level=info msg="CreateContainer within sandbox \"42afc3880358c09726070ebb7111d0acc1d3b5806d66b24c59e4698409986de3\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"fa42a75242234d32f29b64194a8b21441b1fda96dd5564e0c85319ebfecc5452\"" Jul 7 00:14:34.433186 containerd[1507]: time="2025-07-07T00:14:34.433157895Z" level=info msg="StartContainer for \"fa42a75242234d32f29b64194a8b21441b1fda96dd5564e0c85319ebfecc5452\"" Jul 7 00:14:34.461302 systemd[1]: Started cri-containerd-fa42a75242234d32f29b64194a8b21441b1fda96dd5564e0c85319ebfecc5452.scope - libcontainer container fa42a75242234d32f29b64194a8b21441b1fda96dd5564e0c85319ebfecc5452. 
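Both whisker images arrive by digest: containerd logs the repo tag, repo digest, unpacked size, and wall-clock pull time. From the figures above, the whisker-backend pull moved 33083477 bytes in 2.018956836s, roughly 15–16 MiB/s. A throwaway Go calculation of that rate:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures from the whisker-backend pull log lines above.
	const bytesRead = 33083477
	d, err := time.ParseDuration("2.018956836s")
	if err != nil {
		panic(err)
	}
	rate := float64(bytesRead) / d.Seconds() / (1 << 20) // MiB/s
	fmt.Printf("pulled %d bytes in %s (~%.1f MiB/s)\n", bytesRead, d, rate)
}
```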
Jul 7 00:14:34.499698 containerd[1507]: time="2025-07-07T00:14:34.499656368Z" level=info msg="StartContainer for \"fa42a75242234d32f29b64194a8b21441b1fda96dd5564e0c85319ebfecc5452\" returns successfully" Jul 7 00:14:36.871264 containerd[1507]: time="2025-07-07T00:14:36.871052227Z" level=info msg="StopPodSandbox for \"b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7\"" Jul 7 00:14:36.873099 containerd[1507]: time="2025-07-07T00:14:36.872063898Z" level=info msg="StopPodSandbox for \"f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f\"" Jul 7 00:14:36.873416 containerd[1507]: time="2025-07-07T00:14:36.873384686Z" level=info msg="StopPodSandbox for \"0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501\"" Jul 7 00:14:36.936293 kubelet[2712]: I0707 00:14:36.935435 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-59659f6cf5-86nr7" podStartSLOduration=3.276011873 podStartE2EDuration="6.935418127s" podCreationTimestamp="2025-07-07 00:14:30 +0000 UTC" firstStartedPulling="2025-07-07 00:14:30.751763633 +0000 UTC m=+32.975076661" lastFinishedPulling="2025-07-07 00:14:34.411169878 +0000 UTC m=+36.634482915" observedRunningTime="2025-07-07 00:14:35.051889585 +0000 UTC m=+37.275202622" watchObservedRunningTime="2025-07-07 00:14:36.935418127 +0000 UTC m=+39.158731155" Jul 7 00:14:36.988392 containerd[1507]: 2025-07-07 00:14:36.935 [INFO][4329] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" Jul 7 00:14:36.988392 containerd[1507]: 2025-07-07 00:14:36.937 [INFO][4329] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" iface="eth0" netns="/var/run/netns/cni-73dff50d-45bc-c7ec-edfc-9874a999c18a" Jul 7 00:14:36.988392 containerd[1507]: 2025-07-07 00:14:36.937 [INFO][4329] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" iface="eth0" netns="/var/run/netns/cni-73dff50d-45bc-c7ec-edfc-9874a999c18a" Jul 7 00:14:36.988392 containerd[1507]: 2025-07-07 00:14:36.937 [INFO][4329] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" iface="eth0" netns="/var/run/netns/cni-73dff50d-45bc-c7ec-edfc-9874a999c18a" Jul 7 00:14:36.988392 containerd[1507]: 2025-07-07 00:14:36.937 [INFO][4329] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" Jul 7 00:14:36.988392 containerd[1507]: 2025-07-07 00:14:36.937 [INFO][4329] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" Jul 7 00:14:36.988392 containerd[1507]: 2025-07-07 00:14:36.974 [INFO][4352] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" HandleID="k8s-pod-network.b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--9qbxm-eth0" Jul 7 00:14:36.988392 containerd[1507]: 2025-07-07 00:14:36.974 [INFO][4352] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 7 00:14:36.988392 containerd[1507]: 2025-07-07 00:14:36.975 [INFO][4352] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:14:36.988392 containerd[1507]: 2025-07-07 00:14:36.979 [WARNING][4352] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" HandleID="k8s-pod-network.b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--9qbxm-eth0" Jul 7 00:14:36.988392 containerd[1507]: 2025-07-07 00:14:36.979 [INFO][4352] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" HandleID="k8s-pod-network.b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--9qbxm-eth0" Jul 7 00:14:36.988392 containerd[1507]: 2025-07-07 00:14:36.981 [INFO][4352] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:14:36.988392 containerd[1507]: 2025-07-07 00:14:36.983 [INFO][4329] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" Jul 7 00:14:36.990057 containerd[1507]: time="2025-07-07T00:14:36.989419517Z" level=info msg="TearDown network for sandbox \"b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7\" successfully" Jul 7 00:14:36.990944 systemd[1]: run-netns-cni\x2d73dff50d\x2d45bc\x2dc7ec\x2dedfc\x2d9874a999c18a.mount: Deactivated successfully. Jul 7 00:14:36.992476 containerd[1507]: time="2025-07-07T00:14:36.992179899Z" level=info msg="StopPodSandbox for \"b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7\" returns successfully" Jul 7 00:14:36.995382 containerd[1507]: 2025-07-07 00:14:36.945 [INFO][4336] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" Jul 7 00:14:36.995382 containerd[1507]: 2025-07-07 00:14:36.945 [INFO][4336] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" iface="eth0" netns="/var/run/netns/cni-f9f8642c-eb3e-f9a4-e53a-2e74cb622e1a" Jul 7 00:14:36.995382 containerd[1507]: 2025-07-07 00:14:36.946 [INFO][4336] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" iface="eth0" netns="/var/run/netns/cni-f9f8642c-eb3e-f9a4-e53a-2e74cb622e1a" Jul 7 00:14:36.995382 containerd[1507]: 2025-07-07 00:14:36.946 [INFO][4336] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" iface="eth0" netns="/var/run/netns/cni-f9f8642c-eb3e-f9a4-e53a-2e74cb622e1a" Jul 7 00:14:36.995382 containerd[1507]: 2025-07-07 00:14:36.946 [INFO][4336] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" Jul 7 00:14:36.995382 containerd[1507]: 2025-07-07 00:14:36.946 [INFO][4336] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" Jul 7 00:14:36.995382 containerd[1507]: 2025-07-07 00:14:36.976 [INFO][4359] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" HandleID="k8s-pod-network.f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--l6qqr-eth0" Jul 7 00:14:36.995382 containerd[1507]: 2025-07-07 00:14:36.978 [INFO][4359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:14:36.995382 containerd[1507]: 2025-07-07 00:14:36.981 [INFO][4359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:14:36.995382 containerd[1507]: 2025-07-07 00:14:36.987 [WARNING][4359] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" HandleID="k8s-pod-network.f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--l6qqr-eth0" Jul 7 00:14:36.995382 containerd[1507]: 2025-07-07 00:14:36.987 [INFO][4359] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" HandleID="k8s-pod-network.f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--l6qqr-eth0" Jul 7 00:14:36.995382 containerd[1507]: 2025-07-07 00:14:36.990 [INFO][4359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:14:36.995382 containerd[1507]: 2025-07-07 00:14:36.992 [INFO][4336] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" Jul 7 00:14:36.998088 containerd[1507]: time="2025-07-07T00:14:36.993979036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9qbxm,Uid:298451b0-5619-4a6f-8aad-35320d360358,Namespace:kube-system,Attempt:1,}" Jul 7 00:14:36.998088 containerd[1507]: time="2025-07-07T00:14:36.995798931Z" level=info msg="TearDown network for sandbox \"f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f\" successfully" Jul 7 00:14:36.998088 containerd[1507]: time="2025-07-07T00:14:36.995814700Z" level=info msg="StopPodSandbox for \"f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f\" returns successfully" Jul 7 00:14:36.998088 containerd[1507]: time="2025-07-07T00:14:36.998020821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc4856784-l6qqr,Uid:411d6daa-0c1d-48e1-908e-bd61e12d7879,Namespace:calico-apiserver,Attempt:1,}" Jul 7 00:14:36.996432 systemd[1]: run-netns-cni\x2df9f8642c\x2deb3e\x2df9a4\x2de53a\x2d2e74cb622e1a.mount: Deactivated successfully. 
Jul 7 00:14:37.007681 containerd[1507]: 2025-07-07 00:14:36.941 [INFO][4337] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" Jul 7 00:14:37.007681 containerd[1507]: 2025-07-07 00:14:36.941 [INFO][4337] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" iface="eth0" netns="/var/run/netns/cni-3cd74b0d-585f-cd41-660b-1e4fc0dd1e62" Jul 7 00:14:37.007681 containerd[1507]: 2025-07-07 00:14:36.941 [INFO][4337] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" iface="eth0" netns="/var/run/netns/cni-3cd74b0d-585f-cd41-660b-1e4fc0dd1e62" Jul 7 00:14:37.007681 containerd[1507]: 2025-07-07 00:14:36.942 [INFO][4337] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" iface="eth0" netns="/var/run/netns/cni-3cd74b0d-585f-cd41-660b-1e4fc0dd1e62" Jul 7 00:14:37.007681 containerd[1507]: 2025-07-07 00:14:36.942 [INFO][4337] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" Jul 7 00:14:37.007681 containerd[1507]: 2025-07-07 00:14:36.942 [INFO][4337] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" Jul 7 00:14:37.007681 containerd[1507]: 2025-07-07 00:14:36.987 [INFO][4357] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" HandleID="k8s-pod-network.0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--rg2k9-eth0" Jul 7 00:14:37.007681 containerd[1507]: 2025-07-07 00:14:36.989 [INFO][4357] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:14:37.007681 containerd[1507]: 2025-07-07 00:14:36.990 [INFO][4357] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:14:37.007681 containerd[1507]: 2025-07-07 00:14:37.002 [WARNING][4357] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" HandleID="k8s-pod-network.0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--rg2k9-eth0" Jul 7 00:14:37.007681 containerd[1507]: 2025-07-07 00:14:37.002 [INFO][4357] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" HandleID="k8s-pod-network.0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--rg2k9-eth0" Jul 7 00:14:37.007681 containerd[1507]: 2025-07-07 00:14:37.003 [INFO][4357] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:14:37.007681 containerd[1507]: 2025-07-07 00:14:37.005 [INFO][4337] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" Jul 7 00:14:37.008546 containerd[1507]: time="2025-07-07T00:14:37.008052015Z" level=info msg="TearDown network for sandbox \"0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501\" successfully" Jul 7 00:14:37.008546 containerd[1507]: time="2025-07-07T00:14:37.008101379Z" level=info msg="StopPodSandbox for \"0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501\" returns successfully" Jul 7 00:14:37.012816 containerd[1507]: time="2025-07-07T00:14:37.012786447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc4856784-rg2k9,Uid:4afa15e9-a817-4cbf-8d61-a91ddd7b4568,Namespace:calico-apiserver,Attempt:1,}" Jul 7 00:14:37.015346 systemd[1]: run-netns-cni\x2d3cd74b0d\x2d585f\x2dcd41\x2d660b\x2d1e4fc0dd1e62.mount: Deactivated successfully. Jul 7 00:14:37.149483 systemd-networkd[1399]: cali2af1ee94e05: Link UP Jul 7 00:14:37.149652 systemd-networkd[1399]: cali2af1ee94e05: Gained carrier Jul 7 00:14:37.164014 containerd[1507]: 2025-07-07 00:14:37.081 [INFO][4391] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--rg2k9-eth0 calico-apiserver-6dc4856784- calico-apiserver 4afa15e9-a817-4cbf-8d61-a91ddd7b4568 908 0 2025-07-07 00:14:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dc4856784 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-4-f-11cbdd5b1a calico-apiserver-6dc4856784-rg2k9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2af1ee94e05 [] [] }} ContainerID="109a7885a7c0354838b0cb75e4aa144b86e396c2f2db321ba2bb7168e46648c5" Namespace="calico-apiserver" Pod="calico-apiserver-6dc4856784-rg2k9" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--rg2k9-" Jul 7 00:14:37.164014 containerd[1507]: 2025-07-07 00:14:37.082 [INFO][4391] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="109a7885a7c0354838b0cb75e4aa144b86e396c2f2db321ba2bb7168e46648c5" Namespace="calico-apiserver" Pod="calico-apiserver-6dc4856784-rg2k9" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--rg2k9-eth0" Jul 7 00:14:37.164014 containerd[1507]: 2025-07-07 00:14:37.104 [INFO][4416] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="109a7885a7c0354838b0cb75e4aa144b86e396c2f2db321ba2bb7168e46648c5" HandleID="k8s-pod-network.109a7885a7c0354838b0cb75e4aa144b86e396c2f2db321ba2bb7168e46648c5" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--rg2k9-eth0" Jul 7 00:14:37.164014 containerd[1507]: 2025-07-07 00:14:37.104 [INFO][4416] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="109a7885a7c0354838b0cb75e4aa144b86e396c2f2db321ba2bb7168e46648c5" HandleID="k8s-pod-network.109a7885a7c0354838b0cb75e4aa144b86e396c2f2db321ba2bb7168e46648c5" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--rg2k9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-4-f-11cbdd5b1a", "pod":"calico-apiserver-6dc4856784-rg2k9", "timestamp":"2025-07-07 00:14:37.104511706 +0000 UTC"}, Hostname:"ci-4081-3-4-f-11cbdd5b1a", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:14:37.164014 containerd[1507]: 2025-07-07 00:14:37.104 [INFO][4416] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:14:37.164014 containerd[1507]: 2025-07-07 00:14:37.104 [INFO][4416] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:14:37.164014 containerd[1507]: 2025-07-07 00:14:37.104 [INFO][4416] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-f-11cbdd5b1a' Jul 7 00:14:37.164014 containerd[1507]: 2025-07-07 00:14:37.111 [INFO][4416] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.109a7885a7c0354838b0cb75e4aa144b86e396c2f2db321ba2bb7168e46648c5" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:37.164014 containerd[1507]: 2025-07-07 00:14:37.120 [INFO][4416] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:37.164014 containerd[1507]: 2025-07-07 00:14:37.125 [INFO][4416] ipam/ipam.go 511: Trying affinity for 192.168.57.0/26 host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:37.164014 containerd[1507]: 2025-07-07 00:14:37.127 [INFO][4416] ipam/ipam.go 158: Attempting to load block cidr=192.168.57.0/26 host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:37.164014 containerd[1507]: 2025-07-07 00:14:37.129 [INFO][4416] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.57.0/26 host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:37.164014 containerd[1507]: 2025-07-07 00:14:37.129 [INFO][4416] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.57.0/26 handle="k8s-pod-network.109a7885a7c0354838b0cb75e4aa144b86e396c2f2db321ba2bb7168e46648c5" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:37.164014 containerd[1507]: 2025-07-07 00:14:37.130 [INFO][4416] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.109a7885a7c0354838b0cb75e4aa144b86e396c2f2db321ba2bb7168e46648c5 Jul 7 00:14:37.164014 containerd[1507]: 2025-07-07 00:14:37.135 [INFO][4416] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.57.0/26 handle="k8s-pod-network.109a7885a7c0354838b0cb75e4aa144b86e396c2f2db321ba2bb7168e46648c5" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:37.164014 containerd[1507]: 2025-07-07 00:14:37.139 [INFO][4416] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.57.2/26] block=192.168.57.0/26 handle="k8s-pod-network.109a7885a7c0354838b0cb75e4aa144b86e396c2f2db321ba2bb7168e46648c5" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:37.164014 containerd[1507]: 2025-07-07 00:14:37.139 [INFO][4416] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.57.2/26] handle="k8s-pod-network.109a7885a7c0354838b0cb75e4aa144b86e396c2f2db321ba2bb7168e46648c5" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:37.164014 containerd[1507]: 2025-07-07 00:14:37.139 [INFO][4416] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:14:37.164014 containerd[1507]: 2025-07-07 00:14:37.139 [INFO][4416] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.57.2/26] IPv6=[] ContainerID="109a7885a7c0354838b0cb75e4aa144b86e396c2f2db321ba2bb7168e46648c5" HandleID="k8s-pod-network.109a7885a7c0354838b0cb75e4aa144b86e396c2f2db321ba2bb7168e46648c5" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--rg2k9-eth0" Jul 7 00:14:37.166937 containerd[1507]: 2025-07-07 00:14:37.145 [INFO][4391] cni-plugin/k8s.go 418: Populated endpoint ContainerID="109a7885a7c0354838b0cb75e4aa144b86e396c2f2db321ba2bb7168e46648c5" Namespace="calico-apiserver" Pod="calico-apiserver-6dc4856784-rg2k9" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--rg2k9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--rg2k9-eth0", GenerateName:"calico-apiserver-6dc4856784-", Namespace:"calico-apiserver", SelfLink:"", UID:"4afa15e9-a817-4cbf-8d61-a91ddd7b4568", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc4856784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"", Pod:"calico-apiserver-6dc4856784-rg2k9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2af1ee94e05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:37.166937 containerd[1507]: 2025-07-07 00:14:37.145 [INFO][4391] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.2/32] ContainerID="109a7885a7c0354838b0cb75e4aa144b86e396c2f2db321ba2bb7168e46648c5" Namespace="calico-apiserver" Pod="calico-apiserver-6dc4856784-rg2k9" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--rg2k9-eth0" Jul 7 00:14:37.166937 containerd[1507]: 2025-07-07 00:14:37.145 [INFO][4391] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2af1ee94e05 ContainerID="109a7885a7c0354838b0cb75e4aa144b86e396c2f2db321ba2bb7168e46648c5" Namespace="calico-apiserver" Pod="calico-apiserver-6dc4856784-rg2k9" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--rg2k9-eth0" Jul 7 00:14:37.166937 containerd[1507]: 2025-07-07 00:14:37.149 [INFO][4391] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="109a7885a7c0354838b0cb75e4aa144b86e396c2f2db321ba2bb7168e46648c5" Namespace="calico-apiserver" Pod="calico-apiserver-6dc4856784-rg2k9" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--rg2k9-eth0" Jul 7 00:14:37.166937 containerd[1507]: 2025-07-07 00:14:37.150 
[INFO][4391] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="109a7885a7c0354838b0cb75e4aa144b86e396c2f2db321ba2bb7168e46648c5" Namespace="calico-apiserver" Pod="calico-apiserver-6dc4856784-rg2k9" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--rg2k9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--rg2k9-eth0", GenerateName:"calico-apiserver-6dc4856784-", Namespace:"calico-apiserver", SelfLink:"", UID:"4afa15e9-a817-4cbf-8d61-a91ddd7b4568", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc4856784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"109a7885a7c0354838b0cb75e4aa144b86e396c2f2db321ba2bb7168e46648c5", Pod:"calico-apiserver-6dc4856784-rg2k9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2af1ee94e05", MAC:"3e:ae:a4:9e:f9:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:37.166937 containerd[1507]: 2025-07-07 00:14:37.160 [INFO][4391] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="109a7885a7c0354838b0cb75e4aa144b86e396c2f2db321ba2bb7168e46648c5" Namespace="calico-apiserver" Pod="calico-apiserver-6dc4856784-rg2k9" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--rg2k9-eth0" Jul 7 00:14:37.189194 containerd[1507]: time="2025-07-07T00:14:37.189083986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:14:37.189194 containerd[1507]: time="2025-07-07T00:14:37.189192421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:14:37.189404 containerd[1507]: time="2025-07-07T00:14:37.189217359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:14:37.189404 containerd[1507]: time="2025-07-07T00:14:37.189312470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:14:37.205326 systemd[1]: Started cri-containerd-109a7885a7c0354838b0cb75e4aa144b86e396c2f2db321ba2bb7168e46648c5.scope - libcontainer container 109a7885a7c0354838b0cb75e4aa144b86e396c2f2db321ba2bb7168e46648c5. 
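The [4416] IPAM trace above is a lock-scan-claim loop: acquire the host-wide IPAM lock, confirm this host's affinity to block 192.168.57.0/26, load the block, claim the first free ordinal (here 192.168.57.2), and write the block back under the handle k8s-pod-network.109a78…. A minimal sketch of the assignment step, assuming a simple in-memory block; the real datastore-backed version also handles affinity claims and compare-and-swap write conflicts:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    type block struct {
    	cidr      netip.Prefix          // e.g. 192.168.57.0/26
    	allocated map[netip.Addr]string // addr -> handle ID
    }

    // assign claims the first free address in the block for handleID.
    func (b *block) assign(handleID string) (netip.Addr, bool) {
    	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
    		if _, taken := b.allocated[a]; taken {
    			continue
    		}
    		b.allocated[a] = handleID // "Writing block in order to claim IPs"
    		return a, true
    	}
    	return netip.Addr{}, false // block exhausted
    }

    func main() {
    	b := &block{
    		cidr: netip.MustParsePrefix("192.168.57.0/26"),
    		allocated: map[netip.Addr]string{
    			netip.MustParseAddr("192.168.57.0"): "reserved",       // network address
    			netip.MustParseAddr("192.168.57.1"): "already-in-use", // earlier claim
    		},
    	}
    	if ip, ok := b.assign("k8s-pod-network.109a7885"); ok {
    		fmt.Println("claimed", ip) // claimed 192.168.57.2, matching the log
    	}
    }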
Jul 7 00:14:37.246368 containerd[1507]: time="2025-07-07T00:14:37.246335099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc4856784-rg2k9,Uid:4afa15e9-a817-4cbf-8d61-a91ddd7b4568,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"109a7885a7c0354838b0cb75e4aa144b86e396c2f2db321ba2bb7168e46648c5\"" Jul 7 00:14:37.252055 containerd[1507]: time="2025-07-07T00:14:37.251972156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 00:14:37.256648 systemd-networkd[1399]: calif4a6c20c7b7: Link UP Jul 7 00:14:37.258318 systemd-networkd[1399]: calif4a6c20c7b7: Gained carrier Jul 7 00:14:37.272783 containerd[1507]: 2025-07-07 00:14:37.074 [INFO][4373] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--l6qqr-eth0 calico-apiserver-6dc4856784- calico-apiserver 411d6daa-0c1d-48e1-908e-bd61e12d7879 909 0 2025-07-07 00:14:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dc4856784 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-4-f-11cbdd5b1a calico-apiserver-6dc4856784-l6qqr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif4a6c20c7b7 [] [] }} ContainerID="d419f2e3ccf81182f54b129bd1c951edb27d910b45454a7469533150af320739" Namespace="calico-apiserver" Pod="calico-apiserver-6dc4856784-l6qqr" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--l6qqr-" Jul 7 00:14:37.272783 containerd[1507]: 2025-07-07 00:14:37.074 [INFO][4373] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d419f2e3ccf81182f54b129bd1c951edb27d910b45454a7469533150af320739" Namespace="calico-apiserver" Pod="calico-apiserver-6dc4856784-l6qqr" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--l6qqr-eth0" Jul 7 00:14:37.272783 containerd[1507]: 2025-07-07 00:14:37.115 [INFO][4408] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d419f2e3ccf81182f54b129bd1c951edb27d910b45454a7469533150af320739" HandleID="k8s-pod-network.d419f2e3ccf81182f54b129bd1c951edb27d910b45454a7469533150af320739" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--l6qqr-eth0" Jul 7 00:14:37.272783 containerd[1507]: 2025-07-07 00:14:37.115 [INFO][4408] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d419f2e3ccf81182f54b129bd1c951edb27d910b45454a7469533150af320739" HandleID="k8s-pod-network.d419f2e3ccf81182f54b129bd1c951edb27d910b45454a7469533150af320739" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--l6qqr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5730), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-4-f-11cbdd5b1a", "pod":"calico-apiserver-6dc4856784-l6qqr", "timestamp":"2025-07-07 00:14:37.115767577 +0000 UTC"}, Hostname:"ci-4081-3-4-f-11cbdd5b1a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:14:37.272783 containerd[1507]: 2025-07-07 00:14:37.115 [INFO][4408] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 7 00:14:37.272783 containerd[1507]: 2025-07-07 00:14:37.140 [INFO][4408] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:14:37.272783 containerd[1507]: 2025-07-07 00:14:37.140 [INFO][4408] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-f-11cbdd5b1a' Jul 7 00:14:37.272783 containerd[1507]: 2025-07-07 00:14:37.211 [INFO][4408] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d419f2e3ccf81182f54b129bd1c951edb27d910b45454a7469533150af320739" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:37.272783 containerd[1507]: 2025-07-07 00:14:37.218 [INFO][4408] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:37.272783 containerd[1507]: 2025-07-07 00:14:37.225 [INFO][4408] ipam/ipam.go 511: Trying affinity for 192.168.57.0/26 host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:37.272783 containerd[1507]: 2025-07-07 00:14:37.227 [INFO][4408] ipam/ipam.go 158: Attempting to load block cidr=192.168.57.0/26 host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:37.272783 containerd[1507]: 2025-07-07 00:14:37.230 [INFO][4408] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.57.0/26 host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:37.272783 containerd[1507]: 2025-07-07 00:14:37.230 [INFO][4408] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.57.0/26 handle="k8s-pod-network.d419f2e3ccf81182f54b129bd1c951edb27d910b45454a7469533150af320739" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:37.272783 containerd[1507]: 2025-07-07 00:14:37.232 [INFO][4408] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d419f2e3ccf81182f54b129bd1c951edb27d910b45454a7469533150af320739 Jul 7 00:14:37.272783 containerd[1507]: 2025-07-07 00:14:37.236 [INFO][4408] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.57.0/26 handle="k8s-pod-network.d419f2e3ccf81182f54b129bd1c951edb27d910b45454a7469533150af320739" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:37.272783 containerd[1507]: 2025-07-07 00:14:37.247 [INFO][4408] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.57.3/26] block=192.168.57.0/26 handle="k8s-pod-network.d419f2e3ccf81182f54b129bd1c951edb27d910b45454a7469533150af320739" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:37.272783 containerd[1507]: 2025-07-07 00:14:37.248 [INFO][4408] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.57.3/26] handle="k8s-pod-network.d419f2e3ccf81182f54b129bd1c951edb27d910b45454a7469533150af320739" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:37.272783 containerd[1507]: 2025-07-07 00:14:37.248 [INFO][4408] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:14:37.272783 containerd[1507]: 2025-07-07 00:14:37.248 [INFO][4408] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.57.3/26] IPv6=[] ContainerID="d419f2e3ccf81182f54b129bd1c951edb27d910b45454a7469533150af320739" HandleID="k8s-pod-network.d419f2e3ccf81182f54b129bd1c951edb27d910b45454a7469533150af320739" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--l6qqr-eth0" Jul 7 00:14:37.273849 containerd[1507]: 2025-07-07 00:14:37.252 [INFO][4373] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d419f2e3ccf81182f54b129bd1c951edb27d910b45454a7469533150af320739" Namespace="calico-apiserver" Pod="calico-apiserver-6dc4856784-l6qqr" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--l6qqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--l6qqr-eth0", GenerateName:"calico-apiserver-6dc4856784-", Namespace:"calico-apiserver", SelfLink:"", UID:"411d6daa-0c1d-48e1-908e-bd61e12d7879", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc4856784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"", Pod:"calico-apiserver-6dc4856784-l6qqr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif4a6c20c7b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:37.273849 containerd[1507]: 2025-07-07 00:14:37.252 [INFO][4373] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.3/32] ContainerID="d419f2e3ccf81182f54b129bd1c951edb27d910b45454a7469533150af320739" Namespace="calico-apiserver" Pod="calico-apiserver-6dc4856784-l6qqr" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--l6qqr-eth0" Jul 7 00:14:37.273849 containerd[1507]: 2025-07-07 00:14:37.252 [INFO][4373] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif4a6c20c7b7 ContainerID="d419f2e3ccf81182f54b129bd1c951edb27d910b45454a7469533150af320739" Namespace="calico-apiserver" Pod="calico-apiserver-6dc4856784-l6qqr" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--l6qqr-eth0" Jul 7 00:14:37.273849 containerd[1507]: 2025-07-07 00:14:37.258 [INFO][4373] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d419f2e3ccf81182f54b129bd1c951edb27d910b45454a7469533150af320739" Namespace="calico-apiserver" Pod="calico-apiserver-6dc4856784-l6qqr" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--l6qqr-eth0" Jul 7 00:14:37.273849 containerd[1507]: 2025-07-07 00:14:37.259 
[INFO][4373] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d419f2e3ccf81182f54b129bd1c951edb27d910b45454a7469533150af320739" Namespace="calico-apiserver" Pod="calico-apiserver-6dc4856784-l6qqr" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--l6qqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--l6qqr-eth0", GenerateName:"calico-apiserver-6dc4856784-", Namespace:"calico-apiserver", SelfLink:"", UID:"411d6daa-0c1d-48e1-908e-bd61e12d7879", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc4856784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"d419f2e3ccf81182f54b129bd1c951edb27d910b45454a7469533150af320739", Pod:"calico-apiserver-6dc4856784-l6qqr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif4a6c20c7b7", MAC:"1e:9c:62:11:45:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:37.273849 containerd[1507]: 2025-07-07 00:14:37.268 [INFO][4373] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d419f2e3ccf81182f54b129bd1c951edb27d910b45454a7469533150af320739" Namespace="calico-apiserver" Pod="calico-apiserver-6dc4856784-l6qqr" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--l6qqr-eth0" Jul 7 00:14:37.292125 containerd[1507]: time="2025-07-07T00:14:37.291802547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:14:37.292125 containerd[1507]: time="2025-07-07T00:14:37.291845979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:14:37.292125 containerd[1507]: time="2025-07-07T00:14:37.291856989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:14:37.292125 containerd[1507]: time="2025-07-07T00:14:37.291916653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:14:37.309286 systemd[1]: Started cri-containerd-d419f2e3ccf81182f54b129bd1c951edb27d910b45454a7469533150af320739.scope - libcontainer container d419f2e3ccf81182f54b129bd1c951edb27d910b45454a7469533150af320739. 
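All of the host-side interface names in this section (cali2af1ee94e05, calif4a6c20c7b7, cali363398ec313, calidba1a4c7d85) are "cali" plus 11 hex characters, i.e. exactly 15 bytes, the kernel's IFNAMSIZ limit for interface names. A hedged sketch of how such a name can be derived deterministically; the choice of SHA-1 and the endpoint string hashed here are assumptions for illustration, not read from the log:

    package main

    import (
    	"crypto/sha1"
    	"encoding/hex"
    	"fmt"
    )

    // vethName derives a deterministic, IFNAMSIZ-safe host-side veth name
    // from an endpoint identifier: 4 bytes of prefix + 11 hex chars = 15.
    // ASSUMPTION: SHA-1 over namespace/pod is illustrative; the real
    // derivation used by the CNI plugin may hash a different input.
    func vethName(endpointID string) string {
    	sum := sha1.Sum([]byte(endpointID))
    	return "cali" + hex.EncodeToString(sum[:])[:11]
    }

    func main() {
    	fmt.Println(vethName("calico-apiserver/calico-apiserver-6dc4856784-rg2k9"))
    }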
Jul 7 00:14:37.346468 containerd[1507]: time="2025-07-07T00:14:37.346430872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc4856784-l6qqr,Uid:411d6daa-0c1d-48e1-908e-bd61e12d7879,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d419f2e3ccf81182f54b129bd1c951edb27d910b45454a7469533150af320739\"" Jul 7 00:14:37.356369 systemd-networkd[1399]: cali363398ec313: Link UP Jul 7 00:14:37.357287 systemd-networkd[1399]: cali363398ec313: Gained carrier Jul 7 00:14:37.369630 containerd[1507]: 2025-07-07 00:14:37.080 [INFO][4381] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--9qbxm-eth0 coredns-668d6bf9bc- kube-system 298451b0-5619-4a6f-8aad-35320d360358 907 0 2025-07-07 00:14:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-4-f-11cbdd5b1a coredns-668d6bf9bc-9qbxm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali363398ec313 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bb201b15320b322893a416b3f7be23e43828159ed10e6de6937c56ebe16df224" Namespace="kube-system" Pod="coredns-668d6bf9bc-9qbxm" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--9qbxm-" Jul 7 00:14:37.369630 containerd[1507]: 2025-07-07 00:14:37.080 [INFO][4381] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bb201b15320b322893a416b3f7be23e43828159ed10e6de6937c56ebe16df224" Namespace="kube-system" Pod="coredns-668d6bf9bc-9qbxm" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--9qbxm-eth0" Jul 7 00:14:37.369630 containerd[1507]: 2025-07-07 00:14:37.117 [INFO][4414] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb201b15320b322893a416b3f7be23e43828159ed10e6de6937c56ebe16df224" HandleID="k8s-pod-network.bb201b15320b322893a416b3f7be23e43828159ed10e6de6937c56ebe16df224" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--9qbxm-eth0" Jul 7 00:14:37.369630 containerd[1507]: 2025-07-07 00:14:37.117 [INFO][4414] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bb201b15320b322893a416b3f7be23e43828159ed10e6de6937c56ebe16df224" HandleID="k8s-pod-network.bb201b15320b322893a416b3f7be23e43828159ed10e6de6937c56ebe16df224" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--9qbxm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024ef10), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-4-f-11cbdd5b1a", "pod":"coredns-668d6bf9bc-9qbxm", "timestamp":"2025-07-07 00:14:37.117732893 +0000 UTC"}, Hostname:"ci-4081-3-4-f-11cbdd5b1a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:14:37.369630 containerd[1507]: 2025-07-07 00:14:37.117 [INFO][4414] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:14:37.369630 containerd[1507]: 2025-07-07 00:14:37.248 [INFO][4414] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:14:37.369630 containerd[1507]: 2025-07-07 00:14:37.249 [INFO][4414] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-f-11cbdd5b1a' Jul 7 00:14:37.369630 containerd[1507]: 2025-07-07 00:14:37.311 [INFO][4414] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bb201b15320b322893a416b3f7be23e43828159ed10e6de6937c56ebe16df224" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:37.369630 containerd[1507]: 2025-07-07 00:14:37.318 [INFO][4414] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:37.369630 containerd[1507]: 2025-07-07 00:14:37.326 [INFO][4414] ipam/ipam.go 511: Trying affinity for 192.168.57.0/26 host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:37.369630 containerd[1507]: 2025-07-07 00:14:37.328 [INFO][4414] ipam/ipam.go 158: Attempting to load block cidr=192.168.57.0/26 host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:37.369630 containerd[1507]: 2025-07-07 00:14:37.331 [INFO][4414] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.57.0/26 host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:37.369630 containerd[1507]: 2025-07-07 00:14:37.331 [INFO][4414] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.57.0/26 handle="k8s-pod-network.bb201b15320b322893a416b3f7be23e43828159ed10e6de6937c56ebe16df224" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:37.369630 containerd[1507]: 2025-07-07 00:14:37.332 [INFO][4414] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bb201b15320b322893a416b3f7be23e43828159ed10e6de6937c56ebe16df224 Jul 7 00:14:37.369630 containerd[1507]: 2025-07-07 00:14:37.341 [INFO][4414] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.57.0/26 handle="k8s-pod-network.bb201b15320b322893a416b3f7be23e43828159ed10e6de6937c56ebe16df224" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:37.369630 containerd[1507]: 2025-07-07 00:14:37.350 [INFO][4414] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.57.4/26] block=192.168.57.0/26 handle="k8s-pod-network.bb201b15320b322893a416b3f7be23e43828159ed10e6de6937c56ebe16df224" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:37.369630 containerd[1507]: 2025-07-07 00:14:37.350 [INFO][4414] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.57.4/26] handle="k8s-pod-network.bb201b15320b322893a416b3f7be23e43828159ed10e6de6937c56ebe16df224" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:37.369630 containerd[1507]: 2025-07-07 00:14:37.350 [INFO][4414] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:14:37.369630 containerd[1507]: 2025-07-07 00:14:37.350 [INFO][4414] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.57.4/26] IPv6=[] ContainerID="bb201b15320b322893a416b3f7be23e43828159ed10e6de6937c56ebe16df224" HandleID="k8s-pod-network.bb201b15320b322893a416b3f7be23e43828159ed10e6de6937c56ebe16df224" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--9qbxm-eth0" Jul 7 00:14:37.371588 containerd[1507]: 2025-07-07 00:14:37.353 [INFO][4381] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bb201b15320b322893a416b3f7be23e43828159ed10e6de6937c56ebe16df224" Namespace="kube-system" Pod="coredns-668d6bf9bc-9qbxm" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--9qbxm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--9qbxm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"298451b0-5619-4a6f-8aad-35320d360358", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"", Pod:"coredns-668d6bf9bc-9qbxm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali363398ec313", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:37.371588 containerd[1507]: 2025-07-07 00:14:37.353 [INFO][4381] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.4/32] ContainerID="bb201b15320b322893a416b3f7be23e43828159ed10e6de6937c56ebe16df224" Namespace="kube-system" Pod="coredns-668d6bf9bc-9qbxm" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--9qbxm-eth0" Jul 7 00:14:37.371588 containerd[1507]: 2025-07-07 00:14:37.353 [INFO][4381] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali363398ec313 ContainerID="bb201b15320b322893a416b3f7be23e43828159ed10e6de6937c56ebe16df224" Namespace="kube-system" Pod="coredns-668d6bf9bc-9qbxm" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--9qbxm-eth0" Jul 7 00:14:37.371588 containerd[1507]: 2025-07-07 00:14:37.355 [INFO][4381] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb201b15320b322893a416b3f7be23e43828159ed10e6de6937c56ebe16df224" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-9qbxm" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--9qbxm-eth0" Jul 7 00:14:37.371588 containerd[1507]: 2025-07-07 00:14:37.355 [INFO][4381] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bb201b15320b322893a416b3f7be23e43828159ed10e6de6937c56ebe16df224" Namespace="kube-system" Pod="coredns-668d6bf9bc-9qbxm" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--9qbxm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--9qbxm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"298451b0-5619-4a6f-8aad-35320d360358", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"bb201b15320b322893a416b3f7be23e43828159ed10e6de6937c56ebe16df224", Pod:"coredns-668d6bf9bc-9qbxm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali363398ec313", MAC:"c6:65:a9:86:5e:db", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:37.371588 containerd[1507]: 2025-07-07 00:14:37.366 [INFO][4381] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bb201b15320b322893a416b3f7be23e43828159ed10e6de6937c56ebe16df224" Namespace="kube-system" Pod="coredns-668d6bf9bc-9qbxm" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--9qbxm-eth0" Jul 7 00:14:37.384270 containerd[1507]: time="2025-07-07T00:14:37.384044949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:14:37.384270 containerd[1507]: time="2025-07-07T00:14:37.384082410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:14:37.384270 containerd[1507]: time="2025-07-07T00:14:37.384091397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:14:37.384270 containerd[1507]: time="2025-07-07T00:14:37.384180798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:14:37.402342 systemd[1]: Started cri-containerd-bb201b15320b322893a416b3f7be23e43828159ed10e6de6937c56ebe16df224.scope - libcontainer container bb201b15320b322893a416b3f7be23e43828159ed10e6de6937c56ebe16df224. Jul 7 00:14:37.447405 containerd[1507]: time="2025-07-07T00:14:37.447358970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9qbxm,Uid:298451b0-5619-4a6f-8aad-35320d360358,Namespace:kube-system,Attempt:1,} returns sandbox id \"bb201b15320b322893a416b3f7be23e43828159ed10e6de6937c56ebe16df224\"" Jul 7 00:14:37.450528 containerd[1507]: time="2025-07-07T00:14:37.450401295Z" level=info msg="CreateContainer within sandbox \"bb201b15320b322893a416b3f7be23e43828159ed10e6de6937c56ebe16df224\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:14:37.466911 containerd[1507]: time="2025-07-07T00:14:37.466872180Z" level=info msg="CreateContainer within sandbox \"bb201b15320b322893a416b3f7be23e43828159ed10e6de6937c56ebe16df224\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"71beb3eab816d75df55b47855a040b2a37df165196b7c95e3503c5670abdb450\"" Jul 7 00:14:37.467959 containerd[1507]: time="2025-07-07T00:14:37.467370117Z" level=info msg="StartContainer for \"71beb3eab816d75df55b47855a040b2a37df165196b7c95e3503c5670abdb450\"" Jul 7 00:14:37.488270 systemd[1]: Started cri-containerd-71beb3eab816d75df55b47855a040b2a37df165196b7c95e3503c5670abdb450.scope - libcontainer container 71beb3eab816d75df55b47855a040b2a37df165196b7c95e3503c5670abdb450. Jul 7 00:14:37.509351 containerd[1507]: time="2025-07-07T00:14:37.509301162Z" level=info msg="StartContainer for \"71beb3eab816d75df55b47855a040b2a37df165196b7c95e3503c5670abdb450\" returns successfully" Jul 7 00:14:37.875602 containerd[1507]: time="2025-07-07T00:14:37.875378523Z" level=info msg="StopPodSandbox for \"11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1\"" Jul 7 00:14:37.952309 containerd[1507]: 2025-07-07 00:14:37.921 [INFO][4626] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" Jul 7 00:14:37.952309 containerd[1507]: 2025-07-07 00:14:37.923 [INFO][4626] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" iface="eth0" netns="/var/run/netns/cni-abec678e-0720-9798-cafb-e26422abebb5" Jul 7 00:14:37.952309 containerd[1507]: 2025-07-07 00:14:37.923 [INFO][4626] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" iface="eth0" netns="/var/run/netns/cni-abec678e-0720-9798-cafb-e26422abebb5" Jul 7 00:14:37.952309 containerd[1507]: 2025-07-07 00:14:37.923 [INFO][4626] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" iface="eth0" netns="/var/run/netns/cni-abec678e-0720-9798-cafb-e26422abebb5" Jul 7 00:14:37.952309 containerd[1507]: 2025-07-07 00:14:37.923 [INFO][4626] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" Jul 7 00:14:37.952309 containerd[1507]: 2025-07-07 00:14:37.923 [INFO][4626] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" Jul 7 00:14:37.952309 containerd[1507]: 2025-07-07 00:14:37.941 [INFO][4633] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" HandleID="k8s-pod-network.11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-csi--node--driver--5qbjk-eth0" Jul 7 00:14:37.952309 containerd[1507]: 2025-07-07 00:14:37.941 [INFO][4633] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:14:37.952309 containerd[1507]: 2025-07-07 00:14:37.941 [INFO][4633] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:14:37.952309 containerd[1507]: 2025-07-07 00:14:37.947 [WARNING][4633] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" HandleID="k8s-pod-network.11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-csi--node--driver--5qbjk-eth0" Jul 7 00:14:37.952309 containerd[1507]: 2025-07-07 00:14:37.947 [INFO][4633] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" HandleID="k8s-pod-network.11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-csi--node--driver--5qbjk-eth0" Jul 7 00:14:37.952309 containerd[1507]: 2025-07-07 00:14:37.949 [INFO][4633] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:14:37.952309 containerd[1507]: 2025-07-07 00:14:37.950 [INFO][4626] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" Jul 7 00:14:37.953212 containerd[1507]: time="2025-07-07T00:14:37.952409804Z" level=info msg="TearDown network for sandbox \"11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1\" successfully" Jul 7 00:14:37.953212 containerd[1507]: time="2025-07-07T00:14:37.952430223Z" level=info msg="StopPodSandbox for \"11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1\" returns successfully" Jul 7 00:14:37.953212 containerd[1507]: time="2025-07-07T00:14:37.952900027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5qbjk,Uid:2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6,Namespace:calico-system,Attempt:1,}" Jul 7 00:14:38.003531 systemd[1]: run-netns-cni\x2dabec678e\x2d0720\x2d9798\x2dcafb\x2de26422abebb5.mount: Deactivated successfully. 
Jul 7 00:14:38.062762 systemd-networkd[1399]: calidba1a4c7d85: Link UP Jul 7 00:14:38.063974 systemd-networkd[1399]: calidba1a4c7d85: Gained carrier Jul 7 00:14:38.070106 kubelet[2712]: I0707 00:14:38.068665 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9qbxm" podStartSLOduration=35.068650278 podStartE2EDuration="35.068650278s" podCreationTimestamp="2025-07-07 00:14:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:14:38.068618969 +0000 UTC m=+40.291931995" watchObservedRunningTime="2025-07-07 00:14:38.068650278 +0000 UTC m=+40.291963305" Jul 7 00:14:38.091537 containerd[1507]: 2025-07-07 00:14:37.986 [INFO][4639] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--f--11cbdd5b1a-k8s-csi--node--driver--5qbjk-eth0 csi-node-driver- calico-system 2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6 926 0 2025-07-07 00:14:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-4-f-11cbdd5b1a csi-node-driver-5qbjk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidba1a4c7d85 [] [] }} ContainerID="ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d" Namespace="calico-system" Pod="csi-node-driver-5qbjk" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-csi--node--driver--5qbjk-" Jul 7 00:14:38.091537 containerd[1507]: 2025-07-07 00:14:37.986 [INFO][4639] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d" Namespace="calico-system" Pod="csi-node-driver-5qbjk" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-csi--node--driver--5qbjk-eth0" Jul 7 00:14:38.091537 containerd[1507]: 2025-07-07 00:14:38.018 [INFO][4652] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d" HandleID="k8s-pod-network.ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-csi--node--driver--5qbjk-eth0" Jul 7 00:14:38.091537 containerd[1507]: 2025-07-07 00:14:38.018 [INFO][4652] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d" HandleID="k8s-pod-network.ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-csi--node--driver--5qbjk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5710), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-f-11cbdd5b1a", "pod":"csi-node-driver-5qbjk", "timestamp":"2025-07-07 00:14:38.01842889 +0000 UTC"}, Hostname:"ci-4081-3-4-f-11cbdd5b1a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:14:38.091537 containerd[1507]: 2025-07-07 00:14:38.018 [INFO][4652] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 7 00:14:38.091537 containerd[1507]: 2025-07-07 00:14:38.018 [INFO][4652] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:14:38.091537 containerd[1507]: 2025-07-07 00:14:38.018 [INFO][4652] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-f-11cbdd5b1a' Jul 7 00:14:38.091537 containerd[1507]: 2025-07-07 00:14:38.024 [INFO][4652] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:38.091537 containerd[1507]: 2025-07-07 00:14:38.028 [INFO][4652] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:38.091537 containerd[1507]: 2025-07-07 00:14:38.032 [INFO][4652] ipam/ipam.go 511: Trying affinity for 192.168.57.0/26 host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:38.091537 containerd[1507]: 2025-07-07 00:14:38.034 [INFO][4652] ipam/ipam.go 158: Attempting to load block cidr=192.168.57.0/26 host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:38.091537 containerd[1507]: 2025-07-07 00:14:38.037 [INFO][4652] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.57.0/26 host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:38.091537 containerd[1507]: 2025-07-07 00:14:38.037 [INFO][4652] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.57.0/26 handle="k8s-pod-network.ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:38.091537 containerd[1507]: 2025-07-07 00:14:38.038 [INFO][4652] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d Jul 7 00:14:38.091537 containerd[1507]: 2025-07-07 00:14:38.046 [INFO][4652] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.57.0/26 handle="k8s-pod-network.ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:38.091537 containerd[1507]: 2025-07-07 00:14:38.056 [INFO][4652] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.57.5/26] block=192.168.57.0/26 handle="k8s-pod-network.ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:38.091537 containerd[1507]: 2025-07-07 00:14:38.056 [INFO][4652] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.57.5/26] handle="k8s-pod-network.ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:38.091537 containerd[1507]: 2025-07-07 00:14:38.057 [INFO][4652] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:14:38.091537 containerd[1507]: 2025-07-07 00:14:38.057 [INFO][4652] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.57.5/26] IPv6=[] ContainerID="ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d" HandleID="k8s-pod-network.ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-csi--node--driver--5qbjk-eth0" Jul 7 00:14:38.092669 containerd[1507]: 2025-07-07 00:14:38.059 [INFO][4639] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d" Namespace="calico-system" Pod="csi-node-driver-5qbjk" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-csi--node--driver--5qbjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-csi--node--driver--5qbjk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"", Pod:"csi-node-driver-5qbjk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.57.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidba1a4c7d85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:38.092669 containerd[1507]: 2025-07-07 00:14:38.059 [INFO][4639] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.5/32] ContainerID="ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d" Namespace="calico-system" Pod="csi-node-driver-5qbjk" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-csi--node--driver--5qbjk-eth0" Jul 7 00:14:38.092669 containerd[1507]: 2025-07-07 00:14:38.059 [INFO][4639] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidba1a4c7d85 ContainerID="ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d" Namespace="calico-system" Pod="csi-node-driver-5qbjk" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-csi--node--driver--5qbjk-eth0" Jul 7 00:14:38.092669 containerd[1507]: 2025-07-07 00:14:38.067 [INFO][4639] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d" Namespace="calico-system" Pod="csi-node-driver-5qbjk" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-csi--node--driver--5qbjk-eth0" Jul 7 00:14:38.092669 containerd[1507]: 2025-07-07 00:14:38.069 [INFO][4639] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d" Namespace="calico-system" Pod="csi-node-driver-5qbjk" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-csi--node--driver--5qbjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-csi--node--driver--5qbjk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d", Pod:"csi-node-driver-5qbjk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.57.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidba1a4c7d85", MAC:"16:0e:5c:d0:29:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:38.092669 containerd[1507]: 2025-07-07 00:14:38.088 [INFO][4639] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d" Namespace="calico-system" Pod="csi-node-driver-5qbjk" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-csi--node--driver--5qbjk-eth0" Jul 7 00:14:38.114772 containerd[1507]: time="2025-07-07T00:14:38.114606456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:14:38.115392 containerd[1507]: time="2025-07-07T00:14:38.115219934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:14:38.115392 containerd[1507]: time="2025-07-07T00:14:38.115237938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:14:38.116349 containerd[1507]: time="2025-07-07T00:14:38.115336647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:14:38.144599 systemd[1]: Started cri-containerd-ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d.scope - libcontainer container ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d. 
Jul 7 00:14:38.186288 containerd[1507]: time="2025-07-07T00:14:38.186249024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5qbjk,Uid:2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6,Namespace:calico-system,Attempt:1,} returns sandbox id \"ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d\"" Jul 7 00:14:38.468362 systemd-networkd[1399]: cali2af1ee94e05: Gained IPv6LL Jul 7 00:14:38.660568 systemd-networkd[1399]: cali363398ec313: Gained IPv6LL Jul 7 00:14:38.852440 systemd-networkd[1399]: calif4a6c20c7b7: Gained IPv6LL Jul 7 00:14:39.330732 containerd[1507]: time="2025-07-07T00:14:39.330678739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 7 00:14:39.333187 containerd[1507]: time="2025-07-07T00:14:39.333048315Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 2.081032716s" Jul 7 00:14:39.333187 containerd[1507]: time="2025-07-07T00:14:39.333074225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 00:14:39.334598 containerd[1507]: time="2025-07-07T00:14:39.334110099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 00:14:39.338760 containerd[1507]: time="2025-07-07T00:14:39.338699096Z" level=info msg="CreateContainer within sandbox \"109a7885a7c0354838b0cb75e4aa144b86e396c2f2db321ba2bb7168e46648c5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 00:14:39.355925 containerd[1507]: time="2025-07-07T00:14:39.355876336Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:39.356431 containerd[1507]: time="2025-07-07T00:14:39.356406438Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:39.356909 containerd[1507]: time="2025-07-07T00:14:39.356881142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:39.360373 containerd[1507]: time="2025-07-07T00:14:39.360345856Z" level=info msg="CreateContainer within sandbox \"109a7885a7c0354838b0cb75e4aa144b86e396c2f2db321ba2bb7168e46648c5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ed888d4d22594be57265b1f6f3c970dfbcdf8b694b6e859ff0f4a6db245269bc\"" Jul 7 00:14:39.361445 containerd[1507]: time="2025-07-07T00:14:39.360701043Z" level=info msg="StartContainer for \"ed888d4d22594be57265b1f6f3c970dfbcdf8b694b6e859ff0f4a6db245269bc\"" Jul 7 00:14:39.395258 systemd[1]: Started cri-containerd-ed888d4d22594be57265b1f6f3c970dfbcdf8b694b6e859ff0f4a6db245269bc.scope - libcontainer container ed888d4d22594be57265b1f6f3c970dfbcdf8b694b6e859ff0f4a6db245269bc. 
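The apiserver pull above reports 47317977 bytes read in 2.081032716s for the 48810696-byte image, about 21.7 MiB/s from the registry. A small sketch of that arithmetic using the logged figures:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Figures taken verbatim from the containerd log entries above.
        bytesRead := 47317977.0
        dur, _ := time.ParseDuration("2.081032716s")
        fmt.Printf("%.1f MiB/s\n", bytesRead/dur.Seconds()/(1<<20))
    }

The second pull of the same tag a little further down completes in 483.778933ms with only 77 bytes read, which is consistent with the blobs already sitting in containerd's local content store, so only the manifest needs re-resolving.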
Jul 7 00:14:39.426642 containerd[1507]: time="2025-07-07T00:14:39.426612163Z" level=info msg="StartContainer for \"ed888d4d22594be57265b1f6f3c970dfbcdf8b694b6e859ff0f4a6db245269bc\" returns successfully" Jul 7 00:14:39.557342 systemd-networkd[1399]: calidba1a4c7d85: Gained IPv6LL Jul 7 00:14:39.815402 containerd[1507]: time="2025-07-07T00:14:39.815347979Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:39.816173 containerd[1507]: time="2025-07-07T00:14:39.815930880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 00:14:39.817938 containerd[1507]: time="2025-07-07T00:14:39.817909791Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 483.778933ms" Jul 7 00:14:39.818075 containerd[1507]: time="2025-07-07T00:14:39.817938256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 00:14:39.819235 containerd[1507]: time="2025-07-07T00:14:39.819160957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 00:14:39.830627 containerd[1507]: time="2025-07-07T00:14:39.830374600Z" level=info msg="CreateContainer within sandbox \"d419f2e3ccf81182f54b129bd1c951edb27d910b45454a7469533150af320739\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 00:14:39.851516 containerd[1507]: time="2025-07-07T00:14:39.851466343Z" level=info msg="CreateContainer within sandbox \"d419f2e3ccf81182f54b129bd1c951edb27d910b45454a7469533150af320739\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"96bf2c0bfea45f22beecc67b252d4064ba0411be0c672ae3551cf22a60525e93\"" Jul 7 00:14:39.852259 containerd[1507]: time="2025-07-07T00:14:39.852176997Z" level=info msg="StartContainer for \"96bf2c0bfea45f22beecc67b252d4064ba0411be0c672ae3551cf22a60525e93\"" Jul 7 00:14:39.880761 containerd[1507]: time="2025-07-07T00:14:39.879844824Z" level=info msg="StopPodSandbox for \"77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce\"" Jul 7 00:14:39.880990 containerd[1507]: time="2025-07-07T00:14:39.880930142Z" level=info msg="StopPodSandbox for \"7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b\"" Jul 7 00:14:39.885538 systemd[1]: Started cri-containerd-96bf2c0bfea45f22beecc67b252d4064ba0411be0c672ae3551cf22a60525e93.scope - libcontainer container 96bf2c0bfea45f22beecc67b252d4064ba0411be0c672ae3551cf22a60525e93. Jul 7 00:14:39.993291 containerd[1507]: time="2025-07-07T00:14:39.993256102Z" level=info msg="StartContainer for \"96bf2c0bfea45f22beecc67b252d4064ba0411be0c672ae3551cf22a60525e93\" returns successfully" Jul 7 00:14:40.041192 containerd[1507]: 2025-07-07 00:14:39.972 [INFO][4799] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" Jul 7 00:14:40.041192 containerd[1507]: 2025-07-07 00:14:39.972 [INFO][4799] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" iface="eth0" netns="/var/run/netns/cni-e5b81049-2b35-ecb0-1d46-e28016777b5a" Jul 7 00:14:40.041192 containerd[1507]: 2025-07-07 00:14:39.975 [INFO][4799] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" iface="eth0" netns="/var/run/netns/cni-e5b81049-2b35-ecb0-1d46-e28016777b5a" Jul 7 00:14:40.041192 containerd[1507]: 2025-07-07 00:14:39.976 [INFO][4799] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" iface="eth0" netns="/var/run/netns/cni-e5b81049-2b35-ecb0-1d46-e28016777b5a" Jul 7 00:14:40.041192 containerd[1507]: 2025-07-07 00:14:39.976 [INFO][4799] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" Jul 7 00:14:40.041192 containerd[1507]: 2025-07-07 00:14:39.976 [INFO][4799] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" Jul 7 00:14:40.041192 containerd[1507]: 2025-07-07 00:14:40.018 [INFO][4821] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" HandleID="k8s-pod-network.7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-goldmane--768f4c5c69--j6knh-eth0" Jul 7 00:14:40.041192 containerd[1507]: 2025-07-07 00:14:40.018 [INFO][4821] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:14:40.041192 containerd[1507]: 2025-07-07 00:14:40.019 [INFO][4821] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:14:40.041192 containerd[1507]: 2025-07-07 00:14:40.029 [WARNING][4821] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" HandleID="k8s-pod-network.7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-goldmane--768f4c5c69--j6knh-eth0" Jul 7 00:14:40.041192 containerd[1507]: 2025-07-07 00:14:40.029 [INFO][4821] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" HandleID="k8s-pod-network.7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-goldmane--768f4c5c69--j6knh-eth0" Jul 7 00:14:40.041192 containerd[1507]: 2025-07-07 00:14:40.034 [INFO][4821] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:14:40.041192 containerd[1507]: 2025-07-07 00:14:40.037 [INFO][4799] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" Jul 7 00:14:40.044580 containerd[1507]: time="2025-07-07T00:14:40.043921832Z" level=info msg="TearDown network for sandbox \"7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b\" successfully" Jul 7 00:14:40.044580 containerd[1507]: time="2025-07-07T00:14:40.043969803Z" level=info msg="StopPodSandbox for \"7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b\" returns successfully" Jul 7 00:14:40.044839 containerd[1507]: time="2025-07-07T00:14:40.044823823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-j6knh,Uid:d97012fa-4c0e-4428-b076-69d838ad32a3,Namespace:calico-system,Attempt:1,}" Jul 7 00:14:40.051200 containerd[1507]: 2025-07-07 00:14:39.979 [INFO][4801] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" Jul 7 00:14:40.051200 containerd[1507]: 2025-07-07 00:14:39.979 [INFO][4801] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" iface="eth0" netns="/var/run/netns/cni-88ac0f86-d863-3104-1901-172c60ef5c44" Jul 7 00:14:40.051200 containerd[1507]: 2025-07-07 00:14:39.980 [INFO][4801] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" iface="eth0" netns="/var/run/netns/cni-88ac0f86-d863-3104-1901-172c60ef5c44" Jul 7 00:14:40.051200 containerd[1507]: 2025-07-07 00:14:39.984 [INFO][4801] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" iface="eth0" netns="/var/run/netns/cni-88ac0f86-d863-3104-1901-172c60ef5c44" Jul 7 00:14:40.051200 containerd[1507]: 2025-07-07 00:14:39.984 [INFO][4801] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" Jul 7 00:14:40.051200 containerd[1507]: 2025-07-07 00:14:39.984 [INFO][4801] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" Jul 7 00:14:40.051200 containerd[1507]: 2025-07-07 00:14:40.033 [INFO][4826] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" HandleID="k8s-pod-network.77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--tskgm-eth0" Jul 7 00:14:40.051200 containerd[1507]: 2025-07-07 00:14:40.033 [INFO][4826] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:14:40.051200 containerd[1507]: 2025-07-07 00:14:40.034 [INFO][4826] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:14:40.051200 containerd[1507]: 2025-07-07 00:14:40.042 [WARNING][4826] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" HandleID="k8s-pod-network.77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--tskgm-eth0" Jul 7 00:14:40.051200 containerd[1507]: 2025-07-07 00:14:40.042 [INFO][4826] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" HandleID="k8s-pod-network.77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--tskgm-eth0" Jul 7 00:14:40.051200 containerd[1507]: 2025-07-07 00:14:40.045 [INFO][4826] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:14:40.051200 containerd[1507]: 2025-07-07 00:14:40.048 [INFO][4801] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" Jul 7 00:14:40.058849 containerd[1507]: time="2025-07-07T00:14:40.051449026Z" level=info msg="TearDown network for sandbox \"77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce\" successfully" Jul 7 00:14:40.058849 containerd[1507]: time="2025-07-07T00:14:40.051464265Z" level=info msg="StopPodSandbox for \"77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce\" returns successfully" Jul 7 00:14:40.058849 containerd[1507]: time="2025-07-07T00:14:40.051837567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tskgm,Uid:41c99768-274d-463a-99f4-28ba08a6a5e5,Namespace:kube-system,Attempt:1,}" Jul 7 00:14:40.116434 kubelet[2712]: I0707 00:14:40.115132 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6dc4856784-rg2k9" podStartSLOduration=18.031091404 podStartE2EDuration="20.115113538s" podCreationTimestamp="2025-07-07 00:14:20 +0000 UTC" firstStartedPulling="2025-07-07 00:14:37.249742036 +0000 UTC m=+39.473055063" lastFinishedPulling="2025-07-07 00:14:39.333764169 +0000 UTC m=+41.557077197" observedRunningTime="2025-07-07 00:14:40.088732755 +0000 UTC m=+42.312045782" watchObservedRunningTime="2025-07-07 00:14:40.115113538 +0000 UTC m=+42.338426565" Jul 7 00:14:40.253002 systemd-networkd[1399]: cali7ff605079c4: Link UP Jul 7 00:14:40.253487 systemd-networkd[1399]: cali7ff605079c4: Gained carrier Jul 7 00:14:40.265844 kubelet[2712]: I0707 00:14:40.265446 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6dc4856784-l6qqr" podStartSLOduration=17.795727364 podStartE2EDuration="20.265429459s" podCreationTimestamp="2025-07-07 00:14:20 +0000 UTC" firstStartedPulling="2025-07-07 00:14:37.348949842 +0000 UTC m=+39.572262868" lastFinishedPulling="2025-07-07 00:14:39.818651936 +0000 UTC m=+42.041964963" observedRunningTime="2025-07-07 00:14:40.11735741 +0000 UTC m=+42.340670448" watchObservedRunningTime="2025-07-07 00:14:40.265429459 +0000 UTC m=+42.488742487" Jul 7 00:14:40.272611 containerd[1507]: 2025-07-07 00:14:40.183 [INFO][4846] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--tskgm-eth0 coredns-668d6bf9bc- kube-system 41c99768-274d-463a-99f4-28ba08a6a5e5 951 0 2025-07-07 00:14:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] 
map[] [] [] []} {k8s ci-4081-3-4-f-11cbdd5b1a coredns-668d6bf9bc-tskgm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7ff605079c4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1f260d92adc467acfc04be6e17a74e6ae224430f72f15063be20f678a6068ed0" Namespace="kube-system" Pod="coredns-668d6bf9bc-tskgm" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--tskgm-" Jul 7 00:14:40.272611 containerd[1507]: 2025-07-07 00:14:40.183 [INFO][4846] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f260d92adc467acfc04be6e17a74e6ae224430f72f15063be20f678a6068ed0" Namespace="kube-system" Pod="coredns-668d6bf9bc-tskgm" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--tskgm-eth0" Jul 7 00:14:40.272611 containerd[1507]: 2025-07-07 00:14:40.214 [INFO][4878] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f260d92adc467acfc04be6e17a74e6ae224430f72f15063be20f678a6068ed0" HandleID="k8s-pod-network.1f260d92adc467acfc04be6e17a74e6ae224430f72f15063be20f678a6068ed0" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--tskgm-eth0" Jul 7 00:14:40.272611 containerd[1507]: 2025-07-07 00:14:40.214 [INFO][4878] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1f260d92adc467acfc04be6e17a74e6ae224430f72f15063be20f678a6068ed0" HandleID="k8s-pod-network.1f260d92adc467acfc04be6e17a74e6ae224430f72f15063be20f678a6068ed0" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--tskgm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5020), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-4-f-11cbdd5b1a", "pod":"coredns-668d6bf9bc-tskgm", "timestamp":"2025-07-07 00:14:40.214469201 +0000 UTC"}, Hostname:"ci-4081-3-4-f-11cbdd5b1a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:14:40.272611 containerd[1507]: 2025-07-07 00:14:40.214 [INFO][4878] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:14:40.272611 containerd[1507]: 2025-07-07 00:14:40.214 [INFO][4878] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
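The coredns endpoint above carries three named ports (dns UDP 53, dns-tcp TCP 53, metrics TCP 9153); in the Go struct dumps that follow they reappear hex-encoded as Port:0x35 and Port:0x23c1. The conversion, for reference:

    package main

    import "fmt"

    func main() {
        // Port values as they appear hex-encoded in the endpoint struct dumps.
        fmt.Println(0x35, 0x35, 0x23c1) // 53 53 9153: dns, dns-tcp, metrics
    }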
Jul 7 00:14:40.272611 containerd[1507]: 2025-07-07 00:14:40.214 [INFO][4878] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-f-11cbdd5b1a' Jul 7 00:14:40.272611 containerd[1507]: 2025-07-07 00:14:40.221 [INFO][4878] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1f260d92adc467acfc04be6e17a74e6ae224430f72f15063be20f678a6068ed0" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:40.272611 containerd[1507]: 2025-07-07 00:14:40.227 [INFO][4878] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:40.272611 containerd[1507]: 2025-07-07 00:14:40.231 [INFO][4878] ipam/ipam.go 511: Trying affinity for 192.168.57.0/26 host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:40.272611 containerd[1507]: 2025-07-07 00:14:40.232 [INFO][4878] ipam/ipam.go 158: Attempting to load block cidr=192.168.57.0/26 host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:40.272611 containerd[1507]: 2025-07-07 00:14:40.234 [INFO][4878] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.57.0/26 host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:40.272611 containerd[1507]: 2025-07-07 00:14:40.234 [INFO][4878] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.57.0/26 handle="k8s-pod-network.1f260d92adc467acfc04be6e17a74e6ae224430f72f15063be20f678a6068ed0" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:40.272611 containerd[1507]: 2025-07-07 00:14:40.235 [INFO][4878] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1f260d92adc467acfc04be6e17a74e6ae224430f72f15063be20f678a6068ed0 Jul 7 00:14:40.272611 containerd[1507]: 2025-07-07 00:14:40.241 [INFO][4878] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.57.0/26 handle="k8s-pod-network.1f260d92adc467acfc04be6e17a74e6ae224430f72f15063be20f678a6068ed0" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:40.272611 containerd[1507]: 2025-07-07 00:14:40.247 [INFO][4878] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.57.6/26] block=192.168.57.0/26 handle="k8s-pod-network.1f260d92adc467acfc04be6e17a74e6ae224430f72f15063be20f678a6068ed0" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:40.272611 containerd[1507]: 2025-07-07 00:14:40.247 [INFO][4878] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.57.6/26] handle="k8s-pod-network.1f260d92adc467acfc04be6e17a74e6ae224430f72f15063be20f678a6068ed0" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:40.272611 containerd[1507]: 2025-07-07 00:14:40.247 [INFO][4878] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
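The IPAM sequence above (acquire the host-wide lock, confirm the block affinity for 192.168.57.0/26, assign one address, write the block back, release the lock) and the earlier "Asked to release address but it doesn't exist. Ignoring" warnings both follow from block-based IPAM with idempotent release. A simplified, self-contained model of that behavior (a toy sketch, not Calico's actual code) might look like:

    package main

    import (
        "fmt"
        "net"
        "sync"
    )

    // blockIPAM is a toy model of per-host block IPAM: one /26 block guarded
    // by a host-wide lock, handle-tracked assignments, and a release that
    // warns-and-succeeds when the handle is unknown, so that CNI DEL stays
    // idempotent, matching the WARNING lines in this log.
    type blockIPAM struct {
        mu     sync.Mutex // stands in for the "host-wide IPAM lock"
        base   net.IP
        next   int
        handle map[string]net.IP
    }

    func newBlockIPAM(cidr string) *blockIPAM {
        ip, _, _ := net.ParseCIDR(cidr)
        // Start at .5 purely to echo the log; real Calico tracks free
        // slots and reservations inside the block document.
        return &blockIPAM{base: ip.To4(), next: 5, handle: map[string]net.IP{}}
    }

    func (b *blockIPAM) Assign(handleID string) net.IP {
        b.mu.Lock()
        defer b.mu.Unlock()
        ip := net.IPv4(b.base[0], b.base[1], b.base[2], b.base[3]+byte(b.next))
        b.next++
        b.handle[handleID] = ip
        return ip
    }

    func (b *blockIPAM) Release(handleID string) {
        b.mu.Lock()
        defer b.mu.Unlock()
        if _, ok := b.handle[handleID]; !ok {
            fmt.Println("WARNING: asked to release address but it doesn't exist. Ignoring")
            return
        }
        delete(b.handle, handleID)
    }

    func main() {
        ipam := newBlockIPAM("192.168.57.0/26")
        fmt.Println(ipam.Assign("handle-a")) // sequential: .5, then .6, .7, ...
        ipam.Release("handle-b")             // unknown handle: warn and ignore
    }

Because every assignment serializes on that lock, the addresses in this section come out in order: .5 for csi-node-driver, .6 for coredns, .7 for goldmane, and .8 for calico-kube-controllers.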
Jul 7 00:14:40.272611 containerd[1507]: 2025-07-07 00:14:40.247 [INFO][4878] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.57.6/26] IPv6=[] ContainerID="1f260d92adc467acfc04be6e17a74e6ae224430f72f15063be20f678a6068ed0" HandleID="k8s-pod-network.1f260d92adc467acfc04be6e17a74e6ae224430f72f15063be20f678a6068ed0" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--tskgm-eth0" Jul 7 00:14:40.274137 containerd[1507]: 2025-07-07 00:14:40.250 [INFO][4846] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f260d92adc467acfc04be6e17a74e6ae224430f72f15063be20f678a6068ed0" Namespace="kube-system" Pod="coredns-668d6bf9bc-tskgm" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--tskgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--tskgm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"41c99768-274d-463a-99f4-28ba08a6a5e5", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"", Pod:"coredns-668d6bf9bc-tskgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ff605079c4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:40.274137 containerd[1507]: 2025-07-07 00:14:40.250 [INFO][4846] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.6/32] ContainerID="1f260d92adc467acfc04be6e17a74e6ae224430f72f15063be20f678a6068ed0" Namespace="kube-system" Pod="coredns-668d6bf9bc-tskgm" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--tskgm-eth0" Jul 7 00:14:40.274137 containerd[1507]: 2025-07-07 00:14:40.250 [INFO][4846] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ff605079c4 ContainerID="1f260d92adc467acfc04be6e17a74e6ae224430f72f15063be20f678a6068ed0" Namespace="kube-system" Pod="coredns-668d6bf9bc-tskgm" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--tskgm-eth0" Jul 7 00:14:40.274137 containerd[1507]: 2025-07-07 00:14:40.254 [INFO][4846] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f260d92adc467acfc04be6e17a74e6ae224430f72f15063be20f678a6068ed0" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-tskgm" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--tskgm-eth0" Jul 7 00:14:40.274137 containerd[1507]: 2025-07-07 00:14:40.254 [INFO][4846] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1f260d92adc467acfc04be6e17a74e6ae224430f72f15063be20f678a6068ed0" Namespace="kube-system" Pod="coredns-668d6bf9bc-tskgm" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--tskgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--tskgm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"41c99768-274d-463a-99f4-28ba08a6a5e5", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"1f260d92adc467acfc04be6e17a74e6ae224430f72f15063be20f678a6068ed0", Pod:"coredns-668d6bf9bc-tskgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ff605079c4", MAC:"56:b0:83:7c:db:3d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:40.274137 containerd[1507]: 2025-07-07 00:14:40.267 [INFO][4846] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f260d92adc467acfc04be6e17a74e6ae224430f72f15063be20f678a6068ed0" Namespace="kube-system" Pod="coredns-668d6bf9bc-tskgm" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--tskgm-eth0" Jul 7 00:14:40.315169 containerd[1507]: time="2025-07-07T00:14:40.314796237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:14:40.315488 containerd[1507]: time="2025-07-07T00:14:40.315434555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:14:40.315606 containerd[1507]: time="2025-07-07T00:14:40.315562610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:14:40.317508 containerd[1507]: time="2025-07-07T00:14:40.316812805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:14:40.338504 systemd[1]: Started cri-containerd-1f260d92adc467acfc04be6e17a74e6ae224430f72f15063be20f678a6068ed0.scope - libcontainer container 1f260d92adc467acfc04be6e17a74e6ae224430f72f15063be20f678a6068ed0. Jul 7 00:14:40.351892 systemd[1]: run-netns-cni\x2d88ac0f86\x2dd863\x2d3104\x2d1901\x2d172c60ef5c44.mount: Deactivated successfully. Jul 7 00:14:40.351959 systemd[1]: run-netns-cni\x2de5b81049\x2d2b35\x2decb0\x2d1d46\x2de28016777b5a.mount: Deactivated successfully. Jul 7 00:14:40.383596 systemd-networkd[1399]: cali82dcf7b5a49: Link UP Jul 7 00:14:40.386464 systemd-networkd[1399]: cali82dcf7b5a49: Gained carrier Jul 7 00:14:40.408352 containerd[1507]: 2025-07-07 00:14:40.176 [INFO][4841] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--f--11cbdd5b1a-k8s-goldmane--768f4c5c69--j6knh-eth0 goldmane-768f4c5c69- calico-system d97012fa-4c0e-4428-b076-69d838ad32a3 950 0 2025-07-07 00:14:15 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-4-f-11cbdd5b1a goldmane-768f4c5c69-j6knh eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali82dcf7b5a49 [] [] }} ContainerID="317d20237847d8ccff939870444a4e51db4e62ca554ea16ea78dc97495bddbb7" Namespace="calico-system" Pod="goldmane-768f4c5c69-j6knh" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-goldmane--768f4c5c69--j6knh-" Jul 7 00:14:40.408352 containerd[1507]: 2025-07-07 00:14:40.176 [INFO][4841] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="317d20237847d8ccff939870444a4e51db4e62ca554ea16ea78dc97495bddbb7" Namespace="calico-system" Pod="goldmane-768f4c5c69-j6knh" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-goldmane--768f4c5c69--j6knh-eth0" Jul 7 00:14:40.408352 containerd[1507]: 2025-07-07 00:14:40.223 [INFO][4873] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="317d20237847d8ccff939870444a4e51db4e62ca554ea16ea78dc97495bddbb7" HandleID="k8s-pod-network.317d20237847d8ccff939870444a4e51db4e62ca554ea16ea78dc97495bddbb7" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-goldmane--768f4c5c69--j6knh-eth0" Jul 7 00:14:40.408352 containerd[1507]: 2025-07-07 00:14:40.225 [INFO][4873] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="317d20237847d8ccff939870444a4e51db4e62ca554ea16ea78dc97495bddbb7" HandleID="k8s-pod-network.317d20237847d8ccff939870444a4e51db4e62ca554ea16ea78dc97495bddbb7" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-goldmane--768f4c5c69--j6knh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-f-11cbdd5b1a", "pod":"goldmane-768f4c5c69-j6knh", "timestamp":"2025-07-07 00:14:40.223059744 +0000 UTC"}, Hostname:"ci-4081-3-4-f-11cbdd5b1a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:14:40.408352 containerd[1507]: 2025-07-07 00:14:40.225 [INFO][4873] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:14:40.408352 containerd[1507]: 2025-07-07 00:14:40.247 [INFO][4873] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:14:40.408352 containerd[1507]: 2025-07-07 00:14:40.247 [INFO][4873] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-f-11cbdd5b1a' Jul 7 00:14:40.408352 containerd[1507]: 2025-07-07 00:14:40.323 [INFO][4873] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.317d20237847d8ccff939870444a4e51db4e62ca554ea16ea78dc97495bddbb7" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:40.408352 containerd[1507]: 2025-07-07 00:14:40.336 [INFO][4873] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:40.408352 containerd[1507]: 2025-07-07 00:14:40.343 [INFO][4873] ipam/ipam.go 511: Trying affinity for 192.168.57.0/26 host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:40.408352 containerd[1507]: 2025-07-07 00:14:40.347 [INFO][4873] ipam/ipam.go 158: Attempting to load block cidr=192.168.57.0/26 host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:40.408352 containerd[1507]: 2025-07-07 00:14:40.350 [INFO][4873] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.57.0/26 host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:40.408352 containerd[1507]: 2025-07-07 00:14:40.350 [INFO][4873] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.57.0/26 handle="k8s-pod-network.317d20237847d8ccff939870444a4e51db4e62ca554ea16ea78dc97495bddbb7" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:40.408352 containerd[1507]: 2025-07-07 00:14:40.358 [INFO][4873] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.317d20237847d8ccff939870444a4e51db4e62ca554ea16ea78dc97495bddbb7 Jul 7 00:14:40.408352 containerd[1507]: 2025-07-07 00:14:40.364 [INFO][4873] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.57.0/26 handle="k8s-pod-network.317d20237847d8ccff939870444a4e51db4e62ca554ea16ea78dc97495bddbb7" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:40.408352 containerd[1507]: 2025-07-07 00:14:40.371 [INFO][4873] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.57.7/26] block=192.168.57.0/26 handle="k8s-pod-network.317d20237847d8ccff939870444a4e51db4e62ca554ea16ea78dc97495bddbb7" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:40.408352 containerd[1507]: 2025-07-07 00:14:40.371 [INFO][4873] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.57.7/26] handle="k8s-pod-network.317d20237847d8ccff939870444a4e51db4e62ca554ea16ea78dc97495bddbb7" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:40.408352 containerd[1507]: 2025-07-07 00:14:40.371 [INFO][4873] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
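The kubelet pod_startup_latency_tracker entries earlier in this section encode a simple relation: podStartSLOduration is the end-to-end duration minus the time spent pulling images, consistent with the Kubernetes pod-startup SLI, which excludes image pulls. For calico-apiserver-6dc4856784-rg2k9 that is 20.115113538s minus (41.557077197 - 39.473055063)s of pulling, using the monotonic m=+ offsets. A sketch of that check:

    package main

    import "fmt"

    func main() {
        // Monotonic offsets (seconds) from the kubelet tracker entry above
        // for calico-apiserver-6dc4856784-rg2k9.
        e2e := 20.115113538
        firstPull := 39.473055063 // m=+ offset of firstStartedPulling
        lastPull := 41.557077197  // m=+ offset of lastFinishedPulling
        fmt.Printf("podStartSLOduration ~ %.9fs\n", e2e-(lastPull-firstPull))
        // ~ 18.031091404s, exactly what the kubelet logged.
    }

The same arithmetic fits the -l6qqr entry (20.265429459 - 2.469702094 = 17.795727365s, rounding to the logged 17.795727364s), and the coredns entry further down shows the degenerate case: its pull timestamps are the zero time, so SLO and E2E durations are identical at 38.099854146s.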
Jul 7 00:14:40.408352 containerd[1507]: 2025-07-07 00:14:40.371 [INFO][4873] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.57.7/26] IPv6=[] ContainerID="317d20237847d8ccff939870444a4e51db4e62ca554ea16ea78dc97495bddbb7" HandleID="k8s-pod-network.317d20237847d8ccff939870444a4e51db4e62ca554ea16ea78dc97495bddbb7" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-goldmane--768f4c5c69--j6knh-eth0" Jul 7 00:14:40.411375 containerd[1507]: 2025-07-07 00:14:40.374 [INFO][4841] cni-plugin/k8s.go 418: Populated endpoint ContainerID="317d20237847d8ccff939870444a4e51db4e62ca554ea16ea78dc97495bddbb7" Namespace="calico-system" Pod="goldmane-768f4c5c69-j6knh" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-goldmane--768f4c5c69--j6knh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-goldmane--768f4c5c69--j6knh-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"d97012fa-4c0e-4428-b076-69d838ad32a3", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"", Pod:"goldmane-768f4c5c69-j6knh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.57.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali82dcf7b5a49", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:40.411375 containerd[1507]: 2025-07-07 00:14:40.374 [INFO][4841] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.7/32] ContainerID="317d20237847d8ccff939870444a4e51db4e62ca554ea16ea78dc97495bddbb7" Namespace="calico-system" Pod="goldmane-768f4c5c69-j6knh" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-goldmane--768f4c5c69--j6knh-eth0" Jul 7 00:14:40.411375 containerd[1507]: 2025-07-07 00:14:40.374 [INFO][4841] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali82dcf7b5a49 ContainerID="317d20237847d8ccff939870444a4e51db4e62ca554ea16ea78dc97495bddbb7" Namespace="calico-system" Pod="goldmane-768f4c5c69-j6knh" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-goldmane--768f4c5c69--j6knh-eth0" Jul 7 00:14:40.411375 containerd[1507]: 2025-07-07 00:14:40.385 [INFO][4841] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="317d20237847d8ccff939870444a4e51db4e62ca554ea16ea78dc97495bddbb7" Namespace="calico-system" Pod="goldmane-768f4c5c69-j6knh" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-goldmane--768f4c5c69--j6knh-eth0" Jul 7 00:14:40.411375 containerd[1507]: 2025-07-07 00:14:40.387 [INFO][4841] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="317d20237847d8ccff939870444a4e51db4e62ca554ea16ea78dc97495bddbb7" 
Namespace="calico-system" Pod="goldmane-768f4c5c69-j6knh" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-goldmane--768f4c5c69--j6knh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-goldmane--768f4c5c69--j6knh-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"d97012fa-4c0e-4428-b076-69d838ad32a3", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"317d20237847d8ccff939870444a4e51db4e62ca554ea16ea78dc97495bddbb7", Pod:"goldmane-768f4c5c69-j6knh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.57.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali82dcf7b5a49", MAC:"c6:dc:cb:1e:8d:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:40.411375 containerd[1507]: 2025-07-07 00:14:40.401 [INFO][4841] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="317d20237847d8ccff939870444a4e51db4e62ca554ea16ea78dc97495bddbb7" Namespace="calico-system" Pod="goldmane-768f4c5c69-j6knh" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-goldmane--768f4c5c69--j6knh-eth0" Jul 7 00:14:40.428882 containerd[1507]: time="2025-07-07T00:14:40.428454083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tskgm,Uid:41c99768-274d-463a-99f4-28ba08a6a5e5,Namespace:kube-system,Attempt:1,} returns sandbox id \"1f260d92adc467acfc04be6e17a74e6ae224430f72f15063be20f678a6068ed0\"" Jul 7 00:14:40.438449 containerd[1507]: time="2025-07-07T00:14:40.437529662Z" level=info msg="CreateContainer within sandbox \"1f260d92adc467acfc04be6e17a74e6ae224430f72f15063be20f678a6068ed0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:14:40.447821 containerd[1507]: time="2025-07-07T00:14:40.446570354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:14:40.447821 containerd[1507]: time="2025-07-07T00:14:40.447215184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:14:40.447821 containerd[1507]: time="2025-07-07T00:14:40.447225215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:14:40.447821 containerd[1507]: time="2025-07-07T00:14:40.447304064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:14:40.475693 systemd[1]: run-containerd-runc-k8s.io-317d20237847d8ccff939870444a4e51db4e62ca554ea16ea78dc97495bddbb7-runc.UBKSQ6.mount: Deactivated successfully. Jul 7 00:14:40.485239 systemd[1]: Started cri-containerd-317d20237847d8ccff939870444a4e51db4e62ca554ea16ea78dc97495bddbb7.scope - libcontainer container 317d20237847d8ccff939870444a4e51db4e62ca554ea16ea78dc97495bddbb7. Jul 7 00:14:40.492085 containerd[1507]: time="2025-07-07T00:14:40.492062378Z" level=info msg="CreateContainer within sandbox \"1f260d92adc467acfc04be6e17a74e6ae224430f72f15063be20f678a6068ed0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7e3f1238c753c154b6412fb5d408cd6ce864ca85127676d1e0fae580d529dd3e\"" Jul 7 00:14:40.493095 containerd[1507]: time="2025-07-07T00:14:40.493044332Z" level=info msg="StartContainer for \"7e3f1238c753c154b6412fb5d408cd6ce864ca85127676d1e0fae580d529dd3e\"" Jul 7 00:14:40.513249 systemd[1]: Started cri-containerd-7e3f1238c753c154b6412fb5d408cd6ce864ca85127676d1e0fae580d529dd3e.scope - libcontainer container 7e3f1238c753c154b6412fb5d408cd6ce864ca85127676d1e0fae580d529dd3e. Jul 7 00:14:40.536254 containerd[1507]: time="2025-07-07T00:14:40.536219875Z" level=info msg="StartContainer for \"7e3f1238c753c154b6412fb5d408cd6ce864ca85127676d1e0fae580d529dd3e\" returns successfully" Jul 7 00:14:40.584077 containerd[1507]: time="2025-07-07T00:14:40.584043086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-j6knh,Uid:d97012fa-4c0e-4428-b076-69d838ad32a3,Namespace:calico-system,Attempt:1,} returns sandbox id \"317d20237847d8ccff939870444a4e51db4e62ca554ea16ea78dc97495bddbb7\"" Jul 7 00:14:40.870899 containerd[1507]: time="2025-07-07T00:14:40.870769663Z" level=info msg="StopPodSandbox for \"a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71\"" Jul 7 00:14:40.950547 containerd[1507]: 2025-07-07 00:14:40.916 [INFO][5037] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" Jul 7 00:14:40.950547 containerd[1507]: 2025-07-07 00:14:40.916 [INFO][5037] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" iface="eth0" netns="/var/run/netns/cni-bb2b92bc-9570-d659-9e40-13412adccd24" Jul 7 00:14:40.950547 containerd[1507]: 2025-07-07 00:14:40.917 [INFO][5037] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" iface="eth0" netns="/var/run/netns/cni-bb2b92bc-9570-d659-9e40-13412adccd24" Jul 7 00:14:40.950547 containerd[1507]: 2025-07-07 00:14:40.917 [INFO][5037] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" iface="eth0" netns="/var/run/netns/cni-bb2b92bc-9570-d659-9e40-13412adccd24" Jul 7 00:14:40.950547 containerd[1507]: 2025-07-07 00:14:40.917 [INFO][5037] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" Jul 7 00:14:40.950547 containerd[1507]: 2025-07-07 00:14:40.917 [INFO][5037] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" Jul 7 00:14:40.950547 containerd[1507]: 2025-07-07 00:14:40.940 [INFO][5044] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" HandleID="k8s-pod-network.a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--kube--controllers--c694d7bd6--rbrgf-eth0" Jul 7 00:14:40.950547 containerd[1507]: 2025-07-07 00:14:40.940 [INFO][5044] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:14:40.950547 containerd[1507]: 2025-07-07 00:14:40.940 [INFO][5044] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:14:40.950547 containerd[1507]: 2025-07-07 00:14:40.945 [WARNING][5044] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" HandleID="k8s-pod-network.a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--kube--controllers--c694d7bd6--rbrgf-eth0" Jul 7 00:14:40.950547 containerd[1507]: 2025-07-07 00:14:40.945 [INFO][5044] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" HandleID="k8s-pod-network.a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--kube--controllers--c694d7bd6--rbrgf-eth0" Jul 7 00:14:40.950547 containerd[1507]: 2025-07-07 00:14:40.946 [INFO][5044] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:14:40.950547 containerd[1507]: 2025-07-07 00:14:40.948 [INFO][5037] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" Jul 7 00:14:40.950547 containerd[1507]: time="2025-07-07T00:14:40.949965935Z" level=info msg="TearDown network for sandbox \"a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71\" successfully" Jul 7 00:14:40.950547 containerd[1507]: time="2025-07-07T00:14:40.949989220Z" level=info msg="StopPodSandbox for \"a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71\" returns successfully" Jul 7 00:14:40.953044 containerd[1507]: time="2025-07-07T00:14:40.952423114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c694d7bd6-rbrgf,Uid:02a7d5a0-fb36-4269-a981-53d0ee8cb78e,Namespace:calico-system,Attempt:1,}" Jul 7 00:14:41.066486 systemd-networkd[1399]: calib4c33e8d846: Link UP Jul 7 00:14:41.066945 systemd-networkd[1399]: calib4c33e8d846: Gained carrier Jul 7 00:14:41.090174 kubelet[2712]: I0707 00:14:41.086805 2712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:14:41.090743 kubelet[2712]: I0707 00:14:41.090723 2712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:14:41.091259 containerd[1507]: 2025-07-07 00:14:40.997 [INFO][5050] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--f--11cbdd5b1a-k8s-calico--kube--controllers--c694d7bd6--rbrgf-eth0 calico-kube-controllers-c694d7bd6- calico-system 02a7d5a0-fb36-4269-a981-53d0ee8cb78e 972 0 2025-07-07 00:14:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c694d7bd6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-4-f-11cbdd5b1a calico-kube-controllers-c694d7bd6-rbrgf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib4c33e8d846 [] [] }} ContainerID="b4cfca4295a7e87df8f4da838eb03d8f51b7259d0d6b8018448ee867426c4443" Namespace="calico-system" Pod="calico-kube-controllers-c694d7bd6-rbrgf" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--kube--controllers--c694d7bd6--rbrgf-" Jul 7 00:14:41.091259 containerd[1507]: 2025-07-07 00:14:40.997 [INFO][5050] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b4cfca4295a7e87df8f4da838eb03d8f51b7259d0d6b8018448ee867426c4443" Namespace="calico-system" Pod="calico-kube-controllers-c694d7bd6-rbrgf" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--kube--controllers--c694d7bd6--rbrgf-eth0" Jul 7 00:14:41.091259 containerd[1507]: 2025-07-07 00:14:41.023 [INFO][5063] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4cfca4295a7e87df8f4da838eb03d8f51b7259d0d6b8018448ee867426c4443" HandleID="k8s-pod-network.b4cfca4295a7e87df8f4da838eb03d8f51b7259d0d6b8018448ee867426c4443" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--kube--controllers--c694d7bd6--rbrgf-eth0" Jul 7 00:14:41.091259 containerd[1507]: 2025-07-07 00:14:41.023 [INFO][5063] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b4cfca4295a7e87df8f4da838eb03d8f51b7259d0d6b8018448ee867426c4443" HandleID="k8s-pod-network.b4cfca4295a7e87df8f4da838eb03d8f51b7259d0d6b8018448ee867426c4443" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--kube--controllers--c694d7bd6--rbrgf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-f-11cbdd5b1a", "pod":"calico-kube-controllers-c694d7bd6-rbrgf", "timestamp":"2025-07-07 00:14:41.023043759 +0000 UTC"}, Hostname:"ci-4081-3-4-f-11cbdd5b1a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:14:41.091259 containerd[1507]: 2025-07-07 00:14:41.023 [INFO][5063] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:14:41.091259 containerd[1507]: 2025-07-07 00:14:41.023 [INFO][5063] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:14:41.091259 containerd[1507]: 2025-07-07 00:14:41.023 [INFO][5063] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-f-11cbdd5b1a' Jul 7 00:14:41.091259 containerd[1507]: 2025-07-07 00:14:41.030 [INFO][5063] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b4cfca4295a7e87df8f4da838eb03d8f51b7259d0d6b8018448ee867426c4443" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:41.091259 containerd[1507]: 2025-07-07 00:14:41.035 [INFO][5063] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:41.091259 containerd[1507]: 2025-07-07 00:14:41.039 [INFO][5063] ipam/ipam.go 511: Trying affinity for 192.168.57.0/26 host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:41.091259 containerd[1507]: 2025-07-07 00:14:41.042 [INFO][5063] ipam/ipam.go 158: Attempting to load block cidr=192.168.57.0/26 host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:41.091259 containerd[1507]: 2025-07-07 00:14:41.044 [INFO][5063] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.57.0/26 host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:41.091259 containerd[1507]: 2025-07-07 00:14:41.044 [INFO][5063] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.57.0/26 handle="k8s-pod-network.b4cfca4295a7e87df8f4da838eb03d8f51b7259d0d6b8018448ee867426c4443" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:41.091259 containerd[1507]: 2025-07-07 00:14:41.048 [INFO][5063] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b4cfca4295a7e87df8f4da838eb03d8f51b7259d0d6b8018448ee867426c4443 Jul 7 00:14:41.091259 containerd[1507]: 2025-07-07 00:14:41.052 [INFO][5063] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.57.0/26 handle="k8s-pod-network.b4cfca4295a7e87df8f4da838eb03d8f51b7259d0d6b8018448ee867426c4443" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:41.091259 containerd[1507]: 2025-07-07 00:14:41.058 [INFO][5063] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.57.8/26] block=192.168.57.0/26 handle="k8s-pod-network.b4cfca4295a7e87df8f4da838eb03d8f51b7259d0d6b8018448ee867426c4443" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:41.091259 containerd[1507]: 2025-07-07 00:14:41.058 [INFO][5063] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.57.8/26] handle="k8s-pod-network.b4cfca4295a7e87df8f4da838eb03d8f51b7259d0d6b8018448ee867426c4443" host="ci-4081-3-4-f-11cbdd5b1a" Jul 7 00:14:41.091259 containerd[1507]: 2025-07-07 00:14:41.058 [INFO][5063] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
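Unit names like run-netns-cni\x2d88ac0f86\x2dd863\x2d3104\x2d1901\x2d172c60ef5c44.mount in this section are systemd-escaped paths: '/' becomes '-' and a literal '-' inside a component is encoded as \x2d, so that unit maps back to /run/netns/cni-88ac0f86-d863-3104-1901-172c60ef5c44. A minimal decoder for that escaping (systemd-escape itself validates more than this sketch does):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // unescapeUnitPath reverses systemd's path escaping for unit names:
    // '-' separates path components and "\xNN" encodes the byte 0xNN.
    func unescapeUnitPath(name string) string {
        name = strings.TrimSuffix(name, ".mount")
        var out strings.Builder
        out.WriteByte('/')
        for i := 0; i < len(name); i++ {
            switch {
            case name[i] == '-':
                out.WriteByte('/')
            case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
                b, _ := strconv.ParseUint(name[i+2:i+4], 16, 8)
                out.WriteByte(byte(b))
                i += 3
            default:
                out.WriteByte(name[i])
            }
        }
        return out.String()
    }

    func main() {
        fmt.Println(unescapeUnitPath(
            `run-netns-cni\x2d88ac0f86\x2dd863\x2d3104\x2d1901\x2d172c60ef5c44.mount`))
        // /run/netns/cni-88ac0f86-d863-3104-1901-172c60ef5c44
    }

The "Deactivated successfully" lines for these units are systemd tearing down the transient mount units for per-sandbox network namespaces after the CNI plugin has finished with them.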
Jul 7 00:14:41.091259 containerd[1507]: 2025-07-07 00:14:41.058 [INFO][5063] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.57.8/26] IPv6=[] ContainerID="b4cfca4295a7e87df8f4da838eb03d8f51b7259d0d6b8018448ee867426c4443" HandleID="k8s-pod-network.b4cfca4295a7e87df8f4da838eb03d8f51b7259d0d6b8018448ee867426c4443" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--kube--controllers--c694d7bd6--rbrgf-eth0" Jul 7 00:14:41.091749 containerd[1507]: 2025-07-07 00:14:41.062 [INFO][5050] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b4cfca4295a7e87df8f4da838eb03d8f51b7259d0d6b8018448ee867426c4443" Namespace="calico-system" Pod="calico-kube-controllers-c694d7bd6-rbrgf" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--kube--controllers--c694d7bd6--rbrgf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-calico--kube--controllers--c694d7bd6--rbrgf-eth0", GenerateName:"calico-kube-controllers-c694d7bd6-", Namespace:"calico-system", SelfLink:"", UID:"02a7d5a0-fb36-4269-a981-53d0ee8cb78e", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c694d7bd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"", Pod:"calico-kube-controllers-c694d7bd6-rbrgf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.57.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib4c33e8d846", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:41.091749 containerd[1507]: 2025-07-07 00:14:41.062 [INFO][5050] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.8/32] ContainerID="b4cfca4295a7e87df8f4da838eb03d8f51b7259d0d6b8018448ee867426c4443" Namespace="calico-system" Pod="calico-kube-controllers-c694d7bd6-rbrgf" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--kube--controllers--c694d7bd6--rbrgf-eth0" Jul 7 00:14:41.091749 containerd[1507]: 2025-07-07 00:14:41.062 [INFO][5050] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4c33e8d846 ContainerID="b4cfca4295a7e87df8f4da838eb03d8f51b7259d0d6b8018448ee867426c4443" Namespace="calico-system" Pod="calico-kube-controllers-c694d7bd6-rbrgf" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--kube--controllers--c694d7bd6--rbrgf-eth0" Jul 7 00:14:41.091749 containerd[1507]: 2025-07-07 00:14:41.066 [INFO][5050] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4cfca4295a7e87df8f4da838eb03d8f51b7259d0d6b8018448ee867426c4443" Namespace="calico-system" Pod="calico-kube-controllers-c694d7bd6-rbrgf" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--kube--controllers--c694d7bd6--rbrgf-eth0" Jul 7 
00:14:41.091749 containerd[1507]: 2025-07-07 00:14:41.069 [INFO][5050] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b4cfca4295a7e87df8f4da838eb03d8f51b7259d0d6b8018448ee867426c4443" Namespace="calico-system" Pod="calico-kube-controllers-c694d7bd6-rbrgf" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--kube--controllers--c694d7bd6--rbrgf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-calico--kube--controllers--c694d7bd6--rbrgf-eth0", GenerateName:"calico-kube-controllers-c694d7bd6-", Namespace:"calico-system", SelfLink:"", UID:"02a7d5a0-fb36-4269-a981-53d0ee8cb78e", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c694d7bd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"b4cfca4295a7e87df8f4da838eb03d8f51b7259d0d6b8018448ee867426c4443", Pod:"calico-kube-controllers-c694d7bd6-rbrgf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.57.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib4c33e8d846", MAC:"fe:14:0e:c0:78:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:41.091749 containerd[1507]: 2025-07-07 00:14:41.086 [INFO][5050] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b4cfca4295a7e87df8f4da838eb03d8f51b7259d0d6b8018448ee867426c4443" Namespace="calico-system" Pod="calico-kube-controllers-c694d7bd6-rbrgf" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--kube--controllers--c694d7bd6--rbrgf-eth0" Jul 7 00:14:41.100160 kubelet[2712]: I0707 00:14:41.099869 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tskgm" podStartSLOduration=38.099854146 podStartE2EDuration="38.099854146s" podCreationTimestamp="2025-07-07 00:14:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:14:41.099117198 +0000 UTC m=+43.322430225" watchObservedRunningTime="2025-07-07 00:14:41.099854146 +0000 UTC m=+43.323167173" Jul 7 00:14:41.162217 containerd[1507]: time="2025-07-07T00:14:41.160380974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:14:41.162217 containerd[1507]: time="2025-07-07T00:14:41.161689093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:14:41.162441 containerd[1507]: time="2025-07-07T00:14:41.161701346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:14:41.162441 containerd[1507]: time="2025-07-07T00:14:41.161821176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:14:41.191261 systemd[1]: Started cri-containerd-b4cfca4295a7e87df8f4da838eb03d8f51b7259d0d6b8018448ee867426c4443.scope - libcontainer container b4cfca4295a7e87df8f4da838eb03d8f51b7259d0d6b8018448ee867426c4443. Jul 7 00:14:41.234236 containerd[1507]: time="2025-07-07T00:14:41.234206829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c694d7bd6-rbrgf,Uid:02a7d5a0-fb36-4269-a981-53d0ee8cb78e,Namespace:calico-system,Attempt:1,} returns sandbox id \"b4cfca4295a7e87df8f4da838eb03d8f51b7259d0d6b8018448ee867426c4443\"" Jul 7 00:14:41.349320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3367660059.mount: Deactivated successfully. Jul 7 00:14:41.349660 systemd[1]: run-netns-cni\x2dbb2b92bc\x2d9570\x2dd659\x2d9e40\x2d13412adccd24.mount: Deactivated successfully. Jul 7 00:14:41.446870 containerd[1507]: time="2025-07-07T00:14:41.446765755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:41.447953 containerd[1507]: time="2025-07-07T00:14:41.447913197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 7 00:14:41.448991 containerd[1507]: time="2025-07-07T00:14:41.448950519Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:41.450647 containerd[1507]: time="2025-07-07T00:14:41.450611332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:41.451489 containerd[1507]: time="2025-07-07T00:14:41.451316139Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.632125045s" Jul 7 00:14:41.451489 containerd[1507]: time="2025-07-07T00:14:41.451350204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 7 00:14:41.452481 containerd[1507]: time="2025-07-07T00:14:41.452331929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 00:14:41.456699 containerd[1507]: time="2025-07-07T00:14:41.456669446Z" level=info msg="CreateContainer within sandbox \"ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 00:14:41.483346 containerd[1507]: time="2025-07-07T00:14:41.483292850Z" level=info msg="CreateContainer within sandbox \"ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e1344e6eee2c81726f16813fb804f36e9d68a1d29786bb8afade744d293228e3\"" Jul 7 00:14:41.483960 containerd[1507]: time="2025-07-07T00:14:41.483929157Z" level=info msg="StartContainer for 
\"e1344e6eee2c81726f16813fb804f36e9d68a1d29786bb8afade744d293228e3\"" Jul 7 00:14:41.530313 systemd[1]: Started cri-containerd-e1344e6eee2c81726f16813fb804f36e9d68a1d29786bb8afade744d293228e3.scope - libcontainer container e1344e6eee2c81726f16813fb804f36e9d68a1d29786bb8afade744d293228e3. Jul 7 00:14:41.554231 containerd[1507]: time="2025-07-07T00:14:41.554132450Z" level=info msg="StartContainer for \"e1344e6eee2c81726f16813fb804f36e9d68a1d29786bb8afade744d293228e3\" returns successfully" Jul 7 00:14:41.860349 systemd-networkd[1399]: cali82dcf7b5a49: Gained IPv6LL Jul 7 00:14:42.116795 systemd-networkd[1399]: cali7ff605079c4: Gained IPv6LL Jul 7 00:14:42.244353 systemd-networkd[1399]: calib4c33e8d846: Gained IPv6LL Jul 7 00:14:45.495683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2267037414.mount: Deactivated successfully. Jul 7 00:14:45.863342 containerd[1507]: time="2025-07-07T00:14:45.863228886Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:45.864751 containerd[1507]: time="2025-07-07T00:14:45.864711189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 7 00:14:45.865947 containerd[1507]: time="2025-07-07T00:14:45.865894019Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:45.872825 containerd[1507]: time="2025-07-07T00:14:45.872767517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:45.873681 containerd[1507]: time="2025-07-07T00:14:45.873276544Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.420915768s" Jul 7 00:14:45.873681 containerd[1507]: time="2025-07-07T00:14:45.873301181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 7 00:14:45.882437 containerd[1507]: time="2025-07-07T00:14:45.882347357Z" level=info msg="CreateContainer within sandbox \"317d20237847d8ccff939870444a4e51db4e62ca554ea16ea78dc97495bddbb7\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 00:14:45.882505 containerd[1507]: time="2025-07-07T00:14:45.882481054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 00:14:45.911998 containerd[1507]: time="2025-07-07T00:14:45.911903904Z" level=info msg="CreateContainer within sandbox \"317d20237847d8ccff939870444a4e51db4e62ca554ea16ea78dc97495bddbb7\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"cb000f2efbe0d00108c345827d979324d159b84299a0f86c99f881c1b5573b78\"" Jul 7 00:14:45.925600 containerd[1507]: time="2025-07-07T00:14:45.925538664Z" level=info msg="StartContainer for \"cb000f2efbe0d00108c345827d979324d159b84299a0f86c99f881c1b5573b78\"" Jul 7 00:14:45.980277 systemd[1]: Started 
cri-containerd-cb000f2efbe0d00108c345827d979324d159b84299a0f86c99f881c1b5573b78.scope - libcontainer container cb000f2efbe0d00108c345827d979324d159b84299a0f86c99f881c1b5573b78. Jul 7 00:14:46.044335 containerd[1507]: time="2025-07-07T00:14:46.043873183Z" level=info msg="StartContainer for \"cb000f2efbe0d00108c345827d979324d159b84299a0f86c99f881c1b5573b78\" returns successfully" Jul 7 00:14:46.305169 kubelet[2712]: I0707 00:14:46.304944 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-j6knh" podStartSLOduration=26.01188772 podStartE2EDuration="31.296632078s" podCreationTimestamp="2025-07-07 00:14:15 +0000 UTC" firstStartedPulling="2025-07-07 00:14:40.589198084 +0000 UTC m=+42.812511111" lastFinishedPulling="2025-07-07 00:14:45.873942442 +0000 UTC m=+48.097255469" observedRunningTime="2025-07-07 00:14:46.268997066 +0000 UTC m=+48.492310163" watchObservedRunningTime="2025-07-07 00:14:46.296632078 +0000 UTC m=+48.519945115" Jul 7 00:14:47.153723 kubelet[2712]: I0707 00:14:47.153676 2712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:14:48.464398 containerd[1507]: time="2025-07-07T00:14:48.464345206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:48.465345 containerd[1507]: time="2025-07-07T00:14:48.465311085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 7 00:14:48.466563 containerd[1507]: time="2025-07-07T00:14:48.466093299Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:48.468246 containerd[1507]: time="2025-07-07T00:14:48.468226314Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:48.468767 containerd[1507]: time="2025-07-07T00:14:48.468749059Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 2.586246935s" Jul 7 00:14:48.468852 containerd[1507]: time="2025-07-07T00:14:48.468839343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 7 00:14:48.475935 containerd[1507]: time="2025-07-07T00:14:48.475919736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 00:14:48.610505 containerd[1507]: time="2025-07-07T00:14:48.610454766Z" level=info msg="CreateContainer within sandbox \"b4cfca4295a7e87df8f4da838eb03d8f51b7259d0d6b8018448ee867426c4443\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 00:14:48.621964 containerd[1507]: time="2025-07-07T00:14:48.621416977Z" level=info msg="CreateContainer within sandbox \"b4cfca4295a7e87df8f4da838eb03d8f51b7259d0d6b8018448ee867426c4443\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"c72930c2688fe81588530ccb779b27f999f0b8d7fc91d878f7b7ef521e4d54fb\"" Jul 7 00:14:48.623058 containerd[1507]: time="2025-07-07T00:14:48.622278204Z" level=info msg="StartContainer for \"c72930c2688fe81588530ccb779b27f999f0b8d7fc91d878f7b7ef521e4d54fb\"" Jul 7 00:14:48.686373 systemd[1]: Started cri-containerd-c72930c2688fe81588530ccb779b27f999f0b8d7fc91d878f7b7ef521e4d54fb.scope - libcontainer container c72930c2688fe81588530ccb779b27f999f0b8d7fc91d878f7b7ef521e4d54fb. Jul 7 00:14:48.732846 containerd[1507]: time="2025-07-07T00:14:48.732722373Z" level=info msg="StartContainer for \"c72930c2688fe81588530ccb779b27f999f0b8d7fc91d878f7b7ef521e4d54fb\" returns successfully" Jul 7 00:14:49.270795 kubelet[2712]: I0707 00:14:49.270717 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c694d7bd6-rbrgf" podStartSLOduration=26.023181968 podStartE2EDuration="33.263459522s" podCreationTimestamp="2025-07-07 00:14:16 +0000 UTC" firstStartedPulling="2025-07-07 00:14:41.235498246 +0000 UTC m=+43.458811272" lastFinishedPulling="2025-07-07 00:14:48.475775799 +0000 UTC m=+50.699088826" observedRunningTime="2025-07-07 00:14:49.242236281 +0000 UTC m=+51.465549308" watchObservedRunningTime="2025-07-07 00:14:49.263459522 +0000 UTC m=+51.486772559" Jul 7 00:14:50.133202 containerd[1507]: time="2025-07-07T00:14:50.133122342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:50.134217 containerd[1507]: time="2025-07-07T00:14:50.134170683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 7 00:14:50.135696 containerd[1507]: time="2025-07-07T00:14:50.135661537Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:50.137618 containerd[1507]: time="2025-07-07T00:14:50.137577209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:50.138251 containerd[1507]: time="2025-07-07T00:14:50.138039961Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.661928585s" Jul 7 00:14:50.138251 containerd[1507]: time="2025-07-07T00:14:50.138068645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 7 00:14:50.147910 containerd[1507]: time="2025-07-07T00:14:50.147820882Z" level=info msg="CreateContainer within sandbox \"ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 00:14:50.162438 containerd[1507]: time="2025-07-07T00:14:50.162385163Z" level=info msg="CreateContainer within sandbox \"ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d\" for 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6af188c03a94bd21b670d46a68ce7e00089396f907b79fdd535d0ad9d0d509cb\"" Jul 7 00:14:50.177358 containerd[1507]: time="2025-07-07T00:14:50.176012197Z" level=info msg="StartContainer for \"6af188c03a94bd21b670d46a68ce7e00089396f907b79fdd535d0ad9d0d509cb\"" Jul 7 00:14:50.234308 systemd[1]: Started cri-containerd-6af188c03a94bd21b670d46a68ce7e00089396f907b79fdd535d0ad9d0d509cb.scope - libcontainer container 6af188c03a94bd21b670d46a68ce7e00089396f907b79fdd535d0ad9d0d509cb. Jul 7 00:14:50.261112 containerd[1507]: time="2025-07-07T00:14:50.261053664Z" level=info msg="StartContainer for \"6af188c03a94bd21b670d46a68ce7e00089396f907b79fdd535d0ad9d0d509cb\" returns successfully" Jul 7 00:14:51.044006 kubelet[2712]: I0707 00:14:51.037836 2712 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 00:14:51.048241 kubelet[2712]: I0707 00:14:51.048218 2712 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 00:14:51.332740 kubelet[2712]: I0707 00:14:51.332112 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5qbjk" podStartSLOduration=23.379312115 podStartE2EDuration="35.332095107s" podCreationTimestamp="2025-07-07 00:14:16 +0000 UTC" firstStartedPulling="2025-07-07 00:14:38.187714555 +0000 UTC m=+40.411027582" lastFinishedPulling="2025-07-07 00:14:50.140497547 +0000 UTC m=+52.363810574" observedRunningTime="2025-07-07 00:14:51.330969566 +0000 UTC m=+53.554282593" watchObservedRunningTime="2025-07-07 00:14:51.332095107 +0000 UTC m=+53.555408165" Jul 7 00:14:52.874123 kubelet[2712]: I0707 00:14:52.873749 2712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:14:53.168809 systemd[1]: run-containerd-runc-k8s.io-cb000f2efbe0d00108c345827d979324d159b84299a0f86c99f881c1b5573b78-runc.SuqONB.mount: Deactivated successfully. Jul 7 00:14:57.962543 containerd[1507]: time="2025-07-07T00:14:57.962482350Z" level=info msg="StopPodSandbox for \"7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b\"" Jul 7 00:14:58.305298 containerd[1507]: 2025-07-07 00:14:58.154 [WARNING][5394] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-goldmane--768f4c5c69--j6knh-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"d97012fa-4c0e-4428-b076-69d838ad32a3", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"317d20237847d8ccff939870444a4e51db4e62ca554ea16ea78dc97495bddbb7", Pod:"goldmane-768f4c5c69-j6knh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.57.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali82dcf7b5a49", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:58.305298 containerd[1507]: 2025-07-07 00:14:58.157 [INFO][5394] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" Jul 7 00:14:58.305298 containerd[1507]: 2025-07-07 00:14:58.157 [INFO][5394] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" iface="eth0" netns="" Jul 7 00:14:58.305298 containerd[1507]: 2025-07-07 00:14:58.157 [INFO][5394] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" Jul 7 00:14:58.305298 containerd[1507]: 2025-07-07 00:14:58.157 [INFO][5394] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" Jul 7 00:14:58.305298 containerd[1507]: 2025-07-07 00:14:58.287 [INFO][5401] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" HandleID="k8s-pod-network.7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-goldmane--768f4c5c69--j6knh-eth0" Jul 7 00:14:58.305298 containerd[1507]: 2025-07-07 00:14:58.290 [INFO][5401] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:14:58.305298 containerd[1507]: 2025-07-07 00:14:58.290 [INFO][5401] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:14:58.305298 containerd[1507]: 2025-07-07 00:14:58.299 [WARNING][5401] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" HandleID="k8s-pod-network.7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-goldmane--768f4c5c69--j6knh-eth0" Jul 7 00:14:58.305298 containerd[1507]: 2025-07-07 00:14:58.299 [INFO][5401] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" HandleID="k8s-pod-network.7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-goldmane--768f4c5c69--j6knh-eth0" Jul 7 00:14:58.305298 containerd[1507]: 2025-07-07 00:14:58.301 [INFO][5401] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:14:58.305298 containerd[1507]: 2025-07-07 00:14:58.303 [INFO][5394] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" Jul 7 00:14:58.309132 containerd[1507]: time="2025-07-07T00:14:58.305326219Z" level=info msg="TearDown network for sandbox \"7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b\" successfully" Jul 7 00:14:58.309132 containerd[1507]: time="2025-07-07T00:14:58.305359553Z" level=info msg="StopPodSandbox for \"7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b\" returns successfully" Jul 7 00:14:58.369120 containerd[1507]: time="2025-07-07T00:14:58.369072300Z" level=info msg="RemovePodSandbox for \"7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b\"" Jul 7 00:14:58.370753 containerd[1507]: time="2025-07-07T00:14:58.370717830Z" level=info msg="Forcibly stopping sandbox \"7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b\"" Jul 7 00:14:58.430006 containerd[1507]: 2025-07-07 00:14:58.396 [WARNING][5415] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-goldmane--768f4c5c69--j6knh-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"d97012fa-4c0e-4428-b076-69d838ad32a3", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"317d20237847d8ccff939870444a4e51db4e62ca554ea16ea78dc97495bddbb7", Pod:"goldmane-768f4c5c69-j6knh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.57.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali82dcf7b5a49", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:58.430006 containerd[1507]: 2025-07-07 00:14:58.397 [INFO][5415] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" Jul 7 00:14:58.430006 containerd[1507]: 2025-07-07 00:14:58.397 [INFO][5415] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" iface="eth0" netns="" Jul 7 00:14:58.430006 containerd[1507]: 2025-07-07 00:14:58.397 [INFO][5415] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" Jul 7 00:14:58.430006 containerd[1507]: 2025-07-07 00:14:58.397 [INFO][5415] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" Jul 7 00:14:58.430006 containerd[1507]: 2025-07-07 00:14:58.414 [INFO][5423] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" HandleID="k8s-pod-network.7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-goldmane--768f4c5c69--j6knh-eth0" Jul 7 00:14:58.430006 containerd[1507]: 2025-07-07 00:14:58.415 [INFO][5423] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:14:58.430006 containerd[1507]: 2025-07-07 00:14:58.415 [INFO][5423] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:14:58.430006 containerd[1507]: 2025-07-07 00:14:58.423 [WARNING][5423] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" HandleID="k8s-pod-network.7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-goldmane--768f4c5c69--j6knh-eth0" Jul 7 00:14:58.430006 containerd[1507]: 2025-07-07 00:14:58.423 [INFO][5423] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" HandleID="k8s-pod-network.7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-goldmane--768f4c5c69--j6knh-eth0" Jul 7 00:14:58.430006 containerd[1507]: 2025-07-07 00:14:58.425 [INFO][5423] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:14:58.430006 containerd[1507]: 2025-07-07 00:14:58.427 [INFO][5415] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b" Jul 7 00:14:58.431620 containerd[1507]: time="2025-07-07T00:14:58.430058841Z" level=info msg="TearDown network for sandbox \"7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b\" successfully" Jul 7 00:14:58.440247 containerd[1507]: time="2025-07-07T00:14:58.440201225Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:14:58.445708 containerd[1507]: time="2025-07-07T00:14:58.445681490Z" level=info msg="RemovePodSandbox \"7bbc10f5cc4c86b7d30e80f0470b757614f7bab64b6a26c056dc6ee4eeb08a5b\" returns successfully" Jul 7 00:14:58.451195 containerd[1507]: time="2025-07-07T00:14:58.451131678Z" level=info msg="StopPodSandbox for \"a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71\"" Jul 7 00:14:58.518977 containerd[1507]: 2025-07-07 00:14:58.482 [WARNING][5437] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-calico--kube--controllers--c694d7bd6--rbrgf-eth0", GenerateName:"calico-kube-controllers-c694d7bd6-", Namespace:"calico-system", SelfLink:"", UID:"02a7d5a0-fb36-4269-a981-53d0ee8cb78e", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c694d7bd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"b4cfca4295a7e87df8f4da838eb03d8f51b7259d0d6b8018448ee867426c4443", Pod:"calico-kube-controllers-c694d7bd6-rbrgf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.57.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib4c33e8d846", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:58.518977 containerd[1507]: 2025-07-07 00:14:58.482 [INFO][5437] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" Jul 7 00:14:58.518977 containerd[1507]: 2025-07-07 00:14:58.482 [INFO][5437] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" iface="eth0" netns="" Jul 7 00:14:58.518977 containerd[1507]: 2025-07-07 00:14:58.482 [INFO][5437] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" Jul 7 00:14:58.518977 containerd[1507]: 2025-07-07 00:14:58.483 [INFO][5437] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" Jul 7 00:14:58.518977 containerd[1507]: 2025-07-07 00:14:58.508 [INFO][5450] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" HandleID="k8s-pod-network.a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--kube--controllers--c694d7bd6--rbrgf-eth0" Jul 7 00:14:58.518977 containerd[1507]: 2025-07-07 00:14:58.508 [INFO][5450] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:14:58.518977 containerd[1507]: 2025-07-07 00:14:58.508 [INFO][5450] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:14:58.518977 containerd[1507]: 2025-07-07 00:14:58.513 [WARNING][5450] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" HandleID="k8s-pod-network.a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--kube--controllers--c694d7bd6--rbrgf-eth0" Jul 7 00:14:58.518977 containerd[1507]: 2025-07-07 00:14:58.513 [INFO][5450] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" HandleID="k8s-pod-network.a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--kube--controllers--c694d7bd6--rbrgf-eth0" Jul 7 00:14:58.518977 containerd[1507]: 2025-07-07 00:14:58.515 [INFO][5450] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:14:58.518977 containerd[1507]: 2025-07-07 00:14:58.517 [INFO][5437] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" Jul 7 00:14:58.520036 containerd[1507]: time="2025-07-07T00:14:58.519021573Z" level=info msg="TearDown network for sandbox \"a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71\" successfully" Jul 7 00:14:58.520036 containerd[1507]: time="2025-07-07T00:14:58.519042433Z" level=info msg="StopPodSandbox for \"a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71\" returns successfully" Jul 7 00:14:58.522029 containerd[1507]: time="2025-07-07T00:14:58.521999266Z" level=info msg="RemovePodSandbox for \"a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71\"" Jul 7 00:14:58.522029 containerd[1507]: time="2025-07-07T00:14:58.522027331Z" level=info msg="Forcibly stopping sandbox \"a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71\"" Jul 7 00:14:58.591441 containerd[1507]: 2025-07-07 00:14:58.555 [WARNING][5477] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-calico--kube--controllers--c694d7bd6--rbrgf-eth0", GenerateName:"calico-kube-controllers-c694d7bd6-", Namespace:"calico-system", SelfLink:"", UID:"02a7d5a0-fb36-4269-a981-53d0ee8cb78e", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c694d7bd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"b4cfca4295a7e87df8f4da838eb03d8f51b7259d0d6b8018448ee867426c4443", Pod:"calico-kube-controllers-c694d7bd6-rbrgf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.57.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib4c33e8d846", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:58.591441 containerd[1507]: 2025-07-07 00:14:58.555 [INFO][5477] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" Jul 7 00:14:58.591441 containerd[1507]: 2025-07-07 00:14:58.556 [INFO][5477] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" iface="eth0" netns="" Jul 7 00:14:58.591441 containerd[1507]: 2025-07-07 00:14:58.556 [INFO][5477] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" Jul 7 00:14:58.591441 containerd[1507]: 2025-07-07 00:14:58.556 [INFO][5477] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" Jul 7 00:14:58.591441 containerd[1507]: 2025-07-07 00:14:58.579 [INFO][5485] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" HandleID="k8s-pod-network.a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--kube--controllers--c694d7bd6--rbrgf-eth0" Jul 7 00:14:58.591441 containerd[1507]: 2025-07-07 00:14:58.580 [INFO][5485] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:14:58.591441 containerd[1507]: 2025-07-07 00:14:58.580 [INFO][5485] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:14:58.591441 containerd[1507]: 2025-07-07 00:14:58.585 [WARNING][5485] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" HandleID="k8s-pod-network.a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--kube--controllers--c694d7bd6--rbrgf-eth0" Jul 7 00:14:58.591441 containerd[1507]: 2025-07-07 00:14:58.585 [INFO][5485] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" HandleID="k8s-pod-network.a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--kube--controllers--c694d7bd6--rbrgf-eth0" Jul 7 00:14:58.591441 containerd[1507]: 2025-07-07 00:14:58.587 [INFO][5485] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:14:58.591441 containerd[1507]: 2025-07-07 00:14:58.589 [INFO][5477] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71" Jul 7 00:14:58.591441 containerd[1507]: time="2025-07-07T00:14:58.591403187Z" level=info msg="TearDown network for sandbox \"a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71\" successfully" Jul 7 00:14:58.595899 containerd[1507]: time="2025-07-07T00:14:58.595746378Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:14:58.595899 containerd[1507]: time="2025-07-07T00:14:58.595806995Z" level=info msg="RemovePodSandbox \"a627654c21016a71742e06660038ed4697c33dfcc35e2c5b4213c850f5591c71\" returns successfully" Jul 7 00:14:58.600176 containerd[1507]: time="2025-07-07T00:14:58.599971289Z" level=info msg="StopPodSandbox for \"77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce\"" Jul 7 00:14:58.660388 containerd[1507]: 2025-07-07 00:14:58.631 [WARNING][5500] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--tskgm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"41c99768-274d-463a-99f4-28ba08a6a5e5", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"1f260d92adc467acfc04be6e17a74e6ae224430f72f15063be20f678a6068ed0", Pod:"coredns-668d6bf9bc-tskgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ff605079c4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:58.660388 containerd[1507]: 2025-07-07 00:14:58.632 [INFO][5500] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" Jul 7 00:14:58.660388 containerd[1507]: 2025-07-07 00:14:58.632 [INFO][5500] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" iface="eth0" netns="" Jul 7 00:14:58.660388 containerd[1507]: 2025-07-07 00:14:58.632 [INFO][5500] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" Jul 7 00:14:58.660388 containerd[1507]: 2025-07-07 00:14:58.632 [INFO][5500] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" Jul 7 00:14:58.660388 containerd[1507]: 2025-07-07 00:14:58.649 [INFO][5510] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" HandleID="k8s-pod-network.77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--tskgm-eth0" Jul 7 00:14:58.660388 containerd[1507]: 2025-07-07 00:14:58.649 [INFO][5510] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:14:58.660388 containerd[1507]: 2025-07-07 00:14:58.650 [INFO][5510] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:14:58.660388 containerd[1507]: 2025-07-07 00:14:58.655 [WARNING][5510] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" HandleID="k8s-pod-network.77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--tskgm-eth0" Jul 7 00:14:58.660388 containerd[1507]: 2025-07-07 00:14:58.655 [INFO][5510] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" HandleID="k8s-pod-network.77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--tskgm-eth0" Jul 7 00:14:58.660388 containerd[1507]: 2025-07-07 00:14:58.657 [INFO][5510] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:14:58.660388 containerd[1507]: 2025-07-07 00:14:58.658 [INFO][5500] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" Jul 7 00:14:58.661576 containerd[1507]: time="2025-07-07T00:14:58.660642078Z" level=info msg="TearDown network for sandbox \"77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce\" successfully" Jul 7 00:14:58.661576 containerd[1507]: time="2025-07-07T00:14:58.660665312Z" level=info msg="StopPodSandbox for \"77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce\" returns successfully" Jul 7 00:14:58.661576 containerd[1507]: time="2025-07-07T00:14:58.661186113Z" level=info msg="RemovePodSandbox for \"77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce\"" Jul 7 00:14:58.661576 containerd[1507]: time="2025-07-07T00:14:58.661208787Z" level=info msg="Forcibly stopping sandbox \"77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce\"" Jul 7 00:14:58.716307 containerd[1507]: 2025-07-07 00:14:58.687 [WARNING][5525] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--tskgm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"41c99768-274d-463a-99f4-28ba08a6a5e5", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"1f260d92adc467acfc04be6e17a74e6ae224430f72f15063be20f678a6068ed0", Pod:"coredns-668d6bf9bc-tskgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ff605079c4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:58.716307 containerd[1507]: 2025-07-07 00:14:58.688 [INFO][5525] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" Jul 7 00:14:58.716307 containerd[1507]: 2025-07-07 00:14:58.688 [INFO][5525] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" iface="eth0" netns="" Jul 7 00:14:58.716307 containerd[1507]: 2025-07-07 00:14:58.688 [INFO][5525] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" Jul 7 00:14:58.716307 containerd[1507]: 2025-07-07 00:14:58.688 [INFO][5525] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" Jul 7 00:14:58.716307 containerd[1507]: 2025-07-07 00:14:58.707 [INFO][5532] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" HandleID="k8s-pod-network.77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--tskgm-eth0" Jul 7 00:14:58.716307 containerd[1507]: 2025-07-07 00:14:58.707 [INFO][5532] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:14:58.716307 containerd[1507]: 2025-07-07 00:14:58.707 [INFO][5532] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:14:58.716307 containerd[1507]: 2025-07-07 00:14:58.711 [WARNING][5532] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" HandleID="k8s-pod-network.77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--tskgm-eth0" Jul 7 00:14:58.716307 containerd[1507]: 2025-07-07 00:14:58.711 [INFO][5532] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" HandleID="k8s-pod-network.77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--tskgm-eth0" Jul 7 00:14:58.716307 containerd[1507]: 2025-07-07 00:14:58.713 [INFO][5532] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:14:58.716307 containerd[1507]: 2025-07-07 00:14:58.714 [INFO][5525] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce" Jul 7 00:14:58.716836 containerd[1507]: time="2025-07-07T00:14:58.716333333Z" level=info msg="TearDown network for sandbox \"77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce\" successfully" Jul 7 00:14:58.719081 containerd[1507]: time="2025-07-07T00:14:58.719054097Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:14:58.719293 containerd[1507]: time="2025-07-07T00:14:58.719104565Z" level=info msg="RemovePodSandbox \"77be9c42289b2c7dedfe06d20967b7f28263a5fc6014f47c6aacd143c29a67ce\" returns successfully" Jul 7 00:14:58.719862 containerd[1507]: time="2025-07-07T00:14:58.719638169Z" level=info msg="StopPodSandbox for \"0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501\"" Jul 7 00:14:58.771302 containerd[1507]: 2025-07-07 00:14:58.744 [WARNING][5546] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--rg2k9-eth0", GenerateName:"calico-apiserver-6dc4856784-", Namespace:"calico-apiserver", SelfLink:"", UID:"4afa15e9-a817-4cbf-8d61-a91ddd7b4568", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc4856784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"109a7885a7c0354838b0cb75e4aa144b86e396c2f2db321ba2bb7168e46648c5", Pod:"calico-apiserver-6dc4856784-rg2k9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2af1ee94e05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:58.771302 containerd[1507]: 2025-07-07 00:14:58.744 [INFO][5546] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" Jul 7 00:14:58.771302 containerd[1507]: 2025-07-07 00:14:58.744 [INFO][5546] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" iface="eth0" netns="" Jul 7 00:14:58.771302 containerd[1507]: 2025-07-07 00:14:58.744 [INFO][5546] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" Jul 7 00:14:58.771302 containerd[1507]: 2025-07-07 00:14:58.744 [INFO][5546] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" Jul 7 00:14:58.771302 containerd[1507]: 2025-07-07 00:14:58.759 [INFO][5554] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" HandleID="k8s-pod-network.0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--rg2k9-eth0" Jul 7 00:14:58.771302 containerd[1507]: 2025-07-07 00:14:58.760 [INFO][5554] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:14:58.771302 containerd[1507]: 2025-07-07 00:14:58.760 [INFO][5554] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:14:58.771302 containerd[1507]: 2025-07-07 00:14:58.765 [WARNING][5554] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" HandleID="k8s-pod-network.0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--rg2k9-eth0" Jul 7 00:14:58.771302 containerd[1507]: 2025-07-07 00:14:58.765 [INFO][5554] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" HandleID="k8s-pod-network.0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--rg2k9-eth0" Jul 7 00:14:58.771302 containerd[1507]: 2025-07-07 00:14:58.767 [INFO][5554] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:14:58.771302 containerd[1507]: 2025-07-07 00:14:58.769 [INFO][5546] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" Jul 7 00:14:58.771675 containerd[1507]: time="2025-07-07T00:14:58.771341402Z" level=info msg="TearDown network for sandbox \"0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501\" successfully" Jul 7 00:14:58.771675 containerd[1507]: time="2025-07-07T00:14:58.771370970Z" level=info msg="StopPodSandbox for \"0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501\" returns successfully" Jul 7 00:14:58.771822 containerd[1507]: time="2025-07-07T00:14:58.771786045Z" level=info msg="RemovePodSandbox for \"0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501\"" Jul 7 00:14:58.771822 containerd[1507]: time="2025-07-07T00:14:58.771812296Z" level=info msg="Forcibly stopping sandbox \"0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501\"" Jul 7 00:14:58.851912 containerd[1507]: 2025-07-07 00:14:58.819 [WARNING][5569] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--rg2k9-eth0", GenerateName:"calico-apiserver-6dc4856784-", Namespace:"calico-apiserver", SelfLink:"", UID:"4afa15e9-a817-4cbf-8d61-a91ddd7b4568", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc4856784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"109a7885a7c0354838b0cb75e4aa144b86e396c2f2db321ba2bb7168e46648c5", Pod:"calico-apiserver-6dc4856784-rg2k9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2af1ee94e05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:58.851912 containerd[1507]: 2025-07-07 00:14:58.819 [INFO][5569] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" Jul 7 00:14:58.851912 containerd[1507]: 2025-07-07 00:14:58.819 [INFO][5569] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" iface="eth0" netns="" Jul 7 00:14:58.851912 containerd[1507]: 2025-07-07 00:14:58.820 [INFO][5569] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" Jul 7 00:14:58.851912 containerd[1507]: 2025-07-07 00:14:58.820 [INFO][5569] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" Jul 7 00:14:58.851912 containerd[1507]: 2025-07-07 00:14:58.839 [INFO][5576] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" HandleID="k8s-pod-network.0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--rg2k9-eth0" Jul 7 00:14:58.851912 containerd[1507]: 2025-07-07 00:14:58.839 [INFO][5576] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:14:58.851912 containerd[1507]: 2025-07-07 00:14:58.839 [INFO][5576] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:14:58.851912 containerd[1507]: 2025-07-07 00:14:58.845 [WARNING][5576] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" HandleID="k8s-pod-network.0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--rg2k9-eth0" Jul 7 00:14:58.851912 containerd[1507]: 2025-07-07 00:14:58.845 [INFO][5576] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" HandleID="k8s-pod-network.0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--rg2k9-eth0" Jul 7 00:14:58.851912 containerd[1507]: 2025-07-07 00:14:58.847 [INFO][5576] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:14:58.851912 containerd[1507]: 2025-07-07 00:14:58.848 [INFO][5569] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501" Jul 7 00:14:58.851912 containerd[1507]: time="2025-07-07T00:14:58.850993422Z" level=info msg="TearDown network for sandbox \"0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501\" successfully" Jul 7 00:14:58.859710 containerd[1507]: time="2025-07-07T00:14:58.859669774Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:14:58.859760 containerd[1507]: time="2025-07-07T00:14:58.859725653Z" level=info msg="RemovePodSandbox \"0c6b1acb02a30fb0c138a1bed71a2bdf097b0661497390e225f693cca93ab501\" returns successfully" Jul 7 00:14:58.860125 containerd[1507]: time="2025-07-07T00:14:58.860091462Z" level=info msg="StopPodSandbox for \"11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1\"" Jul 7 00:14:58.914273 containerd[1507]: 2025-07-07 00:14:58.887 [WARNING][5591] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-csi--node--driver--5qbjk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d", Pod:"csi-node-driver-5qbjk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.57.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidba1a4c7d85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:58.914273 containerd[1507]: 2025-07-07 00:14:58.887 [INFO][5591] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" Jul 7 00:14:58.914273 containerd[1507]: 2025-07-07 00:14:58.887 [INFO][5591] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" iface="eth0" netns="" Jul 7 00:14:58.914273 containerd[1507]: 2025-07-07 00:14:58.887 [INFO][5591] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" Jul 7 00:14:58.914273 containerd[1507]: 2025-07-07 00:14:58.887 [INFO][5591] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" Jul 7 00:14:58.914273 containerd[1507]: 2025-07-07 00:14:58.903 [INFO][5599] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" HandleID="k8s-pod-network.11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-csi--node--driver--5qbjk-eth0" Jul 7 00:14:58.914273 containerd[1507]: 2025-07-07 00:14:58.903 [INFO][5599] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:14:58.914273 containerd[1507]: 2025-07-07 00:14:58.903 [INFO][5599] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:14:58.914273 containerd[1507]: 2025-07-07 00:14:58.909 [WARNING][5599] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" HandleID="k8s-pod-network.11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-csi--node--driver--5qbjk-eth0" Jul 7 00:14:58.914273 containerd[1507]: 2025-07-07 00:14:58.909 [INFO][5599] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" HandleID="k8s-pod-network.11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-csi--node--driver--5qbjk-eth0" Jul 7 00:14:58.914273 containerd[1507]: 2025-07-07 00:14:58.910 [INFO][5599] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:14:58.914273 containerd[1507]: 2025-07-07 00:14:58.912 [INFO][5591] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" Jul 7 00:14:58.914621 containerd[1507]: time="2025-07-07T00:14:58.914316905Z" level=info msg="TearDown network for sandbox \"11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1\" successfully" Jul 7 00:14:58.914621 containerd[1507]: time="2025-07-07T00:14:58.914338887Z" level=info msg="StopPodSandbox for \"11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1\" returns successfully" Jul 7 00:14:58.915282 containerd[1507]: time="2025-07-07T00:14:58.914722601Z" level=info msg="RemovePodSandbox for \"11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1\"" Jul 7 00:14:58.915282 containerd[1507]: time="2025-07-07T00:14:58.914745275Z" level=info msg="Forcibly stopping sandbox \"11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1\"" Jul 7 00:14:58.967292 containerd[1507]: 2025-07-07 00:14:58.941 [WARNING][5613] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-csi--node--driver--5qbjk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2d14ecfa-e7e4-4bf3-a0e0-11dc0ae5c7c6", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"ec9d50bdf7747d6f9cc4a26218af3bf7b03cb38ee17a348e1d679a6c2fdb760d", Pod:"csi-node-driver-5qbjk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.57.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidba1a4c7d85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:58.967292 containerd[1507]: 2025-07-07 00:14:58.941 [INFO][5613] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" Jul 7 00:14:58.967292 containerd[1507]: 2025-07-07 00:14:58.941 [INFO][5613] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" iface="eth0" netns="" Jul 7 00:14:58.967292 containerd[1507]: 2025-07-07 00:14:58.941 [INFO][5613] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" Jul 7 00:14:58.967292 containerd[1507]: 2025-07-07 00:14:58.941 [INFO][5613] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" Jul 7 00:14:58.967292 containerd[1507]: 2025-07-07 00:14:58.957 [INFO][5620] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" HandleID="k8s-pod-network.11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-csi--node--driver--5qbjk-eth0" Jul 7 00:14:58.967292 containerd[1507]: 2025-07-07 00:14:58.957 [INFO][5620] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:14:58.967292 containerd[1507]: 2025-07-07 00:14:58.957 [INFO][5620] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:14:58.967292 containerd[1507]: 2025-07-07 00:14:58.962 [WARNING][5620] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" HandleID="k8s-pod-network.11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-csi--node--driver--5qbjk-eth0" Jul 7 00:14:58.967292 containerd[1507]: 2025-07-07 00:14:58.962 [INFO][5620] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" HandleID="k8s-pod-network.11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-csi--node--driver--5qbjk-eth0" Jul 7 00:14:58.967292 containerd[1507]: 2025-07-07 00:14:58.963 [INFO][5620] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:14:58.967292 containerd[1507]: 2025-07-07 00:14:58.965 [INFO][5613] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1" Jul 7 00:14:58.967292 containerd[1507]: time="2025-07-07T00:14:58.967103117Z" level=info msg="TearDown network for sandbox \"11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1\" successfully" Jul 7 00:14:58.969978 containerd[1507]: time="2025-07-07T00:14:58.969952160Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:14:58.970043 containerd[1507]: time="2025-07-07T00:14:58.970003110Z" level=info msg="RemovePodSandbox \"11059170d535ce0c1a3bc05f9b1a11d6b9420542f9927211d6698a766a831ad1\" returns successfully" Jul 7 00:14:58.970389 containerd[1507]: time="2025-07-07T00:14:58.970367757Z" level=info msg="StopPodSandbox for \"ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4\"" Jul 7 00:14:59.021588 containerd[1507]: 2025-07-07 00:14:58.996 [WARNING][5636] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-whisker--7c6d6c95d9--49rxq-eth0" Jul 7 00:14:59.021588 containerd[1507]: 2025-07-07 00:14:58.996 [INFO][5636] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" Jul 7 00:14:59.021588 containerd[1507]: 2025-07-07 00:14:58.996 [INFO][5636] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" iface="eth0" netns="" Jul 7 00:14:59.021588 containerd[1507]: 2025-07-07 00:14:58.996 [INFO][5636] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" Jul 7 00:14:59.021588 containerd[1507]: 2025-07-07 00:14:58.996 [INFO][5636] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" Jul 7 00:14:59.021588 containerd[1507]: 2025-07-07 00:14:59.012 [INFO][5643] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" HandleID="k8s-pod-network.ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-whisker--7c6d6c95d9--49rxq-eth0" Jul 7 00:14:59.021588 containerd[1507]: 2025-07-07 00:14:59.012 [INFO][5643] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:14:59.021588 containerd[1507]: 2025-07-07 00:14:59.012 [INFO][5643] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:14:59.021588 containerd[1507]: 2025-07-07 00:14:59.017 [WARNING][5643] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" HandleID="k8s-pod-network.ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-whisker--7c6d6c95d9--49rxq-eth0" Jul 7 00:14:59.021588 containerd[1507]: 2025-07-07 00:14:59.017 [INFO][5643] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" HandleID="k8s-pod-network.ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-whisker--7c6d6c95d9--49rxq-eth0" Jul 7 00:14:59.021588 containerd[1507]: 2025-07-07 00:14:59.018 [INFO][5643] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:14:59.021588 containerd[1507]: 2025-07-07 00:14:59.020 [INFO][5636] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" Jul 7 00:14:59.022034 containerd[1507]: time="2025-07-07T00:14:59.021662373Z" level=info msg="TearDown network for sandbox \"ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4\" successfully" Jul 7 00:14:59.022034 containerd[1507]: time="2025-07-07T00:14:59.021685507Z" level=info msg="StopPodSandbox for \"ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4\" returns successfully" Jul 7 00:14:59.022204 containerd[1507]: time="2025-07-07T00:14:59.022170248Z" level=info msg="RemovePodSandbox for \"ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4\"" Jul 7 00:14:59.022256 containerd[1507]: time="2025-07-07T00:14:59.022205596Z" level=info msg="Forcibly stopping sandbox \"ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4\"" Jul 7 00:14:59.070432 containerd[1507]: 2025-07-07 00:14:59.045 [WARNING][5658] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" WorkloadEndpoint="ci--4081--3--4--f--11cbdd5b1a-k8s-whisker--7c6d6c95d9--49rxq-eth0" Jul 7 00:14:59.070432 containerd[1507]: 2025-07-07 00:14:59.045 [INFO][5658] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" Jul 7 00:14:59.070432 containerd[1507]: 2025-07-07 00:14:59.045 [INFO][5658] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" iface="eth0" netns="" Jul 7 00:14:59.070432 containerd[1507]: 2025-07-07 00:14:59.045 [INFO][5658] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" Jul 7 00:14:59.070432 containerd[1507]: 2025-07-07 00:14:59.045 [INFO][5658] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" Jul 7 00:14:59.070432 containerd[1507]: 2025-07-07 00:14:59.061 [INFO][5665] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" HandleID="k8s-pod-network.ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-whisker--7c6d6c95d9--49rxq-eth0" Jul 7 00:14:59.070432 containerd[1507]: 2025-07-07 00:14:59.061 [INFO][5665] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:14:59.070432 containerd[1507]: 2025-07-07 00:14:59.061 [INFO][5665] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:14:59.070432 containerd[1507]: 2025-07-07 00:14:59.065 [WARNING][5665] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" HandleID="k8s-pod-network.ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-whisker--7c6d6c95d9--49rxq-eth0" Jul 7 00:14:59.070432 containerd[1507]: 2025-07-07 00:14:59.065 [INFO][5665] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" HandleID="k8s-pod-network.ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-whisker--7c6d6c95d9--49rxq-eth0" Jul 7 00:14:59.070432 containerd[1507]: 2025-07-07 00:14:59.067 [INFO][5665] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:14:59.070432 containerd[1507]: 2025-07-07 00:14:59.068 [INFO][5658] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4" Jul 7 00:14:59.070432 containerd[1507]: time="2025-07-07T00:14:59.070370378Z" level=info msg="TearDown network for sandbox \"ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4\" successfully" Jul 7 00:14:59.073302 containerd[1507]: time="2025-07-07T00:14:59.073261156Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:14:59.073350 containerd[1507]: time="2025-07-07T00:14:59.073322986Z" level=info msg="RemovePodSandbox \"ccc0cab990f16b7e114c8cd7ef1906fe70e4d47305119bbfb2053917d5c716c4\" returns successfully" Jul 7 00:14:59.073764 containerd[1507]: time="2025-07-07T00:14:59.073738843Z" level=info msg="StopPodSandbox for \"f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f\"" Jul 7 00:14:59.124884 containerd[1507]: 2025-07-07 00:14:59.099 [WARNING][5679] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--l6qqr-eth0", GenerateName:"calico-apiserver-6dc4856784-", Namespace:"calico-apiserver", SelfLink:"", UID:"411d6daa-0c1d-48e1-908e-bd61e12d7879", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc4856784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"d419f2e3ccf81182f54b129bd1c951edb27d910b45454a7469533150af320739", Pod:"calico-apiserver-6dc4856784-l6qqr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif4a6c20c7b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:59.124884 containerd[1507]: 2025-07-07 00:14:59.099 [INFO][5679] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" Jul 7 00:14:59.124884 containerd[1507]: 2025-07-07 00:14:59.099 [INFO][5679] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" iface="eth0" netns="" Jul 7 00:14:59.124884 containerd[1507]: 2025-07-07 00:14:59.099 [INFO][5679] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" Jul 7 00:14:59.124884 containerd[1507]: 2025-07-07 00:14:59.099 [INFO][5679] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" Jul 7 00:14:59.124884 containerd[1507]: 2025-07-07 00:14:59.115 [INFO][5686] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" HandleID="k8s-pod-network.f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--l6qqr-eth0" Jul 7 00:14:59.124884 containerd[1507]: 2025-07-07 00:14:59.115 [INFO][5686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:14:59.124884 containerd[1507]: 2025-07-07 00:14:59.115 [INFO][5686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:14:59.124884 containerd[1507]: 2025-07-07 00:14:59.120 [WARNING][5686] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" HandleID="k8s-pod-network.f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--l6qqr-eth0" Jul 7 00:14:59.124884 containerd[1507]: 2025-07-07 00:14:59.120 [INFO][5686] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" HandleID="k8s-pod-network.f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--l6qqr-eth0" Jul 7 00:14:59.124884 containerd[1507]: 2025-07-07 00:14:59.121 [INFO][5686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:14:59.124884 containerd[1507]: 2025-07-07 00:14:59.123 [INFO][5679] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" Jul 7 00:14:59.124884 containerd[1507]: time="2025-07-07T00:14:59.124849641Z" level=info msg="TearDown network for sandbox \"f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f\" successfully" Jul 7 00:14:59.124884 containerd[1507]: time="2025-07-07T00:14:59.124872605Z" level=info msg="StopPodSandbox for \"f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f\" returns successfully" Jul 7 00:14:59.126716 containerd[1507]: time="2025-07-07T00:14:59.126302168Z" level=info msg="RemovePodSandbox for \"f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f\"" Jul 7 00:14:59.126716 containerd[1507]: time="2025-07-07T00:14:59.126324932Z" level=info msg="Forcibly stopping sandbox \"f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f\"" Jul 7 00:14:59.175845 containerd[1507]: 2025-07-07 00:14:59.150 [WARNING][5701] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--l6qqr-eth0", GenerateName:"calico-apiserver-6dc4856784-", Namespace:"calico-apiserver", SelfLink:"", UID:"411d6daa-0c1d-48e1-908e-bd61e12d7879", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc4856784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"d419f2e3ccf81182f54b129bd1c951edb27d910b45454a7469533150af320739", Pod:"calico-apiserver-6dc4856784-l6qqr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif4a6c20c7b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:59.175845 containerd[1507]: 2025-07-07 00:14:59.150 [INFO][5701] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" Jul 7 00:14:59.175845 containerd[1507]: 2025-07-07 00:14:59.150 [INFO][5701] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" iface="eth0" netns="" Jul 7 00:14:59.175845 containerd[1507]: 2025-07-07 00:14:59.150 [INFO][5701] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" Jul 7 00:14:59.175845 containerd[1507]: 2025-07-07 00:14:59.150 [INFO][5701] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" Jul 7 00:14:59.175845 containerd[1507]: 2025-07-07 00:14:59.166 [INFO][5708] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" HandleID="k8s-pod-network.f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--l6qqr-eth0" Jul 7 00:14:59.175845 containerd[1507]: 2025-07-07 00:14:59.166 [INFO][5708] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:14:59.175845 containerd[1507]: 2025-07-07 00:14:59.166 [INFO][5708] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:14:59.175845 containerd[1507]: 2025-07-07 00:14:59.171 [WARNING][5708] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" HandleID="k8s-pod-network.f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--l6qqr-eth0" Jul 7 00:14:59.175845 containerd[1507]: 2025-07-07 00:14:59.171 [INFO][5708] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" HandleID="k8s-pod-network.f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-calico--apiserver--6dc4856784--l6qqr-eth0" Jul 7 00:14:59.175845 containerd[1507]: 2025-07-07 00:14:59.172 [INFO][5708] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:14:59.175845 containerd[1507]: 2025-07-07 00:14:59.173 [INFO][5701] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f" Jul 7 00:14:59.175845 containerd[1507]: time="2025-07-07T00:14:59.175694511Z" level=info msg="TearDown network for sandbox \"f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f\" successfully" Jul 7 00:14:59.178690 containerd[1507]: time="2025-07-07T00:14:59.178638833Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:14:59.178690 containerd[1507]: time="2025-07-07T00:14:59.178687758Z" level=info msg="RemovePodSandbox \"f3c4f1719c64cadfeccad2551cf8ebf46b13e8ff2b356d6cf687e8fc3152ad3f\" returns successfully" Jul 7 00:14:59.179156 containerd[1507]: time="2025-07-07T00:14:59.179118383Z" level=info msg="StopPodSandbox for \"b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7\"" Jul 7 00:14:59.238415 containerd[1507]: 2025-07-07 00:14:59.205 [WARNING][5723] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--9qbxm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"298451b0-5619-4a6f-8aad-35320d360358", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"bb201b15320b322893a416b3f7be23e43828159ed10e6de6937c56ebe16df224", Pod:"coredns-668d6bf9bc-9qbxm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali363398ec313", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:59.238415 containerd[1507]: 2025-07-07 00:14:59.205 [INFO][5723] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" Jul 7 00:14:59.238415 containerd[1507]: 2025-07-07 00:14:59.205 [INFO][5723] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" iface="eth0" netns="" Jul 7 00:14:59.238415 containerd[1507]: 2025-07-07 00:14:59.205 [INFO][5723] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" Jul 7 00:14:59.238415 containerd[1507]: 2025-07-07 00:14:59.205 [INFO][5723] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" Jul 7 00:14:59.238415 containerd[1507]: 2025-07-07 00:14:59.227 [INFO][5730] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" HandleID="k8s-pod-network.b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--9qbxm-eth0" Jul 7 00:14:59.238415 containerd[1507]: 2025-07-07 00:14:59.227 [INFO][5730] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:14:59.238415 containerd[1507]: 2025-07-07 00:14:59.227 [INFO][5730] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:14:59.238415 containerd[1507]: 2025-07-07 00:14:59.232 [WARNING][5730] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" HandleID="k8s-pod-network.b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--9qbxm-eth0" Jul 7 00:14:59.238415 containerd[1507]: 2025-07-07 00:14:59.233 [INFO][5730] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" HandleID="k8s-pod-network.b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--9qbxm-eth0" Jul 7 00:14:59.238415 containerd[1507]: 2025-07-07 00:14:59.234 [INFO][5730] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:14:59.238415 containerd[1507]: 2025-07-07 00:14:59.236 [INFO][5723] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" Jul 7 00:14:59.238776 containerd[1507]: time="2025-07-07T00:14:59.238433889Z" level=info msg="TearDown network for sandbox \"b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7\" successfully" Jul 7 00:14:59.238776 containerd[1507]: time="2025-07-07T00:14:59.238452584Z" level=info msg="StopPodSandbox for \"b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7\" returns successfully" Jul 7 00:14:59.238776 containerd[1507]: time="2025-07-07T00:14:59.238654336Z" level=info msg="RemovePodSandbox for \"b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7\"" Jul 7 00:14:59.238776 containerd[1507]: time="2025-07-07T00:14:59.238673673Z" level=info msg="Forcibly stopping sandbox \"b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7\"" Jul 7 00:14:59.327215 containerd[1507]: 2025-07-07 00:14:59.271 [WARNING][5744] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--9qbxm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"298451b0-5619-4a6f-8aad-35320d360358", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-f-11cbdd5b1a", ContainerID:"bb201b15320b322893a416b3f7be23e43828159ed10e6de6937c56ebe16df224", Pod:"coredns-668d6bf9bc-9qbxm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali363398ec313", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:14:59.327215 containerd[1507]: 2025-07-07 00:14:59.271 [INFO][5744] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" Jul 7 00:14:59.327215 containerd[1507]: 2025-07-07 00:14:59.271 [INFO][5744] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" iface="eth0" netns="" Jul 7 00:14:59.327215 containerd[1507]: 2025-07-07 00:14:59.271 [INFO][5744] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" Jul 7 00:14:59.327215 containerd[1507]: 2025-07-07 00:14:59.271 [INFO][5744] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" Jul 7 00:14:59.327215 containerd[1507]: 2025-07-07 00:14:59.290 [INFO][5752] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" HandleID="k8s-pod-network.b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--9qbxm-eth0" Jul 7 00:14:59.327215 containerd[1507]: 2025-07-07 00:14:59.291 [INFO][5752] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:14:59.327215 containerd[1507]: 2025-07-07 00:14:59.291 [INFO][5752] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:14:59.327215 containerd[1507]: 2025-07-07 00:14:59.306 [WARNING][5752] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" HandleID="k8s-pod-network.b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--9qbxm-eth0" Jul 7 00:14:59.327215 containerd[1507]: 2025-07-07 00:14:59.306 [INFO][5752] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" HandleID="k8s-pod-network.b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" Workload="ci--4081--3--4--f--11cbdd5b1a-k8s-coredns--668d6bf9bc--9qbxm-eth0" Jul 7 00:14:59.327215 containerd[1507]: 2025-07-07 00:14:59.309 [INFO][5752] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:14:59.327215 containerd[1507]: 2025-07-07 00:14:59.312 [INFO][5744] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7" Jul 7 00:14:59.327215 containerd[1507]: time="2025-07-07T00:14:59.327112589Z" level=info msg="TearDown network for sandbox \"b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7\" successfully" Jul 7 00:14:59.330867 containerd[1507]: time="2025-07-07T00:14:59.330832245Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:14:59.330943 containerd[1507]: time="2025-07-07T00:14:59.330884086Z" level=info msg="RemovePodSandbox \"b5718bdbd4017801f2655b66822faf214ceaf61bad5612f5089fd62aee9400a7\" returns successfully" Jul 7 00:15:00.903819 kubelet[2712]: I0707 00:15:00.899389 2712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:15:10.431444 kubelet[2712]: I0707 00:15:10.431310 2712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:15:19.284811 systemd[1]: run-containerd-runc-k8s.io-c72930c2688fe81588530ccb779b27f999f0b8d7fc91d878f7b7ef521e4d54fb-runc.b5e4Mr.mount: Deactivated successfully. Jul 7 00:15:23.172817 systemd[1]: run-containerd-runc-k8s.io-cb000f2efbe0d00108c345827d979324d159b84299a0f86c99f881c1b5573b78-runc.lwp9rj.mount: Deactivated successfully. Jul 7 00:15:32.047084 systemd[1]: run-containerd-runc-k8s.io-bddcb7dc119b01f133257637249e1b967923a2b7916e8ef7a5a45a27a4901068-runc.XFeWbg.mount: Deactivated successfully. Jul 7 00:16:09.749902 systemd[1]: run-containerd-runc-k8s.io-c72930c2688fe81588530ccb779b27f999f0b8d7fc91d878f7b7ef521e4d54fb-runc.DR6C0d.mount: Deactivated successfully. Jul 7 00:17:09.752363 systemd[1]: run-containerd-runc-k8s.io-c72930c2688fe81588530ccb779b27f999f0b8d7fc91d878f7b7ef521e4d54fb-runc.OkcuJ7.mount: Deactivated successfully. Jul 7 00:17:53.166657 systemd[1]: run-containerd-runc-k8s.io-cb000f2efbe0d00108c345827d979324d159b84299a0f86c99f881c1b5573b78-runc.oimf5F.mount: Deactivated successfully. Jul 7 00:17:58.483452 systemd[1]: run-containerd-runc-k8s.io-cb000f2efbe0d00108c345827d979324d159b84299a0f86c99f881c1b5573b78-runc.EGkiwX.mount: Deactivated successfully. Jul 7 00:18:02.043725 systemd[1]: run-containerd-runc-k8s.io-bddcb7dc119b01f133257637249e1b967923a2b7916e8ef7a5a45a27a4901068-runc.UvKuBb.mount: Deactivated successfully. 
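The teardown transcript above repeats one pattern for every stale sandbox: StopPodSandbox, a warning that CNI_CONTAINERID does not match the WorkloadEndpoint's recorded ContainerID (so the WEP is kept), netns cleanup, then an IP release under the host-wide IPAM lock that warns "Asked to release address but it doesn't exist. Ignoring" and still returns success. The net effect is that repeated or late RemovePodSandbox calls are idempotent and never disturb an endpoint a newer sandbox owns. A minimal Go sketch of that shape; all names, IDs, and structure here are hypothetical illustrations, not Calico's actual code:

    package main

    import (
    	"fmt"
    	"sync"
    )

    type workloadEndpoint struct {
    	name        string
    	containerID string // sandbox that currently owns the endpoint
    }

    type ipam struct {
    	mu       sync.Mutex        // the "host-wide IPAM lock" in the log
    	byHandle map[string]string // handleID -> allocated address
    }

    // release mirrors "Asked to release address but it doesn't exist.
    // Ignoring": freeing an already-freed address warns and succeeds,
    // which keeps repeated teardown calls idempotent.
    func (s *ipam) release(handleID string) {
    	s.mu.Lock()         // "About to acquire host-wide IPAM lock."
    	defer s.mu.Unlock() // "Released host-wide IPAM lock."
    	if _, ok := s.byHandle[handleID]; !ok {
    		fmt.Printf("WARNING: no allocation for %s, ignoring\n", handleID)
    		return
    	}
    	delete(s.byHandle, handleID)
    }

    // teardown mirrors "CNI_CONTAINERID does not match WorkloadEndpoint
    // ContainerID, don't delete WEP": a stale sandbox may release its own
    // network state, but must not delete an endpoint it no longer owns.
    func teardown(wep *workloadEndpoint, s *ipam, cniContainerID string) {
    	s.release("k8s-pod-network." + cniContainerID)
    	if wep != nil && wep.containerID != cniContainerID {
    		fmt.Println("don't delete WEP; teardown processing complete")
    	}
    }

    func main() {
    	// Hypothetical truncated IDs, standing in for the 64-char IDs above.
    	wep := &workloadEndpoint{name: "calico-apiserver-...-eth0", containerID: "109a7885a7c0..."}
    	s := &ipam{byHandle: map[string]string{}}
    	teardown(wep, s, "0c6b1acb02a3...") // old sandbox: both warnings fire, call still succeeds
    }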
Jul 7 00:18:52.716493 systemd[1]: Started sshd@7-65.21.182.235:22-147.75.109.163:57034.service - OpenSSH per-connection server daemon (147.75.109.163:57034). Jul 7 00:18:53.756368 sshd[6489]: Accepted publickey for core from 147.75.109.163 port 57034 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:18:53.758918 sshd[6489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:18:53.766583 systemd-logind[1476]: New session 8 of user core. Jul 7 00:18:53.769312 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 00:18:54.901689 sshd[6489]: pam_unix(sshd:session): session closed for user core Jul 7 00:18:54.905921 systemd[1]: sshd@7-65.21.182.235:22-147.75.109.163:57034.service: Deactivated successfully. Jul 7 00:18:54.909127 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 00:18:54.911199 systemd-logind[1476]: Session 8 logged out. Waiting for processes to exit. Jul 7 00:18:54.912492 systemd-logind[1476]: Removed session 8. Jul 7 00:19:00.078819 systemd[1]: Started sshd@8-65.21.182.235:22-147.75.109.163:41278.service - OpenSSH per-connection server daemon (147.75.109.163:41278). Jul 7 00:19:01.133080 sshd[6546]: Accepted publickey for core from 147.75.109.163 port 41278 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:19:01.136030 sshd[6546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:01.140559 systemd-logind[1476]: New session 9 of user core. Jul 7 00:19:01.147292 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 00:19:02.041474 systemd[1]: run-containerd-runc-k8s.io-bddcb7dc119b01f133257637249e1b967923a2b7916e8ef7a5a45a27a4901068-runc.0ik4wF.mount: Deactivated successfully. Jul 7 00:19:02.079476 sshd[6546]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:02.086236 systemd[1]: sshd@8-65.21.182.235:22-147.75.109.163:41278.service: Deactivated successfully. Jul 7 00:19:02.088089 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 00:19:02.089509 systemd-logind[1476]: Session 9 logged out. Waiting for processes to exit. Jul 7 00:19:02.092512 systemd-logind[1476]: Removed session 9. Jul 7 00:19:02.248602 systemd[1]: Started sshd@9-65.21.182.235:22-147.75.109.163:41282.service - OpenSSH per-connection server daemon (147.75.109.163:41282). Jul 7 00:19:03.260111 sshd[6581]: Accepted publickey for core from 147.75.109.163 port 41282 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:19:03.261849 sshd[6581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:03.267126 systemd-logind[1476]: New session 10 of user core. Jul 7 00:19:03.272275 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 00:19:04.057897 sshd[6581]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:04.064115 systemd[1]: sshd@9-65.21.182.235:22-147.75.109.163:41282.service: Deactivated successfully. Jul 7 00:19:04.064770 systemd-logind[1476]: Session 10 logged out. Waiting for processes to exit. Jul 7 00:19:04.069713 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 00:19:04.071835 systemd-logind[1476]: Removed session 10. Jul 7 00:19:04.238554 systemd[1]: Started sshd@10-65.21.182.235:22-147.75.109.163:41294.service - OpenSSH per-connection server daemon (147.75.109.163:41294). 
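Every SSH connection in this stretch follows the same arc: systemd starts a per-connection sshd@N-LOCAL:22-REMOTE:PORT.service, sshd accepts the public key, pam_unix opens the session, systemd-logind places it in a session-N.scope, and on logout the scope and the per-connection unit deactivate in turn (the service started in the last entry above is picked up again right after this sketch). As a rough illustration only, here is a self-contained Go sketch that pairs the pam_unix opened/closed lines from a capture like this one and reports session lifetimes; it assumes one journal entry per line, as journalctl would emit them rather than the flattened wrapping of this capture, and assumes the year, since syslog-style timestamps omit it:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    	"time"
    )

    // Matches lines like:
    //   Jul 7 00:18:53.758918 sshd[6489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
    var re = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed)`)

    func main() {
    	opened := map[string]time.Time{} // sshd PID -> open time
    	sc := bufio.NewScanner(os.Stdin)
    	for sc.Scan() {
    		m := re.FindStringSubmatch(sc.Text())
    		if m == nil {
    			continue
    		}
    		// Syslog timestamps carry no year; 2025 is assumed here.
    		ts, err := time.Parse("Jan 2 15:04:05.000000 2006", m[1]+" 2025")
    		if err != nil {
    			continue
    		}
    		switch m[3] {
    		case "opened":
    			opened[m[2]] = ts
    		case "closed":
    			if start, ok := opened[m[2]]; ok {
    				fmt.Printf("sshd[%s]: session lasted %s\n", m[2], ts.Sub(start).Round(time.Millisecond))
    				delete(opened, m[2])
    			}
    		}
    	}
    }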
Jul 7 00:19:05.247288 sshd[6594]: Accepted publickey for core from 147.75.109.163 port 41294 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:19:05.250347 sshd[6594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:05.258710 systemd-logind[1476]: New session 11 of user core. Jul 7 00:19:05.263565 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 00:19:05.998501 sshd[6594]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:06.001752 systemd-logind[1476]: Session 11 logged out. Waiting for processes to exit. Jul 7 00:19:06.002427 systemd[1]: sshd@10-65.21.182.235:22-147.75.109.163:41294.service: Deactivated successfully. Jul 7 00:19:06.004337 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 00:19:06.005887 systemd-logind[1476]: Removed session 11. Jul 7 00:19:06.486802 update_engine[1477]: I20250707 00:19:06.486542 1477 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 7 00:19:06.486802 update_engine[1477]: I20250707 00:19:06.486636 1477 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 7 00:19:06.489250 update_engine[1477]: I20250707 00:19:06.489194 1477 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 7 00:19:06.490728 update_engine[1477]: I20250707 00:19:06.490654 1477 omaha_request_params.cc:62] Current group set to lts Jul 7 00:19:06.490915 update_engine[1477]: I20250707 00:19:06.490871 1477 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 7 00:19:06.492190 update_engine[1477]: I20250707 00:19:06.491300 1477 update_attempter.cc:643] Scheduling an action processor start. Jul 7 00:19:06.492190 update_engine[1477]: I20250707 00:19:06.491356 1477 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 7 00:19:06.492190 update_engine[1477]: I20250707 00:19:06.491411 1477 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 7 00:19:06.492190 update_engine[1477]: I20250707 00:19:06.491497 1477 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 7 00:19:06.492190 update_engine[1477]: I20250707 00:19:06.491511 1477 omaha_request_action.cc:272] Request: Jul 7 00:19:06.492190 update_engine[1477]: [Omaha request XML not preserved in this capture] Jul 7 00:19:06.492190 update_engine[1477]: I20250707 00:19:06.491520 1477 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 00:19:06.509044 update_engine[1477]: I20250707 00:19:06.508616 1477 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 00:19:06.509044 update_engine[1477]: I20250707 00:19:06.509001 1477 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 00:19:06.511086 update_engine[1477]: E20250707 00:19:06.510182 1477 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 00:19:06.511086 update_engine[1477]: I20250707 00:19:06.510255 1477 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 7 00:19:06.512357 locksmithd[1508]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 7 00:19:09.787933 systemd[1]: run-containerd-runc-k8s.io-c72930c2688fe81588530ccb779b27f999f0b8d7fc91d878f7b7ef521e4d54fb-runc.x5N0Zl.mount: Deactivated successfully. Jul 7 00:19:11.172945 systemd[1]: Started sshd@11-65.21.182.235:22-147.75.109.163:49622.service - OpenSSH per-connection server daemon (147.75.109.163:49622). Jul 7 00:19:12.217418 sshd[6634]: Accepted publickey for core from 147.75.109.163 port 49622 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:19:12.219510 sshd[6634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:12.224695 systemd-logind[1476]: New session 12 of user core. Jul 7 00:19:12.229324 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 00:19:13.054136 sshd[6634]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:13.057607 systemd[1]: sshd@11-65.21.182.235:22-147.75.109.163:49622.service: Deactivated successfully. Jul 7 00:19:13.059864 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 00:19:13.061715 systemd-logind[1476]: Session 12 logged out. Waiting for processes to exit. Jul 7 00:19:13.062993 systemd-logind[1476]: Removed session 12. Jul 7 00:19:13.229036 systemd[1]: Started sshd@12-65.21.182.235:22-147.75.109.163:49624.service - OpenSSH per-connection server daemon (147.75.109.163:49624). Jul 7 00:19:14.271565 sshd[6647]: Accepted publickey for core from 147.75.109.163 port 49624 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:19:14.273116 sshd[6647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:14.277694 systemd-logind[1476]: New session 13 of user core. Jul 7 00:19:14.282278 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 00:19:15.205263 sshd[6647]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:15.210326 systemd[1]: sshd@12-65.21.182.235:22-147.75.109.163:49624.service: Deactivated successfully. Jul 7 00:19:15.212010 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 00:19:15.213360 systemd-logind[1476]: Session 13 logged out. Waiting for processes to exit. Jul 7 00:19:15.215025 systemd-logind[1476]: Removed session 13. Jul 7 00:19:15.382858 systemd[1]: Started sshd@13-65.21.182.235:22-147.75.109.163:49638.service - OpenSSH per-connection server daemon (147.75.109.163:49638). Jul 7 00:19:16.439533 update_engine[1477]: I20250707 00:19:16.439337 1477 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 00:19:16.441618 update_engine[1477]: I20250707 00:19:16.439988 1477 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 00:19:16.445697 sshd[6679]: Accepted publickey for core from 147.75.109.163 port 49638 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:19:16.449322 update_engine[1477]: I20250707 00:19:16.445681 1477 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 7 00:19:16.449322 update_engine[1477]: E20250707 00:19:16.448957 1477 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 00:19:16.449322 update_engine[1477]: I20250707 00:19:16.449054 1477 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 7 00:19:16.451387 sshd[6679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:16.458134 systemd-logind[1476]: New session 14 of user core. Jul 7 00:19:16.466332 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 00:19:18.248510 sshd[6679]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:18.255202 systemd[1]: sshd@13-65.21.182.235:22-147.75.109.163:49638.service: Deactivated successfully. Jul 7 00:19:18.257178 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 00:19:18.258230 systemd-logind[1476]: Session 14 logged out. Waiting for processes to exit. Jul 7 00:19:18.259668 systemd-logind[1476]: Removed session 14. Jul 7 00:19:18.416984 systemd[1]: Started sshd@14-65.21.182.235:22-147.75.109.163:58058.service - OpenSSH per-connection server daemon (147.75.109.163:58058). Jul 7 00:19:19.472881 sshd[6697]: Accepted publickey for core from 147.75.109.163 port 58058 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:19:19.474321 sshd[6697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:19.479763 systemd-logind[1476]: New session 15 of user core. Jul 7 00:19:19.485289 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 00:19:20.717674 sshd[6697]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:20.722434 systemd[1]: sshd@14-65.21.182.235:22-147.75.109.163:58058.service: Deactivated successfully. Jul 7 00:19:20.725907 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 00:19:20.728208 systemd-logind[1476]: Session 15 logged out. Waiting for processes to exit. Jul 7 00:19:20.729914 systemd-logind[1476]: Removed session 15. Jul 7 00:19:20.906295 systemd[1]: Started sshd@15-65.21.182.235:22-147.75.109.163:58072.service - OpenSSH per-connection server daemon (147.75.109.163:58072). Jul 7 00:19:21.944367 sshd[6728]: Accepted publickey for core from 147.75.109.163 port 58072 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 7 00:19:21.946728 sshd[6728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:21.954251 systemd-logind[1476]: New session 16 of user core. Jul 7 00:19:21.959402 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 00:19:22.719700 sshd[6728]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:22.725928 systemd[1]: sshd@15-65.21.182.235:22-147.75.109.163:58072.service: Deactivated successfully. Jul 7 00:19:22.729325 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 00:19:22.732512 systemd-logind[1476]: Session 16 logged out. Waiting for processes to exit. Jul 7 00:19:22.736396 systemd-logind[1476]: Removed session 16. Jul 7 00:19:26.442312 update_engine[1477]: I20250707 00:19:26.442185 1477 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 00:19:26.442692 update_engine[1477]: I20250707 00:19:26.442445 1477 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 00:19:26.442692 update_engine[1477]: I20250707 00:19:26.442667 1477 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 7 00:19:26.443415 update_engine[1477]: E20250707 00:19:26.443296  1477 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 00:19:26.443415 update_engine[1477]: I20250707 00:19:26.443352  1477 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jul 7 00:19:27.909270 systemd[1]: Started sshd@16-65.21.182.235:22-147.75.109.163:57276.service - OpenSSH per-connection server daemon (147.75.109.163:57276).
Jul 7 00:19:28.971507 sshd[6764]: Accepted publickey for core from 147.75.109.163 port 57276 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 7 00:19:28.972010 sshd[6764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:19:28.976704 systemd-logind[1476]: New session 17 of user core.
Jul 7 00:19:28.984329 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 7 00:19:30.004905 sshd[6764]: pam_unix(sshd:session): session closed for user core
Jul 7 00:19:30.011691 systemd-logind[1476]: Session 17 logged out. Waiting for processes to exit.
Jul 7 00:19:30.012463 systemd[1]: sshd@16-65.21.182.235:22-147.75.109.163:57276.service: Deactivated successfully.
Jul 7 00:19:30.014407 systemd[1]: session-17.scope: Deactivated successfully.
Jul 7 00:19:30.015249 systemd-logind[1476]: Removed session 17.
Jul 7 00:19:35.192386 systemd[1]: Started sshd@17-65.21.182.235:22-147.75.109.163:57290.service - OpenSSH per-connection server daemon (147.75.109.163:57290).
Jul 7 00:19:36.238951 sshd[6807]: Accepted publickey for core from 147.75.109.163 port 57290 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 7 00:19:36.241541 sshd[6807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:19:36.246841 systemd-logind[1476]: New session 18 of user core.
Jul 7 00:19:36.253315 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 7 00:19:36.439029 update_engine[1477]: I20250707 00:19:36.438931  1477 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 00:19:36.439396 update_engine[1477]: I20250707 00:19:36.439297  1477 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 00:19:36.439613 update_engine[1477]: I20250707 00:19:36.439581  1477 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 00:19:36.440236 update_engine[1477]: E20250707 00:19:36.440203  1477 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 00:19:36.440292 update_engine[1477]: I20250707 00:19:36.440253  1477 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 7 00:19:36.442236 update_engine[1477]: I20250707 00:19:36.440263  1477 omaha_request_action.cc:617] Omaha request response:
Jul 7 00:19:36.442610 update_engine[1477]: E20250707 00:19:36.442574  1477 omaha_request_action.cc:636] Omaha request network transfer failed.
Jul 7 00:19:36.450995 update_engine[1477]: I20250707 00:19:36.449210  1477 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jul 7 00:19:36.450995 update_engine[1477]: I20250707 00:19:36.449229  1477 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 7 00:19:36.450995 update_engine[1477]: I20250707 00:19:36.449236  1477 update_attempter.cc:306] Processing Done.
Jul 7 00:19:36.450995 update_engine[1477]: E20250707 00:19:36.449255  1477 update_attempter.cc:619] Update failed.
Jul 7 00:19:36.450995 update_engine[1477]: I20250707 00:19:36.449260  1477 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jul 7 00:19:36.450995 update_engine[1477]: I20250707 00:19:36.449265  1477 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jul 7 00:19:36.450995 update_engine[1477]: I20250707 00:19:36.449270  1477 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jul 7 00:19:36.450995 update_engine[1477]: I20250707 00:19:36.449344  1477 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 7 00:19:36.450995 update_engine[1477]: I20250707 00:19:36.449365  1477 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 7 00:19:36.450995 update_engine[1477]: I20250707 00:19:36.449371  1477 omaha_request_action.cc:272] Request:
Jul 7 00:19:36.450995 update_engine[1477]:
Jul 7 00:19:36.450995 update_engine[1477]:
Jul 7 00:19:36.450995 update_engine[1477]:
Jul 7 00:19:36.450995 update_engine[1477]:
Jul 7 00:19:36.450995 update_engine[1477]:
Jul 7 00:19:36.450995 update_engine[1477]:
Jul 7 00:19:36.450995 update_engine[1477]: I20250707 00:19:36.449376  1477 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 00:19:36.450995 update_engine[1477]: I20250707 00:19:36.449493  1477 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 00:19:36.450995 update_engine[1477]: I20250707 00:19:36.449645  1477 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 00:19:36.455672 update_engine[1477]: E20250707 00:19:36.451474  1477 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 00:19:36.455672 update_engine[1477]: I20250707 00:19:36.451525  1477 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 7 00:19:36.455672 update_engine[1477]: I20250707 00:19:36.451534  1477 omaha_request_action.cc:617] Omaha request response:
Jul 7 00:19:36.455672 update_engine[1477]: I20250707 00:19:36.451540  1477 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 7 00:19:36.455672 update_engine[1477]: I20250707 00:19:36.451543  1477 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 7 00:19:36.455672 update_engine[1477]: I20250707 00:19:36.451548  1477 update_attempter.cc:306] Processing Done.
Jul 7 00:19:36.455672 update_engine[1477]: I20250707 00:19:36.451553  1477 update_attempter.cc:310] Error event sent.
Jul 7 00:19:36.455672 update_engine[1477]: I20250707 00:19:36.451566  1477 update_check_scheduler.cc:74] Next update check in 47m31s
Jul 7 00:19:36.460747 locksmithd[1508]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jul 7 00:19:36.460747 locksmithd[1508]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jul 7 00:19:37.212423 sshd[6807]: pam_unix(sshd:session): session closed for user core
Jul 7 00:19:37.214885 systemd[1]: sshd@17-65.21.182.235:22-147.75.109.163:57290.service: Deactivated successfully.
Jul 7 00:19:37.216224 systemd[1]: session-18.scope: Deactivated successfully.
Jul 7 00:19:37.217561 systemd-logind[1476]: Session 18 logged out. Waiting for processes to exit.
Jul 7 00:19:37.218534 systemd-logind[1476]: Removed session 18.
Jul 7 00:19:56.823518 systemd[1]: cri-containerd-04f5e6477f39559542ed5a1f3e0ec00e8ef671017f566d48049e2713b0d3e343.scope: Deactivated successfully.
Jul 7 00:19:56.824477 systemd[1]: cri-containerd-04f5e6477f39559542ed5a1f3e0ec00e8ef671017f566d48049e2713b0d3e343.scope: Consumed 2.923s CPU time, 18.7M memory peak, 0B memory swap peak.
Jul 7 00:19:56.871020 kubelet[2712]: E0707 00:19:56.864394    2712 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:52234->10.0.0.2:2379: read: connection timed out"
Jul 7 00:19:56.987308 systemd[1]: cri-containerd-0f28d8019a311914bedc047989794d17521c09162fa0be6a56573c655eb86860.scope: Deactivated successfully.
Jul 7 00:19:56.987485 systemd[1]: cri-containerd-0f28d8019a311914bedc047989794d17521c09162fa0be6a56573c655eb86860.scope: Consumed 5.185s CPU time, 21.7M memory peak, 0B memory swap peak.
Jul 7 00:19:57.033710 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04f5e6477f39559542ed5a1f3e0ec00e8ef671017f566d48049e2713b0d3e343-rootfs.mount: Deactivated successfully.
Jul 7 00:19:57.038844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f28d8019a311914bedc047989794d17521c09162fa0be6a56573c655eb86860-rootfs.mount: Deactivated successfully.
Jul 7 00:19:57.103471 containerd[1507]: time="2025-07-07T00:19:57.062162117Z" level=info msg="shim disconnected" id=0f28d8019a311914bedc047989794d17521c09162fa0be6a56573c655eb86860 namespace=k8s.io
Jul 7 00:19:57.103471 containerd[1507]: time="2025-07-07T00:19:57.103346208Z" level=warning msg="cleaning up after shim disconnected" id=0f28d8019a311914bedc047989794d17521c09162fa0be6a56573c655eb86860 namespace=k8s.io
Jul 7 00:19:57.103471 containerd[1507]: time="2025-07-07T00:19:57.103361466Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:19:57.105691 containerd[1507]: time="2025-07-07T00:19:57.062903349Z" level=info msg="shim disconnected" id=04f5e6477f39559542ed5a1f3e0ec00e8ef671017f566d48049e2713b0d3e343 namespace=k8s.io
Jul 7 00:19:57.105691 containerd[1507]: time="2025-07-07T00:19:57.104013507Z" level=warning msg="cleaning up after shim disconnected" id=04f5e6477f39559542ed5a1f3e0ec00e8ef671017f566d48049e2713b0d3e343 namespace=k8s.io
Jul 7 00:19:57.105691 containerd[1507]: time="2025-07-07T00:19:57.104021622Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:19:57.453192 kubelet[2712]: I0707 00:19:57.453048    2712 scope.go:117] "RemoveContainer" containerID="0f28d8019a311914bedc047989794d17521c09162fa0be6a56573c655eb86860"
Jul 7 00:19:57.466258 kubelet[2712]: I0707 00:19:57.466079    2712 scope.go:117] "RemoveContainer" containerID="04f5e6477f39559542ed5a1f3e0ec00e8ef671017f566d48049e2713b0d3e343"
Jul 7 00:19:57.513305 containerd[1507]: time="2025-07-07T00:19:57.513258978Z" level=info msg="CreateContainer within sandbox \"9a6c69518be24bba30563b918c1d564e0e1698c445010b5e10bac9818586af6e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 7 00:19:57.514933 containerd[1507]: time="2025-07-07T00:19:57.514695431Z" level=info msg="CreateContainer within sandbox \"553c03aa015ce238d520f4e499c02cbc002ad6897d415c71cf6931b82d036739\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 7 00:19:57.607458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2578662319.mount: Deactivated successfully.
Jul 7 00:19:57.627291 containerd[1507]: time="2025-07-07T00:19:57.627109506Z" level=info msg="CreateContainer within sandbox \"553c03aa015ce238d520f4e499c02cbc002ad6897d415c71cf6931b82d036739\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"fa5b3869e75de87e10307c7d00fd657aaeabaf00b70ba04ea324e8b1d02c6a78\""
Jul 7 00:19:57.628031 containerd[1507]: time="2025-07-07T00:19:57.627846922Z" level=info msg="StartContainer for \"fa5b3869e75de87e10307c7d00fd657aaeabaf00b70ba04ea324e8b1d02c6a78\""
Jul 7 00:19:57.628031 containerd[1507]: time="2025-07-07T00:19:57.627912701Z" level=info msg="CreateContainer within sandbox \"9a6c69518be24bba30563b918c1d564e0e1698c445010b5e10bac9818586af6e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"daef01cbb358fac542d5eb35d0e969eb59dca785ef76cb18787f6c61d7dc081f\""
Jul 7 00:19:57.629353 containerd[1507]: time="2025-07-07T00:19:57.628888459Z" level=info msg="StartContainer for \"daef01cbb358fac542d5eb35d0e969eb59dca785ef76cb18787f6c61d7dc081f\""
Jul 7 00:19:57.677312 systemd[1]: Started cri-containerd-fa5b3869e75de87e10307c7d00fd657aaeabaf00b70ba04ea324e8b1d02c6a78.scope - libcontainer container fa5b3869e75de87e10307c7d00fd657aaeabaf00b70ba04ea324e8b1d02c6a78.
Jul 7 00:19:57.681988 systemd[1]: Started cri-containerd-daef01cbb358fac542d5eb35d0e969eb59dca785ef76cb18787f6c61d7dc081f.scope - libcontainer container daef01cbb358fac542d5eb35d0e969eb59dca785ef76cb18787f6c61d7dc081f.
Jul 7 00:19:57.735601 systemd[1]: cri-containerd-99a438ec58114eaf0ff02ba0899f0edf1b723c1693b794625f40bc10b6749e10.scope: Deactivated successfully.
Jul 7 00:19:57.736836 containerd[1507]: time="2025-07-07T00:19:57.736806449Z" level=info msg="StartContainer for \"daef01cbb358fac542d5eb35d0e969eb59dca785ef76cb18787f6c61d7dc081f\" returns successfully"
Jul 7 00:19:57.737009 systemd[1]: cri-containerd-99a438ec58114eaf0ff02ba0899f0edf1b723c1693b794625f40bc10b6749e10.scope: Consumed 14.123s CPU time.
Jul 7 00:19:57.748770 containerd[1507]: time="2025-07-07T00:19:57.748594455Z" level=info msg="StartContainer for \"fa5b3869e75de87e10307c7d00fd657aaeabaf00b70ba04ea324e8b1d02c6a78\" returns successfully"
Jul 7 00:19:57.775232 containerd[1507]: time="2025-07-07T00:19:57.775176312Z" level=info msg="shim disconnected" id=99a438ec58114eaf0ff02ba0899f0edf1b723c1693b794625f40bc10b6749e10 namespace=k8s.io
Jul 7 00:19:57.775444 containerd[1507]: time="2025-07-07T00:19:57.775256788Z" level=warning msg="cleaning up after shim disconnected" id=99a438ec58114eaf0ff02ba0899f0edf1b723c1693b794625f40bc10b6749e10 namespace=k8s.io
Jul 7 00:19:57.775444 containerd[1507]: time="2025-07-07T00:19:57.775267507Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:19:58.040170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2643001354.mount: Deactivated successfully.
Jul 7 00:19:58.040521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99a438ec58114eaf0ff02ba0899f0edf1b723c1693b794625f40bc10b6749e10-rootfs.mount: Deactivated successfully.
Jul 7 00:19:58.480134 kubelet[2712]: I0707 00:19:58.479941    2712 scope.go:117] "RemoveContainer" containerID="99a438ec58114eaf0ff02ba0899f0edf1b723c1693b794625f40bc10b6749e10"
Jul 7 00:19:58.529995 containerd[1507]: time="2025-07-07T00:19:58.529398483Z" level=info msg="CreateContainer within sandbox \"72034c9e1e75c55c5d01556db98e35295b52c9f2d438bce5dba4aa892b54ff34\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 7 00:19:58.552671 containerd[1507]: time="2025-07-07T00:19:58.552637206Z" level=info msg="CreateContainer within sandbox \"72034c9e1e75c55c5d01556db98e35295b52c9f2d438bce5dba4aa892b54ff34\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"038e706acbeeba457c4cfbabcdad0d6e37c56cd095255cdc1193864987662d6b\""
Jul 7 00:19:58.555669 containerd[1507]: time="2025-07-07T00:19:58.555635120Z" level=info msg="StartContainer for \"038e706acbeeba457c4cfbabcdad0d6e37c56cd095255cdc1193864987662d6b\""
Jul 7 00:19:58.557095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount733970982.mount: Deactivated successfully.
Jul 7 00:19:58.614395 systemd[1]: Started cri-containerd-038e706acbeeba457c4cfbabcdad0d6e37c56cd095255cdc1193864987662d6b.scope - libcontainer container 038e706acbeeba457c4cfbabcdad0d6e37c56cd095255cdc1193864987662d6b.
Jul 7 00:19:58.646308 containerd[1507]: time="2025-07-07T00:19:58.646239482Z" level=info msg="StartContainer for \"038e706acbeeba457c4cfbabcdad0d6e37c56cd095255cdc1193864987662d6b\" returns successfully"
Jul 7 00:20:02.210798 kubelet[2712]: E0707 00:20:02.184531    2712 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:52054->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-4-f-11cbdd5b1a.184fd01a5bb40e5e kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-4-f-11cbdd5b1a,UID:ac5a9b223cda65d790a781a52e7c5776,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-4-f-11cbdd5b1a,},FirstTimestamp:2025-07-07 00:19:51.696666206 +0000 UTC m=+353.919979294,LastTimestamp:2025-07-07 00:19:51.696666206 +0000 UTC m=+353.919979294,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-4-f-11cbdd5b1a,}"