May 13 04:46:32.033887 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon May 12 22:46:21 -00 2025
May 13 04:46:32.033911 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592
May 13 04:46:32.033921 kernel: BIOS-provided physical RAM map:
May 13 04:46:32.033928 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 13 04:46:32.033935 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 13 04:46:32.033944 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 13 04:46:32.033952 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
May 13 04:46:32.033960 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
May 13 04:46:32.033967 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 13 04:46:32.033974 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 13 04:46:32.033982 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
May 13 04:46:32.033989 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 13 04:46:32.033996 kernel: NX (Execute Disable) protection: active
May 13 04:46:32.034003 kernel: APIC: Static calls initialized
May 13 04:46:32.034014 kernel: SMBIOS 3.0.0 present.
May 13 04:46:32.034022 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
May 13 04:46:32.034029 kernel: Hypervisor detected: KVM
May 13 04:46:32.034037 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 13 04:46:32.034044 kernel: kvm-clock: using sched offset of 3636478543 cycles
May 13 04:46:32.034054 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 13 04:46:32.034062 kernel: tsc: Detected 1996.249 MHz processor
May 13 04:46:32.034070 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 13 04:46:32.034078 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 13 04:46:32.034086 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
May 13 04:46:32.034094 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 13 04:46:32.034102 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 13 04:46:32.034110 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
May 13 04:46:32.034117 kernel: ACPI: Early table checksum verification disabled
May 13 04:46:32.034127 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
May 13 04:46:32.034152 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 04:46:32.034160 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 04:46:32.034168 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 04:46:32.034175 kernel: ACPI: FACS 0x00000000BFFE0000 000040
May 13 04:46:32.034183 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 04:46:32.034191 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 04:46:32.034199 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
May 13 04:46:32.034206 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
May 13 04:46:32.034216 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
May 13 04:46:32.034224 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
May 13 04:46:32.034232 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
May 13 04:46:32.034243 kernel: No NUMA configuration found
May 13 04:46:32.034251 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
May 13 04:46:32.034259 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
May 13 04:46:32.034269 kernel: Zone ranges:
May 13 04:46:32.034277 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 13 04:46:32.034285 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 13 04:46:32.034293 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
May 13 04:46:32.034301 kernel: Movable zone start for each node
May 13 04:46:32.034309 kernel: Early memory node ranges
May 13 04:46:32.034317 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 13 04:46:32.034325 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
May 13 04:46:32.034333 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
May 13 04:46:32.034343 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
May 13 04:46:32.034351 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 04:46:32.034359 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 13 04:46:32.034367 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
May 13 04:46:32.034376 kernel: ACPI: PM-Timer IO Port: 0x608
May 13 04:46:32.034384 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 13 04:46:32.034392 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 13 04:46:32.034400 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 13 04:46:32.034408 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 13 04:46:32.034418 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 13 04:46:32.034426 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 13 04:46:32.034434 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 13 04:46:32.034443 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 13 04:46:32.034451 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 13 04:46:32.034459 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 13 04:46:32.034467 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
May 13 04:46:32.034475 kernel: Booting paravirtualized kernel on KVM
May 13 04:46:32.034483 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 13 04:46:32.034493 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 13 04:46:32.034502 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
May 13 04:46:32.034510 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
May 13 04:46:32.034518 kernel: pcpu-alloc: [0] 0 1
May 13 04:46:32.034526 kernel: kvm-guest: PV spinlocks disabled, no host support
May 13 04:46:32.034535 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592
May 13 04:46:32.034544 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 04:46:32.034554 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 04:46:32.034562 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 04:46:32.034570 kernel: Fallback order for Node 0: 0
May 13 04:46:32.034579 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
May 13 04:46:32.034587 kernel: Policy zone: Normal
May 13 04:46:32.034595 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 04:46:32.034603 kernel: software IO TLB: area num 2.
May 13 04:46:32.034611 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 227308K reserved, 0K cma-reserved)
May 13 04:46:32.034620 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 13 04:46:32.034629 kernel: ftrace: allocating 37944 entries in 149 pages
May 13 04:46:32.034638 kernel: ftrace: allocated 149 pages with 4 groups
May 13 04:46:32.034646 kernel: Dynamic Preempt: voluntary
May 13 04:46:32.034654 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 04:46:32.034663 kernel: rcu: RCU event tracing is enabled.
May 13 04:46:32.034671 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 13 04:46:32.034679 kernel: Trampoline variant of Tasks RCU enabled.
May 13 04:46:32.034688 kernel: Rude variant of Tasks RCU enabled.
May 13 04:46:32.034696 kernel: Tracing variant of Tasks RCU enabled.
May 13 04:46:32.034704 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 04:46:32.034727 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 13 04:46:32.034736 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 13 04:46:32.034755 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 04:46:32.034763 kernel: Console: colour VGA+ 80x25
May 13 04:46:32.034772 kernel: printk: console [tty0] enabled
May 13 04:46:32.034780 kernel: printk: console [ttyS0] enabled
May 13 04:46:32.034788 kernel: ACPI: Core revision 20230628
May 13 04:46:32.034796 kernel: APIC: Switch to symmetric I/O mode setup
May 13 04:46:32.034804 kernel: x2apic enabled
May 13 04:46:32.034815 kernel: APIC: Switched APIC routing to: physical x2apic
May 13 04:46:32.034823 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 13 04:46:32.034831 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 13 04:46:32.034839 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
May 13 04:46:32.034848 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 13 04:46:32.034856 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 13 04:46:32.034864 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 13 04:46:32.034873 kernel: Spectre V2 : Mitigation: Retpolines
May 13 04:46:32.034881 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 13 04:46:32.034891 kernel: Speculative Store Bypass: Vulnerable
May 13 04:46:32.034899 kernel: x86/fpu: x87 FPU will use FXSAVE
May 13 04:46:32.034907 kernel: Freeing SMP alternatives memory: 32K
May 13 04:46:32.034915 kernel: pid_max: default: 32768 minimum: 301
May 13 04:46:32.034929 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 04:46:32.034939 kernel: landlock: Up and running.
May 13 04:46:32.034947 kernel: SELinux: Initializing.
May 13 04:46:32.034956 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 04:46:32.034965 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 04:46:32.034973 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
May 13 04:46:32.034982 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 13 04:46:32.034993 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 13 04:46:32.035002 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 13 04:46:32.035010 kernel: Performance Events: AMD PMU driver.
May 13 04:46:32.035019 kernel: ... version: 0
May 13 04:46:32.035027 kernel: ... bit width: 48
May 13 04:46:32.035037 kernel: ... generic registers: 4
May 13 04:46:32.035046 kernel: ... value mask: 0000ffffffffffff
May 13 04:46:32.035054 kernel: ... max period: 00007fffffffffff
May 13 04:46:32.035063 kernel: ... fixed-purpose events: 0
May 13 04:46:32.035071 kernel: ... event mask: 000000000000000f
May 13 04:46:32.035080 kernel: signal: max sigframe size: 1440
May 13 04:46:32.035088 kernel: rcu: Hierarchical SRCU implementation.
May 13 04:46:32.035097 kernel: rcu: Max phase no-delay instances is 400.
May 13 04:46:32.035106 kernel: smp: Bringing up secondary CPUs ...
May 13 04:46:32.035116 kernel: smpboot: x86: Booting SMP configuration:
May 13 04:46:32.035124 kernel: .... node #0, CPUs: #1
May 13 04:46:32.035133 kernel: smp: Brought up 1 node, 2 CPUs
May 13 04:46:32.035141 kernel: smpboot: Max logical packages: 2
May 13 04:46:32.035150 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
May 13 04:46:32.035158 kernel: devtmpfs: initialized
May 13 04:46:32.035167 kernel: x86/mm: Memory block size: 128MB
May 13 04:46:32.035176 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 04:46:32.035185 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 13 04:46:32.035193 kernel: pinctrl core: initialized pinctrl subsystem
May 13 04:46:32.035204 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 04:46:32.035212 kernel: audit: initializing netlink subsys (disabled)
May 13 04:46:32.035221 kernel: audit: type=2000 audit(1747111591.026:1): state=initialized audit_enabled=0 res=1
May 13 04:46:32.035230 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 04:46:32.035239 kernel: thermal_sys: Registered thermal governor 'user_space'
May 13 04:46:32.035247 kernel: cpuidle: using governor menu
May 13 04:46:32.035256 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 04:46:32.035265 kernel: dca service started, version 1.12.1
May 13 04:46:32.035273 kernel: PCI: Using configuration type 1 for base access
May 13 04:46:32.035284 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 13 04:46:32.035292 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 04:46:32.035301 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 13 04:46:32.035310 kernel: ACPI: Added _OSI(Module Device)
May 13 04:46:32.035318 kernel: ACPI: Added _OSI(Processor Device)
May 13 04:46:32.035327 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 04:46:32.035336 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 04:46:32.035344 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 04:46:32.035353 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 13 04:46:32.035363 kernel: ACPI: Interpreter enabled
May 13 04:46:32.035372 kernel: ACPI: PM: (supports S0 S3 S5)
May 13 04:46:32.035380 kernel: ACPI: Using IOAPIC for interrupt routing
May 13 04:46:32.035389 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 13 04:46:32.035398 kernel: PCI: Using E820 reservations for host bridge windows
May 13 04:46:32.035406 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 13 04:46:32.035415 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 04:46:32.035549 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 13 04:46:32.035652 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 13 04:46:32.035832 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 13 04:46:32.035848 kernel: acpiphp: Slot [3] registered
May 13 04:46:32.035857 kernel: acpiphp: Slot [4] registered
May 13 04:46:32.035866 kernel: acpiphp: Slot [5] registered
May 13 04:46:32.035874 kernel: acpiphp: Slot [6] registered
May 13 04:46:32.035883 kernel: acpiphp: Slot [7] registered
May 13 04:46:32.035892 kernel: acpiphp: Slot [8] registered
May 13 04:46:32.035904 kernel: acpiphp: Slot [9] registered
May 13 04:46:32.035913 kernel: acpiphp: Slot [10] registered
May 13 04:46:32.035921 kernel: acpiphp: Slot [11] registered
May 13 04:46:32.035930 kernel: acpiphp: Slot [12] registered
May 13 04:46:32.035938 kernel: acpiphp: Slot [13] registered
May 13 04:46:32.035946 kernel: acpiphp: Slot [14] registered
May 13 04:46:32.035955 kernel: acpiphp: Slot [15] registered
May 13 04:46:32.035963 kernel: acpiphp: Slot [16] registered
May 13 04:46:32.035972 kernel: acpiphp: Slot [17] registered
May 13 04:46:32.035982 kernel: acpiphp: Slot [18] registered
May 13 04:46:32.035990 kernel: acpiphp: Slot [19] registered
May 13 04:46:32.035999 kernel: acpiphp: Slot [20] registered
May 13 04:46:32.036007 kernel: acpiphp: Slot [21] registered
May 13 04:46:32.036016 kernel: acpiphp: Slot [22] registered
May 13 04:46:32.036024 kernel: acpiphp: Slot [23] registered
May 13 04:46:32.036033 kernel: acpiphp: Slot [24] registered
May 13 04:46:32.036041 kernel: acpiphp: Slot [25] registered
May 13 04:46:32.036050 kernel: acpiphp: Slot [26] registered
May 13 04:46:32.036058 kernel: acpiphp: Slot [27] registered
May 13 04:46:32.036068 kernel: acpiphp: Slot [28] registered
May 13 04:46:32.036077 kernel: acpiphp: Slot [29] registered
May 13 04:46:32.036085 kernel: acpiphp: Slot [30] registered
May 13 04:46:32.036094 kernel: acpiphp: Slot [31] registered
May 13 04:46:32.036102 kernel: PCI host bridge to bus 0000:00
May 13 04:46:32.036194 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 13 04:46:32.036276 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 13 04:46:32.036354 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 13 04:46:32.036438 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 13 04:46:32.036517 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
May 13 04:46:32.036595 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 04:46:32.036698 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
May 13 04:46:32.036832 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
May 13 04:46:32.036931 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
May 13 04:46:32.037028 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
May 13 04:46:32.037119 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
May 13 04:46:32.037208 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
May 13 04:46:32.037298 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
May 13 04:46:32.037388 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
May 13 04:46:32.037485 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
May 13 04:46:32.037576 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
May 13 04:46:32.037671 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
May 13 04:46:32.037805 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
May 13 04:46:32.037900 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
May 13 04:46:32.037990 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
May 13 04:46:32.038081 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
May 13 04:46:32.038202 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
May 13 04:46:32.038295 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 13 04:46:32.038403 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
May 13 04:46:32.038495 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
May 13 04:46:32.038586 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
May 13 04:46:32.038675 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
May 13 04:46:32.038818 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
May 13 04:46:32.038917 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
May 13 04:46:32.039012 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
May 13 04:46:32.039100 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
May 13 04:46:32.039189 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
May 13 04:46:32.039284 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
May 13 04:46:32.039373 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
May 13 04:46:32.039463 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
May 13 04:46:32.039560 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
May 13 04:46:32.039657 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
May 13 04:46:32.040803 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
May 13 04:46:32.040900 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
May 13 04:46:32.040914 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 13 04:46:32.040923 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 13 04:46:32.040932 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 13 04:46:32.040941 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 13 04:46:32.040949 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 13 04:46:32.040958 kernel: iommu: Default domain type: Translated
May 13 04:46:32.040971 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 13 04:46:32.040980 kernel: PCI: Using ACPI for IRQ routing
May 13 04:46:32.040988 kernel: PCI: pci_cache_line_size set to 64 bytes
May 13 04:46:32.040997 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 13 04:46:32.041006 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
May 13 04:46:32.041094 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
May 13 04:46:32.041183 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
May 13 04:46:32.041271 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 13 04:46:32.041288 kernel: vgaarb: loaded
May 13 04:46:32.041297 kernel: clocksource: Switched to clocksource kvm-clock
May 13 04:46:32.041306 kernel: VFS: Disk quotas dquot_6.6.0
May 13 04:46:32.041314 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 04:46:32.041323 kernel: pnp: PnP ACPI init
May 13 04:46:32.041414 kernel: pnp 00:03: [dma 2]
May 13 04:46:32.041428 kernel: pnp: PnP ACPI: found 5 devices
May 13 04:46:32.041437 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 13 04:46:32.041446 kernel: NET: Registered PF_INET protocol family
May 13 04:46:32.041458 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 04:46:32.041467 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 04:46:32.041476 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 04:46:32.041485 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 04:46:32.041493 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 04:46:32.041502 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 04:46:32.041511 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 04:46:32.041520 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 04:46:32.041529 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 04:46:32.041539 kernel: NET: Registered PF_XDP protocol family
May 13 04:46:32.041621 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 13 04:46:32.041700 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 13 04:46:32.041800 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 13 04:46:32.041878 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
May 13 04:46:32.041956 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
May 13 04:46:32.042046 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
May 13 04:46:32.042186 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 13 04:46:32.042205 kernel: PCI: CLS 0 bytes, default 64
May 13 04:46:32.042214 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 13 04:46:32.042223 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
May 13 04:46:32.042232 kernel: Initialise system trusted keyrings
May 13 04:46:32.042241 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 04:46:32.042249 kernel: Key type asymmetric registered
May 13 04:46:32.042258 kernel: Asymmetric key parser 'x509' registered
May 13 04:46:32.042267 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 13 04:46:32.042277 kernel: io scheduler mq-deadline registered
May 13 04:46:32.042286 kernel: io scheduler kyber registered
May 13 04:46:32.042295 kernel: io scheduler bfq registered
May 13 04:46:32.042304 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 13 04:46:32.042313 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
May 13 04:46:32.042322 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 13 04:46:32.042331 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 13 04:46:32.042340 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 13 04:46:32.042348 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 04:46:32.042359 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 13 04:46:32.042369 kernel: random: crng init done
May 13 04:46:32.042379 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 13 04:46:32.042389 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 13 04:46:32.042398 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 13 04:46:32.042497 kernel: rtc_cmos 00:04: RTC can wake from S4
May 13 04:46:32.042513 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 13 04:46:32.042598 kernel: rtc_cmos 00:04: registered as rtc0
May 13 04:46:32.042691 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T04:46:31 UTC (1747111591)
May 13 04:46:32.044854 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 13 04:46:32.044877 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 13 04:46:32.044887 kernel: NET: Registered PF_INET6 protocol family
May 13 04:46:32.044896 kernel: Segment Routing with IPv6
May 13 04:46:32.044905 kernel: In-situ OAM (IOAM) with IPv6
May 13 04:46:32.044914 kernel: NET: Registered PF_PACKET protocol family
May 13 04:46:32.044923 kernel: Key type dns_resolver registered
May 13 04:46:32.044931 kernel: IPI shorthand broadcast: enabled
May 13 04:46:32.044945 kernel: sched_clock: Marking stable (987006566, 170044702)->(1183960347, -26909079)
May 13 04:46:32.044953 kernel: registered taskstats version 1
May 13 04:46:32.044962 kernel: Loading compiled-in X.509 certificates
May 13 04:46:32.044971 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: b404fdaaed18d29adfca671c3bbb23eee96fb08f'
May 13 04:46:32.044980 kernel: Key type .fscrypt registered
May 13 04:46:32.044989 kernel: Key type fscrypt-provisioning registered
May 13 04:46:32.044997 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 04:46:32.045006 kernel: ima: Allocated hash algorithm: sha1
May 13 04:46:32.045015 kernel: ima: No architecture policies found
May 13 04:46:32.045025 kernel: clk: Disabling unused clocks
May 13 04:46:32.045034 kernel: Freeing unused kernel image (initmem) memory: 42864K
May 13 04:46:32.045043 kernel: Write protecting the kernel read-only data: 36864k
May 13 04:46:32.045052 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 13 04:46:32.045060 kernel: Run /init as init process
May 13 04:46:32.045069 kernel: with arguments:
May 13 04:46:32.045078 kernel: /init
May 13 04:46:32.045086 kernel: with environment:
May 13 04:46:32.045094 kernel: HOME=/
May 13 04:46:32.045104 kernel: TERM=linux
May 13 04:46:32.045113 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 04:46:32.045124 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 13 04:46:32.045136 systemd[1]: Detected virtualization kvm.
May 13 04:46:32.045146 systemd[1]: Detected architecture x86-64.
May 13 04:46:32.045156 systemd[1]: Running in initrd.
May 13 04:46:32.045165 systemd[1]: No hostname configured, using default hostname.
May 13 04:46:32.045176 systemd[1]: Hostname set to .
May 13 04:46:32.045186 systemd[1]: Initializing machine ID from VM UUID.
May 13 04:46:32.045195 systemd[1]: Queued start job for default target initrd.target.
May 13 04:46:32.045205 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 04:46:32.045214 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 04:46:32.045224 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 04:46:32.045234 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 04:46:32.045252 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 04:46:32.045264 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 04:46:32.045275 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 04:46:32.045285 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 04:46:32.045295 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 04:46:32.045304 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 04:46:32.045316 systemd[1]: Reached target paths.target - Path Units.
May 13 04:46:32.045326 systemd[1]: Reached target slices.target - Slice Units.
May 13 04:46:32.045335 systemd[1]: Reached target swap.target - Swaps.
May 13 04:46:32.045345 systemd[1]: Reached target timers.target - Timer Units.
May 13 04:46:32.045355 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 04:46:32.045364 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 04:46:32.045374 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 04:46:32.045384 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 13 04:46:32.045396 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 04:46:32.045405 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 04:46:32.045416 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 04:46:32.045425 systemd[1]: Reached target sockets.target - Socket Units.
May 13 04:46:32.045435 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 04:46:32.045444 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 04:46:32.045454 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 04:46:32.045464 systemd[1]: Starting systemd-fsck-usr.service...
May 13 04:46:32.045473 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 04:46:32.045485 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 04:46:32.045511 systemd-journald[184]: Collecting audit messages is disabled.
May 13 04:46:32.045535 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 04:46:32.045545 systemd-journald[184]: Journal started
May 13 04:46:32.045569 systemd-journald[184]: Runtime Journal (/run/log/journal/4d2b275868914017b6205bccdc352a8b) is 8.0M, max 78.3M, 70.3M free.
May 13 04:46:32.055016 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 04:46:32.056248 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 04:46:32.058735 systemd-modules-load[185]: Inserted module 'overlay'
May 13 04:46:32.060941 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 04:46:32.064416 systemd[1]: Finished systemd-fsck-usr.service.
May 13 04:46:32.075021 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 04:46:32.084906 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 04:46:32.132379 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 04:46:32.132403 kernel: Bridge firewalling registered
May 13 04:46:32.103297 systemd-modules-load[185]: Inserted module 'br_netfilter'
May 13 04:46:32.130828 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 04:46:32.131536 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 04:46:32.135074 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 04:46:32.141890 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 04:46:32.143880 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 04:46:32.145997 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 04:46:32.151227 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 04:46:32.158404 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 04:46:32.166871 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 04:46:32.169340 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 04:46:32.170064 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 04:46:32.184901 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 04:46:32.195641 systemd-resolved[213]: Positive Trust Anchors:
May 13 04:46:32.195662 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 04:46:32.195722 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 04:46:32.199656 systemd-resolved[213]: Defaulting to hostname 'linux'.
May 13 04:46:32.200939 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 04:46:32.204874 dracut-cmdline[221]: dracut-dracut-053
May 13 04:46:32.201515 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 04:46:32.206217 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592
May 13 04:46:32.290735 kernel: SCSI subsystem initialized
May 13 04:46:32.301773 kernel: Loading iSCSI transport class v2.0-870.
May 13 04:46:32.313766 kernel: iscsi: registered transport (tcp)
May 13 04:46:32.336320 kernel: iscsi: registered transport (qla4xxx)
May 13 04:46:32.336429 kernel: QLogic iSCSI HBA Driver
May 13 04:46:32.389791 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 04:46:32.395013 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 04:46:32.446248 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 04:46:32.446330 kernel: device-mapper: uevent: version 1.0.3
May 13 04:46:32.451776 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 13 04:46:32.497830 kernel: raid6: sse2x4 gen() 3861 MB/s
May 13 04:46:32.515823 kernel: raid6: sse2x2 gen() 11909 MB/s
May 13 04:46:32.534182 kernel: raid6: sse2x1 gen() 9577 MB/s
May 13 04:46:32.534247 kernel: raid6: using algorithm sse2x2 gen() 11909 MB/s
May 13 04:46:32.553383 kernel: raid6: .... xor() 8915 MB/s, rmw enabled
May 13 04:46:32.553458 kernel: raid6: using ssse3x2 recovery algorithm
May 13 04:46:32.579766 kernel: xor: measuring software checksum speed
May 13 04:46:32.582194 kernel: prefetch64-sse : 16722 MB/sec
May 13 04:46:32.582213 kernel: generic_sse : 16872 MB/sec
May 13 04:46:32.582224 kernel: xor: using function: generic_sse (16872 MB/sec)
May 13 04:46:32.768768 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 04:46:32.782514 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 04:46:32.788977 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 04:46:32.801514 systemd-udevd[403]: Using default interface naming scheme 'v255'.
May 13 04:46:32.805833 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 04:46:32.818019 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 04:46:32.836162 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
May 13 04:46:32.873352 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 04:46:32.879852 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 04:46:32.922354 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 04:46:32.932973 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 04:46:32.949789 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 04:46:32.952020 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 04:46:32.952856 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 04:46:32.954647 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 04:46:32.961894 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 04:46:32.977599 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 04:46:33.000654 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
May 13 04:46:33.016788 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
May 13 04:46:33.020097 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 04:46:33.020249 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 04:46:33.022535 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 04:46:33.023345 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 04:46:33.045797 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 04:46:33.045821 kernel: GPT:17805311 != 20971519
May 13 04:46:33.045834 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 04:46:33.045847 kernel: GPT:17805311 != 20971519
May 13 04:46:33.045858 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 04:46:33.045869 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 04:46:33.023486 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 04:46:33.026276 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 04:46:33.034977 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 04:46:33.054155 kernel: libata version 3.00 loaded.
May 13 04:46:33.063085 kernel: ata_piix 0000:00:01.1: version 2.13
May 13 04:46:33.066771 kernel: scsi host0: ata_piix
May 13 04:46:33.071326 kernel: scsi host1: ata_piix
May 13 04:46:33.071464 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
May 13 04:46:33.071479 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
May 13 04:46:33.090941 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (465)
May 13 04:46:33.092733 kernel: BTRFS: device fsid b9c18834-b687-45d3-9868-9ac29dc7ddd7 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (449)
May 13 04:46:33.112122 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 04:46:33.125754 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 04:46:33.132531 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 04:46:33.137288 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 04:46:33.137877 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 04:46:33.144280 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 04:46:33.151890 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 04:46:33.154393 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 04:46:33.171670 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 04:46:33.175729 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 04:46:33.176001 disk-uuid[505]: Primary Header is updated.
May 13 04:46:33.176001 disk-uuid[505]: Secondary Entries is updated.
May 13 04:46:33.176001 disk-uuid[505]: Secondary Header is updated.
May 13 04:46:34.196970 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 04:46:34.198942 disk-uuid[515]: The operation has completed successfully.
May 13 04:46:34.278614 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 04:46:34.279471 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 04:46:34.301832 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 04:46:34.309566 sh[528]: Success
May 13 04:46:34.330797 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
May 13 04:46:34.429538 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 04:46:34.449020 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 04:46:34.452523 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 04:46:34.470065 kernel: BTRFS info (device dm-0): first mount of filesystem b9c18834-b687-45d3-9868-9ac29dc7ddd7
May 13 04:46:34.470155 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 13 04:46:34.470187 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 04:46:34.472153 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 04:46:34.473729 kernel: BTRFS info (device dm-0): using free space tree
May 13 04:46:34.490207 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 04:46:34.492315 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 04:46:34.501041 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 04:46:34.504381 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 04:46:34.527734 kernel: BTRFS info (device vda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 04:46:34.527784 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 04:46:34.534121 kernel: BTRFS info (device vda6): using free space tree
May 13 04:46:34.543746 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 04:46:34.557294 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 13 04:46:34.559804 kernel: BTRFS info (device vda6): last unmount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 04:46:34.571645 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 04:46:34.582113 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 04:46:34.627885 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 04:46:34.635872 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 04:46:34.655336 systemd-networkd[710]: lo: Link UP
May 13 04:46:34.655345 systemd-networkd[710]: lo: Gained carrier
May 13 04:46:34.656414 systemd-networkd[710]: Enumeration completed
May 13 04:46:34.657254 systemd-networkd[710]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 04:46:34.657257 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 04:46:34.658236 systemd-networkd[710]: eth0: Link UP
May 13 04:46:34.658239 systemd-networkd[710]: eth0: Gained carrier
May 13 04:46:34.658246 systemd-networkd[710]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 04:46:34.665380 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 04:46:34.668695 systemd[1]: Reached target network.target - Network.
May 13 04:46:34.674760 systemd-networkd[710]: eth0: DHCPv4 address 172.24.4.108/24, gateway 172.24.4.1 acquired from 172.24.4.1
May 13 04:46:34.726804 ignition[641]: Ignition 2.19.0
May 13 04:46:34.727579 ignition[641]: Stage: fetch-offline
May 13 04:46:34.728098 ignition[641]: no configs at "/usr/lib/ignition/base.d"
May 13 04:46:34.728108 ignition[641]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 04:46:34.730102 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 04:46:34.728207 ignition[641]: parsed url from cmdline: ""
May 13 04:46:34.728211 ignition[641]: no config URL provided
May 13 04:46:34.731657 systemd-resolved[213]: Detected conflict on linux IN A 172.24.4.108
May 13 04:46:34.728217 ignition[641]: reading system config file "/usr/lib/ignition/user.ign"
May 13 04:46:34.731665 systemd-resolved[213]: Hostname conflict, changing published hostname from 'linux' to 'linux8'.
May 13 04:46:34.728225 ignition[641]: no config at "/usr/lib/ignition/user.ign"
May 13 04:46:34.728230 ignition[641]: failed to fetch config: resource requires networking
May 13 04:46:34.728403 ignition[641]: Ignition finished successfully
May 13 04:46:34.735859 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 13 04:46:34.748769 ignition[722]: Ignition 2.19.0
May 13 04:46:34.748782 ignition[722]: Stage: fetch
May 13 04:46:34.748960 ignition[722]: no configs at "/usr/lib/ignition/base.d"
May 13 04:46:34.748973 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 04:46:34.749065 ignition[722]: parsed url from cmdline: ""
May 13 04:46:34.749068 ignition[722]: no config URL provided
May 13 04:46:34.749073 ignition[722]: reading system config file "/usr/lib/ignition/user.ign"
May 13 04:46:34.749081 ignition[722]: no config at "/usr/lib/ignition/user.ign"
May 13 04:46:34.749206 ignition[722]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
May 13 04:46:34.749372 ignition[722]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
May 13 04:46:34.749411 ignition[722]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
May 13 04:46:35.026542 ignition[722]: GET result: OK
May 13 04:46:35.026942 ignition[722]: parsing config with SHA512: 8fc9d3dd73945638b39c69c3e8f355a480a6e74378357d69de3e4e8acac3821c6143b1cebcf0979011dcf512e383312813bff359d405df5f306e358bd69b2890
May 13 04:46:35.038585 unknown[722]: fetched base config from "system"
May 13 04:46:35.040681 ignition[722]: fetch: fetch complete
May 13 04:46:35.038627 unknown[722]: fetched base config from "system"
May 13 04:46:35.040695 ignition[722]: fetch: fetch passed
May 13 04:46:35.038646 unknown[722]: fetched user config from "openstack"
May 13 04:46:35.043030 ignition[722]: Ignition finished successfully
May 13 04:46:35.046413 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 13 04:46:35.057035 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 04:46:35.096953 ignition[728]: Ignition 2.19.0
May 13 04:46:35.096970 ignition[728]: Stage: kargs
May 13 04:46:35.097365 ignition[728]: no configs at "/usr/lib/ignition/base.d"
May 13 04:46:35.097391 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 04:46:35.099891 ignition[728]: kargs: kargs passed
May 13 04:46:35.102122 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 04:46:35.099990 ignition[728]: Ignition finished successfully
May 13 04:46:35.117498 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 04:46:35.147117 ignition[734]: Ignition 2.19.0
May 13 04:46:35.147143 ignition[734]: Stage: disks
May 13 04:46:35.147557 ignition[734]: no configs at "/usr/lib/ignition/base.d"
May 13 04:46:35.147583 ignition[734]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 04:46:35.150049 ignition[734]: disks: disks passed
May 13 04:46:35.152609 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 04:46:35.150174 ignition[734]: Ignition finished successfully
May 13 04:46:35.155636 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 04:46:35.157496 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 04:46:35.160310 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 04:46:35.162839 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 04:46:35.165819 systemd[1]: Reached target basic.target - Basic System.
May 13 04:46:35.176100 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 04:46:35.209346 systemd-fsck[743]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
May 13 04:46:35.219794 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 04:46:35.226896 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 04:46:35.398757 kernel: EXT4-fs (vda9): mounted filesystem 422ad498-4f61-405b-9d71-25f19459d196 r/w with ordered data mode. Quota mode: none.
May 13 04:46:35.399174 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 04:46:35.400704 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 04:46:35.407780 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 04:46:35.409802 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 04:46:35.411959 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 04:46:35.413969 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
May 13 04:46:35.416768 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 04:46:35.416805 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 04:46:35.426094 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 04:46:35.429847 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 04:46:35.433452 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (751)
May 13 04:46:35.444240 kernel: BTRFS info (device vda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 04:46:35.444320 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 04:46:35.444350 kernel: BTRFS info (device vda6): using free space tree
May 13 04:46:35.462780 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 04:46:35.470613 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 04:46:35.563144 initrd-setup-root[778]: cut: /sysroot/etc/passwd: No such file or directory
May 13 04:46:35.572904 initrd-setup-root[786]: cut: /sysroot/etc/group: No such file or directory
May 13 04:46:35.578322 initrd-setup-root[793]: cut: /sysroot/etc/shadow: No such file or directory
May 13 04:46:35.586293 initrd-setup-root[800]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 04:46:35.703232 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 04:46:35.708889 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 04:46:35.713485 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 04:46:35.718609 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 04:46:35.720222 kernel: BTRFS info (device vda6): last unmount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 04:46:35.744083 ignition[867]: INFO : Ignition 2.19.0
May 13 04:46:35.744083 ignition[867]: INFO : Stage: mount
May 13 04:46:35.744083 ignition[867]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 04:46:35.744083 ignition[867]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 04:46:35.749251 ignition[867]: INFO : mount: mount passed
May 13 04:46:35.749251 ignition[867]: INFO : Ignition finished successfully
May 13 04:46:35.749563 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 04:46:35.758048 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 04:46:36.265929 systemd-networkd[710]: eth0: Gained IPv6LL
May 13 04:46:42.654819 coreos-metadata[753]: May 13 04:46:42.654 WARN failed to locate config-drive, using the metadata service API instead
May 13 04:46:42.700080 coreos-metadata[753]: May 13 04:46:42.699 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
May 13 04:46:42.718316 coreos-metadata[753]: May 13 04:46:42.718 INFO Fetch successful
May 13 04:46:42.718316 coreos-metadata[753]: May 13 04:46:42.718 INFO wrote hostname ci-4081-3-3-n-d261562a0f.novalocal to /sysroot/etc/hostname
May 13 04:46:42.722413 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
May 13 04:46:42.722745 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
May 13 04:46:42.737891 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 04:46:42.760136 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 04:46:42.781802 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (884)
May 13 04:46:42.790667 kernel: BTRFS info (device vda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 04:46:42.790808 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 04:46:42.795375 kernel: BTRFS info (device vda6): using free space tree
May 13 04:46:42.807823 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 04:46:42.815607 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 04:46:42.853873 ignition[902]: INFO : Ignition 2.19.0 May 13 04:46:42.853873 ignition[902]: INFO : Stage: files May 13 04:46:42.856929 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 04:46:42.856929 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 04:46:42.856929 ignition[902]: DEBUG : files: compiled without relabeling support, skipping May 13 04:46:42.859248 ignition[902]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 04:46:42.859248 ignition[902]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 04:46:42.863859 ignition[902]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 04:46:42.864846 ignition[902]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 04:46:42.865955 unknown[902]: wrote ssh authorized keys file for user: core May 13 04:46:42.866799 ignition[902]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 04:46:42.867979 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 13 04:46:42.869157 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 13 04:46:42.869157 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 04:46:42.869157 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 13 04:46:42.934349 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 13 04:46:43.386389 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 04:46:43.386389 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 13 04:46:43.391271 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 13 04:46:43.391271 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 04:46:43.391271 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 04:46:43.391271 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 04:46:43.391271 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 04:46:43.391271 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 04:46:43.391271 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 04:46:43.391271 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 04:46:43.391271 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 04:46:43.391271 ignition[902]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 04:46:43.391271 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 04:46:43.391271 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 04:46:43.391271 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 13 04:46:44.069522 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 13 04:46:46.540911 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 04:46:46.540911 ignition[902]: INFO : files: op(c): [started] processing unit "containerd.service" May 13 04:46:46.545603 ignition[902]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 13 04:46:46.545603 ignition[902]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 13 04:46:46.545603 ignition[902]: INFO : files: op(c): [finished] processing unit "containerd.service" May 13 04:46:46.545603 ignition[902]: INFO : files: op(e): [started] processing unit "prepare-helm.service" May 13 04:46:46.545603 ignition[902]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 04:46:46.545603 ignition[902]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 04:46:46.545603 ignition[902]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" May 13 04:46:46.545603 ignition[902]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" May 13 04:46:46.545603 ignition[902]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" May 13 04:46:46.545603 ignition[902]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 04:46:46.545603 ignition[902]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 04:46:46.545603 ignition[902]: INFO : files: files passed May 13 04:46:46.545603 ignition[902]: INFO : Ignition finished successfully May 13 04:46:46.545703 systemd[1]: Finished ignition-files.service - Ignition (files). May 13 04:46:46.559028 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 13 04:46:46.564787 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 13 04:46:46.570940 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 04:46:46.571043 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
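The operations logged by Ignition's files stage map directly onto the sections of an Ignition v3 config: passwd.users drives the ensureUsers ops, storage.files and storage.links drive the createFiles ops (including the two GET downloads), and systemd.units drives the unit and drop-in writes plus the preset change. The journal records only paths and URLs, so the following is a hedged reconstruction in that shape; the spec version, the SSH key, and both unit bodies are placeholders, not the config this host actually consumed.

    # Sketch (NOT the actual config this host booted with) of an Ignition
    # v3-style config that would produce the operations logged above.
    # Spec version, key material, and unit bodies are placeholders.
    import json

    config = {
        "ignition": {"version": "3.4.0"},  # assumed spec level
        "passwd": {"users": [{
            "name": "core",
            "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"],  # placeholder
        }]},
        "storage": {
            "files": [
                {"path": "/etc/flatcar-cgroupv1", "mode": 420},
                {"path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                 "contents": {"source":
                     "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
                 "contents": {"source":
                     "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"},
            ],
        },
        "systemd": {"units": [
            {"name": "containerd.service", "dropins": [
                {"name": "10-use-cgroupfs.conf",
                 "contents": "[Service]\n# body not shown in the journal\n"}]},
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "# unit body not shown in the journal\n"},
        ]},
    }

    print(json.dumps(config, indent=2))
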
May 13 04:46:46.578016 initrd-setup-root-after-ignition[930]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 04:46:46.579459 initrd-setup-root-after-ignition[930]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 13 04:46:46.584213 initrd-setup-root-after-ignition[934]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 04:46:46.583266 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 04:46:46.585678 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 13 04:46:46.595021 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 13 04:46:46.625928 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 04:46:46.626050 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 13 04:46:46.626903 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 13 04:46:46.628544 systemd[1]: Reached target initrd.target - Initrd Default Target. May 13 04:46:46.630863 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 13 04:46:46.642822 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 13 04:46:46.656426 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 04:46:46.662976 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 13 04:46:46.674301 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 04:46:46.674393 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 13 04:46:46.677434 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 13 04:46:46.678482 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 04:46:46.680567 systemd[1]: Stopped target timers.target - Timer Units. May 13 04:46:46.682598 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 04:46:46.682650 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 04:46:46.684963 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 13 04:46:46.685920 systemd[1]: Stopped target basic.target - Basic System. May 13 04:46:46.688001 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 13 04:46:46.689737 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 13 04:46:46.691459 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 13 04:46:46.693526 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 13 04:46:46.695644 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 13 04:46:46.697684 systemd[1]: Stopped target sysinit.target - System Initialization. May 13 04:46:46.699735 systemd[1]: Stopped target local-fs.target - Local File Systems. May 13 04:46:46.701805 systemd[1]: Stopped target swap.target - Swaps. May 13 04:46:46.703819 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 04:46:46.703866 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 13 04:46:46.706105 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 13 04:46:46.707277 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
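The two grep failures are harmless first-boot noise: initrd-setup-root-after-ignition consults /etc/flatcar/enabled-sysext.conf (admin overrides) and /usr/share/flatcar/enabled-sysext.conf (image defaults) to decide which system extensions to enable, and neither file exists yet. A sketch of that lookup under assumed semantics (one extension name per line, '#' comments, missing files simply skipped):

    # Sketch of the enabled-sysext.conf lookup implied by the grep errors
    # above. File semantics are assumed: one extension name per line,
    # '#' comments ignored, missing files skipped (hence the noise).
    from pathlib import Path

    def enabled_sysexts(root="/sysroot"):
        names = []
        for conf in (f"{root}/etc/flatcar/enabled-sysext.conf",
                     f"{root}/usr/share/flatcar/enabled-sysext.conf"):
            p = Path(conf)
            if not p.is_file():
                continue  # first boot: neither file exists yet
            for line in p.read_text().splitlines():
                line = line.strip()
                if line and not line.startswith("#"):
                    names.append(line)
        return names

    print(enabled_sysexts())
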
May 13 04:46:46.709041 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 13 04:46:46.709894 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 04:46:46.711080 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 04:46:46.711126 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 04:46:46.714876 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 04:46:46.714953 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 04:46:46.716215 systemd[1]: ignition-files.service: Deactivated successfully. May 13 04:46:46.716284 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 04:46:46.726869 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 13 04:46:46.728428 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 04:46:46.728519 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 13 04:46:46.731780 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 04:46:46.732901 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 04:46:46.732991 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 04:46:46.735345 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 04:46:46.735883 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 13 04:46:46.745243 ignition[955]: INFO : Ignition 2.19.0 May 13 04:46:46.747366 ignition[955]: INFO : Stage: umount May 13 04:46:46.747366 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 04:46:46.747366 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 04:46:46.750543 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 04:46:46.753803 ignition[955]: INFO : umount: umount passed May 13 04:46:46.753803 ignition[955]: INFO : Ignition finished successfully May 13 04:46:46.750636 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 04:46:46.751519 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 04:46:46.751591 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 13 04:46:46.752122 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 04:46:46.752161 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 04:46:46.752653 systemd[1]: ignition-fetch.service: Deactivated successfully. May 13 04:46:46.752689 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 13 04:46:46.754889 systemd[1]: Stopped target network.target - Network. May 13 04:46:46.755584 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 04:46:46.755629 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 04:46:46.756225 systemd[1]: Stopped target paths.target - Path Units. May 13 04:46:46.756664 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 04:46:46.761800 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 04:46:46.762571 systemd[1]: Stopped target slices.target - Slice Units. May 13 04:46:46.763067 systemd[1]: Stopped target sockets.target - Socket Units. May 13 04:46:46.763571 systemd[1]: iscsid.socket: Deactivated successfully. 
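The teardown above walks back through Ignition's earlier work. For reference, the stage order behind the ignition-*.service units being stopped here runs fetch-offline through umount (ignition-setup is environment preparation rather than a stage):

    # Ignition's stage order, matching the ignition-*.service units being
    # stopped above. Each stage runs once in the initrd; umount is last.
    IGNITION_STAGES = [
        "fetch-offline",  # ignition-fetch-offline.service
        "fetch",          # ignition-fetch.service
        "kargs",          # ignition-kargs.service
        "disks",          # ignition-disks.service
        "mount",          # ignition-mount.service (sysroot mounts)
        "files",          # the stage logged earlier
        "umount",         # the stage running here
    ]

    for i, stage in enumerate(IGNITION_STAGES, 1):
        print(f"{i}. {stage}")
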
May 13 04:46:46.763602 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 04:46:46.764805 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 04:46:46.764837 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 04:46:46.765418 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 04:46:46.765458 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 04:46:46.766006 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 13 04:46:46.766044 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 13 04:46:46.766694 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 13 04:46:46.770570 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 13 04:46:46.772948 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 04:46:46.773490 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 04:46:46.773587 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 04:46:46.774805 systemd-networkd[710]: eth0: DHCPv6 lease lost May 13 04:46:46.777931 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 04:46:46.778040 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 13 04:46:46.779082 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 04:46:46.779160 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 04:46:46.780770 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 04:46:46.781054 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 04:46:46.782039 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 04:46:46.782083 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 13 04:46:46.788848 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 04:46:46.790024 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 04:46:46.790074 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 04:46:46.791571 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 04:46:46.791616 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 04:46:46.792812 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 04:46:46.792850 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 13 04:46:46.794068 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 04:46:46.794125 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 04:46:46.795332 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 04:46:46.800992 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 04:46:46.801131 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 04:46:46.803129 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 04:46:46.803189 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 13 04:46:46.805079 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 04:46:46.805108 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 13 04:46:46.806222 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
May 13 04:46:46.806266 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 13 04:46:46.808868 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 04:46:46.808912 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 13 04:46:46.809973 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 04:46:46.810016 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 04:46:46.816880 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 04:46:46.819605 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 04:46:46.819662 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 04:46:46.820860 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 04:46:46.820901 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 04:46:46.823409 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 04:46:46.823511 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 13 04:46:46.824890 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 04:46:46.824957 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 04:46:46.826535 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 04:46:46.832878 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 04:46:46.847862 systemd[1]: Switching root. May 13 04:46:46.887290 systemd-journald[184]: Journal stopped May 13 04:46:48.776574 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). May 13 04:46:48.776635 kernel: SELinux: policy capability network_peer_controls=1 May 13 04:46:48.776669 kernel: SELinux: policy capability open_perms=1 May 13 04:46:48.776700 kernel: SELinux: policy capability extended_socket_class=1 May 13 04:46:48.778080 kernel: SELinux: policy capability always_check_network=0 May 13 04:46:48.778151 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 04:46:48.778164 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 04:46:48.778176 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 04:46:48.778189 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 04:46:48.778201 kernel: audit: type=1403 audit(1747111607.756:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 04:46:48.778214 systemd[1]: Successfully loaded SELinux policy in 78.139ms. May 13 04:46:48.778237 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.841ms. May 13 04:46:48.778255 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 13 04:46:48.778269 systemd[1]: Detected virtualization kvm. May 13 04:46:48.778282 systemd[1]: Detected architecture x86-64. May 13 04:46:48.778294 systemd[1]: Detected first boot. May 13 04:46:48.778307 systemd[1]: Hostname set to . May 13 04:46:48.778320 systemd[1]: Initializing machine ID from VM UUID. May 13 04:46:48.778333 zram_generator::config[1015]: No configuration found. May 13 04:46:48.778349 systemd[1]: Populated /etc with preset unit settings. 
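The systemd 255 banner lists compile-time features: +NAME means built in, -NAME compiled out. Note -ACL, which is why systemd-tmpfiles later reports "ACLs are not supported, ignoring". A small parser for that banner, using the exact string logged above (the trailing default-hierarchy=unified field is omitted):

    # Parse the compile-time feature flags from the systemd version banner
    # above: '+NAME' means built in, '-NAME' means compiled out.
    banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
              "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
              "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
              "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
              "-XKBCOMMON +UTMP -SYSVINIT")

    enabled  = {t[1:] for t in banner.split() if t.startswith("+")}
    disabled = {t[1:] for t in banner.split() if t.startswith("-")}

    assert "ACL" in disabled     # hence tmpfiles' "ACLs are not supported"
    assert "SELINUX" in enabled  # hence the SELinux policy load above
    print(f"{len(enabled)} features enabled, {len(disabled)} compiled out")
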
May 13 04:46:48.778362 systemd[1]: Queued start job for default target multi-user.target. May 13 04:46:48.778374 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 13 04:46:48.778388 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 04:46:48.778402 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 04:46:48.778415 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 04:46:48.778428 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 13 04:46:48.778442 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 13 04:46:48.778457 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 13 04:46:48.778473 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 04:46:48.778486 systemd[1]: Created slice user.slice - User and Session Slice. May 13 04:46:48.778499 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 04:46:48.778515 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 04:46:48.778528 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 13 04:46:48.778541 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 04:46:48.778554 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 13 04:46:48.778567 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 04:46:48.778582 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 13 04:46:48.778595 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 04:46:48.778608 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 13 04:46:48.778621 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 04:46:48.778634 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 04:46:48.778649 systemd[1]: Reached target slices.target - Slice Units. May 13 04:46:48.778663 systemd[1]: Reached target swap.target - Swaps. May 13 04:46:48.778676 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 04:46:48.778689 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 13 04:46:48.778703 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 13 04:46:48.780531 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 13 04:46:48.780548 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 04:46:48.780560 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 04:46:48.780572 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 04:46:48.780584 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 13 04:46:48.780596 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 13 04:46:48.780613 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 04:46:48.780625 systemd[1]: Mounting media.mount - External Media Directory... 
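Slice names such as system-addon\x2dconfig.slice illustrate systemd's unit-name escaping: the slice for /system/addon-config encodes '/' as '-' and a literal '-' as \x2d. A simplified version of the rule (real systemd-escape also hex-escapes other bytes outside [a-zA-Z0-9:_.]):

    # Simplified systemd unit-name escaping, enough to reproduce the slice
    # names above: '/' becomes '-', a literal '-' becomes '\x2d'.
    def escape_slice(path: str) -> str:
        parts = path.strip("/").split("/")
        escaped = [p.replace("-", "\\x2d") for p in parts]
        return "-".join(escaped) + ".slice"

    assert escape_slice("/system/addon-config") == "system-addon\\x2dconfig.slice"
    assert escape_slice("/system/serial-getty") == "system-serial\\x2dgetty.slice"
    print(escape_slice("/system/systemd-fsck"))  # system-systemd\x2dfsck.slice
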
May 13 04:46:48.780637 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 04:46:48.780650 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 04:46:48.780662 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 04:46:48.780673 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 13 04:46:48.780685 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 13 04:46:48.780698 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 04:46:48.780743 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 04:46:48.780759 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 04:46:48.780771 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 04:46:48.780782 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 04:46:48.780794 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 04:46:48.780806 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 13 04:46:48.780818 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 04:46:48.780830 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 04:46:48.780842 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 13 04:46:48.780857 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) May 13 04:46:48.780868 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 04:46:48.780880 kernel: fuse: init (API version 7.39) May 13 04:46:48.780892 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 04:46:48.780904 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 04:46:48.780916 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 04:46:48.780928 kernel: loop: module loaded May 13 04:46:48.780939 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 04:46:48.780952 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 04:46:48.780966 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 13 04:46:48.780977 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 04:46:48.780989 systemd[1]: Mounted media.mount - External Media Directory. May 13 04:46:48.781001 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 04:46:48.781031 systemd-journald[1119]: Collecting audit messages is disabled. May 13 04:46:48.781055 kernel: ACPI: bus type drm_connector registered May 13 04:46:48.781067 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 13 04:46:48.781082 systemd-journald[1119]: Journal started May 13 04:46:48.781106 systemd-journald[1119]: Runtime Journal (/run/log/journal/4d2b275868914017b6205bccdc352a8b) is 8.0M, max 78.3M, 70.3M free. 
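Units like modprobe@dm_mod.service are instances of the modprobe@.service template: the text between '@' and the type suffix is the instance name, exposed as %i inside the unit. How those names decompose:

    # Decompose systemd template-unit names like the modprobe@ instances
    # started above: 'modprobe@dm_mod.service' = template 'modprobe@.service'
    # plus instance 'dm_mod' (exposed as %i inside the unit).
    def split_template(unit: str):
        name, _, suffix = unit.rpartition(".")
        template, _, instance = name.partition("@")
        return f"{template}@.{suffix}", instance

    for unit in ("modprobe@configfs.service", "modprobe@dm_mod.service",
                 "modprobe@drm.service", "modprobe@efi_pstore.service",
                 "modprobe@fuse.service", "modprobe@loop.service"):
        tpl, inst = split_template(unit)
        print(f"{unit}: template={tpl} instance={inst}")
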
May 13 04:46:48.784756 systemd[1]: Started systemd-journald.service - Journal Service. May 13 04:46:48.786188 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 04:46:48.787015 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 04:46:48.787862 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 04:46:48.788589 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 04:46:48.789099 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 13 04:46:48.789854 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 04:46:48.789989 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 04:46:48.791029 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 04:46:48.791239 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 04:46:48.792303 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 04:46:48.792506 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 04:46:48.793356 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 04:46:48.793491 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 04:46:48.794314 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 04:46:48.794560 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 04:46:48.795548 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 04:46:48.796370 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 04:46:48.797369 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 13 04:46:48.806619 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 04:46:48.811825 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 04:46:48.814782 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 13 04:46:48.815780 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 04:46:48.823901 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 04:46:48.826819 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 04:46:48.831053 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 04:46:48.839870 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 04:46:48.840546 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 04:46:48.842124 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 04:46:48.853836 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 04:46:48.855756 systemd-journald[1119]: Time spent on flushing to /var/log/journal/4d2b275868914017b6205bccdc352a8b is 40.817ms for 930 entries. May 13 04:46:48.855756 systemd-journald[1119]: System Journal (/var/log/journal/4d2b275868914017b6205bccdc352a8b) is 8.0M, max 584.8M, 576.8M free. May 13 04:46:48.915032 systemd-journald[1119]: Received client request to flush runtime journal. 
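The journald figures above are internally consistent: "free" is simply the configured maximum minus current use, and the flush works out to roughly 44 microseconds per entry. Checking the arithmetic:

    # Check the journald accounting reported above.
    runtime_used, runtime_max = 8.0, 78.3    # MiB, Runtime Journal (/run)
    system_used,  system_max  = 8.0, 584.8   # MiB, System Journal (/var)

    print(f"runtime free: {runtime_max - runtime_used:.1f} MiB")  # 70.3, as logged
    print(f"system free:  {system_max - system_used:.1f} MiB")    # 576.8, as logged

    flush_ms, entries = 40.817, 930
    print(f"flush cost: {flush_ms / entries * 1000:.1f} us/entry")  # ~43.9
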
May 13 04:46:48.860022 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 13 04:46:48.860681 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 13 04:46:48.863081 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 13 04:46:48.866160 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 04:46:48.881231 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 04:46:48.886880 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 13 04:46:48.908601 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 04:46:48.911845 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 13 04:46:48.918259 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 13 04:46:48.923851 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. May 13 04:46:48.923868 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. May 13 04:46:48.928987 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 04:46:48.935923 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 13 04:46:48.967081 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 04:46:48.973951 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 04:46:48.989634 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. May 13 04:46:48.989961 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. May 13 04:46:48.996102 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 04:46:49.513031 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 04:46:49.522034 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 04:46:49.560043 systemd-udevd[1199]: Using default interface naming scheme 'v255'. May 13 04:46:49.588406 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 04:46:49.607999 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 04:46:49.639797 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1214) May 13 04:46:49.698030 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 04:46:49.728495 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. May 13 04:46:49.736760 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 13 04:46:49.750722 kernel: ACPI: button: Power Button [PWRF] May 13 04:46:49.775750 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 13 04:46:49.782678 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 04:46:49.787764 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 May 13 04:46:49.817743 kernel: mousedev: PS/2 mouse device common for all mice May 13 04:46:49.834917 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 13 04:46:49.837745 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 May 13 04:46:49.840754 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console May 13 04:46:49.844778 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 04:46:49.845911 kernel: Console: switching to colour dummy device 80x25 May 13 04:46:49.846342 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 13 04:46:49.846357 kernel: [drm] features: -context_init May 13 04:46:49.849784 kernel: [drm] number of scanouts: 1 May 13 04:46:49.849819 kernel: [drm] number of cap sets: 0 May 13 04:46:49.854728 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 May 13 04:46:49.858822 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device May 13 04:46:49.858865 kernel: Console: switching to colour frame buffer device 160x50 May 13 04:46:49.878744 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device May 13 04:46:49.886849 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 13 04:46:49.889537 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 04:46:49.889752 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 04:46:49.895847 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 13 04:46:49.897509 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 04:46:49.901736 systemd-networkd[1217]: lo: Link UP May 13 04:46:49.901740 systemd-networkd[1217]: lo: Gained carrier May 13 04:46:49.903357 systemd-networkd[1217]: Enumeration completed May 13 04:46:49.903454 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 04:46:49.905332 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 04:46:49.906055 systemd-networkd[1217]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 04:46:49.906059 systemd-networkd[1217]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 04:46:49.907232 systemd-networkd[1217]: eth0: Link UP May 13 04:46:49.907242 systemd-networkd[1217]: eth0: Gained carrier May 13 04:46:49.907255 systemd-networkd[1217]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 04:46:49.921868 systemd-networkd[1217]: eth0: DHCPv4 address 172.24.4.108/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 13 04:46:49.923116 lvm[1242]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 04:46:49.954325 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 13 04:46:49.955551 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 04:46:49.958989 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 13 04:46:49.965737 lvm[1250]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 04:46:49.974463 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 04:46:49.988945 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 13 04:46:49.989110 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
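eth0 matched the catch-all zz-default.network and took a DHCPv4 lease of 172.24.4.108/24 with gateway 172.24.4.1. A quick sanity check, using only values from the log, that the offered gateway is on-link for that prefix:

    # Sanity-check the DHCPv4 lease logged above: the gateway must be
    # reachable on-link, i.e. inside the interface's /24.
    import ipaddress

    iface = ipaddress.ip_interface("172.24.4.108/24")
    gateway = ipaddress.ip_address("172.24.4.1")

    assert gateway in iface.network         # on-link, as DHCP requires
    print(iface.network)                    # 172.24.4.0/24
    print(iface.network.broadcast_address)  # 172.24.4.255
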
May 13 04:46:49.989201 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 04:46:49.989222 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 04:46:49.989293 systemd[1]: Reached target machines.target - Containers. May 13 04:46:49.991275 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 13 04:46:49.996826 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 04:46:49.998700 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 04:46:49.999752 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 04:46:50.001123 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 13 04:46:50.005853 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 13 04:46:50.023030 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 04:46:50.027145 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 04:46:50.038756 kernel: loop0: detected capacity change from 0 to 8 May 13 04:46:50.060299 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 04:46:50.062940 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 04:46:50.077468 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 04:46:50.078114 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 13 04:46:50.095747 kernel: loop1: detected capacity change from 0 to 142488 May 13 04:46:50.184210 kernel: loop2: detected capacity change from 0 to 210664 May 13 04:46:50.237564 kernel: loop3: detected capacity change from 0 to 140768 May 13 04:46:50.295017 kernel: loop4: detected capacity change from 0 to 8 May 13 04:46:50.308927 kernel: loop5: detected capacity change from 0 to 142488 May 13 04:46:50.389109 kernel: loop6: detected capacity change from 0 to 210664 May 13 04:46:50.429932 kernel: loop7: detected capacity change from 0 to 140768 May 13 04:46:50.474435 (sd-merge)[1275]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. May 13 04:46:50.475434 (sd-merge)[1275]: Merged extensions into '/usr'. May 13 04:46:50.486785 systemd[1]: Reloading requested from client PID 1261 ('systemd-sysext') (unit systemd-sysext.service)... May 13 04:46:50.486819 systemd[1]: Reloading... May 13 04:46:50.559938 zram_generator::config[1302]: No configuration found. May 13 04:46:50.728740 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 04:46:50.800336 systemd[1]: Reloading finished in 312 ms. May 13 04:46:50.816192 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 04:46:50.827830 systemd[1]: Starting ensure-sysext.service... May 13 04:46:50.834630 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
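sd-merge found four extension images and overlaid them onto /usr. For an image named NAME to be merged, systemd-sysext requires an extension-release file at a fixed path whose ID and SYSEXT_LEVEL (or VERSION_ID) match the host's os-release; the field values below are illustrative assumptions, not read from this host:

    # The naming contract systemd-sysext enforces for the images merged
    # above: extension NAME must ship
    # usr/lib/extension-release.d/extension-release.NAME, and its fields
    # must match the host os-release. Field values here are assumptions.
    extensions = ["containerd-flatcar", "docker-flatcar",
                  "kubernetes", "oem-openstack"]

    for name in extensions:
        print(f"usr/lib/extension-release.d/extension-release.{name}")

    host_os_release = {"ID": "flatcar", "SYSEXT_LEVEL": "1.0"}  # assumption
    print("must match host:", host_os_release)
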
May 13 04:46:50.843072 systemd[1]: Reloading requested from client PID 1364 ('systemctl') (unit ensure-sysext.service)... May 13 04:46:50.843092 systemd[1]: Reloading... May 13 04:46:50.883122 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 04:46:50.883453 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 04:46:50.884307 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 04:46:50.884613 systemd-tmpfiles[1365]: ACLs are not supported, ignoring. May 13 04:46:50.884681 systemd-tmpfiles[1365]: ACLs are not supported, ignoring. May 13 04:46:50.890326 systemd-tmpfiles[1365]: Detected autofs mount point /boot during canonicalization of boot. May 13 04:46:50.890337 systemd-tmpfiles[1365]: Skipping /boot May 13 04:46:50.898690 ldconfig[1257]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 04:46:50.908982 systemd-tmpfiles[1365]: Detected autofs mount point /boot during canonicalization of boot. May 13 04:46:50.908994 systemd-tmpfiles[1365]: Skipping /boot May 13 04:46:50.917740 zram_generator::config[1396]: No configuration found. May 13 04:46:50.985905 systemd-networkd[1217]: eth0: Gained IPv6LL May 13 04:46:51.064818 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 04:46:51.137214 systemd[1]: Reloading finished in 293 ms. May 13 04:46:51.150958 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 04:46:51.153469 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 13 04:46:51.165264 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 04:46:51.190198 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 13 04:46:51.199020 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 04:46:51.213224 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 04:46:51.226971 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 04:46:51.237053 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 04:46:51.248579 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 04:46:51.249298 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 04:46:51.255936 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 04:46:51.264829 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 04:46:51.279649 augenrules[1486]: No rules May 13 04:46:51.280019 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 04:46:51.281597 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 04:46:51.283837 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 13 04:46:51.286253 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 13 04:46:51.290890 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 04:46:51.293498 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 04:46:51.293766 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 04:46:51.296378 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 04:46:51.296599 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 04:46:51.298401 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 04:46:51.298875 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 04:46:51.312412 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 04:46:51.318978 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 04:46:51.319547 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 04:46:51.326195 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 04:46:51.333133 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 04:46:51.344991 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 04:46:51.345768 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 04:46:51.350964 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 04:46:51.355390 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 04:46:51.359410 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 04:46:51.359602 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 04:46:51.368936 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 04:46:51.369347 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 04:46:51.370305 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 04:46:51.370455 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 04:46:51.383214 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 04:46:51.383491 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 04:46:51.393265 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 04:46:51.408007 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 04:46:51.416273 systemd-resolved[1473]: Positive Trust Anchors: May 13 04:46:51.416973 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 04:46:51.418809 systemd-resolved[1473]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 04:46:51.418956 systemd-resolved[1473]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 04:46:51.425639 systemd-resolved[1473]: Using system hostname 'ci-4081-3-3-n-d261562a0f.novalocal'. May 13 04:46:51.435949 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 04:46:51.436654 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 04:46:51.436816 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 04:46:51.439411 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 04:46:51.441676 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 04:46:51.443750 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 04:46:51.445332 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 04:46:51.445575 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 04:46:51.449319 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 04:46:51.449491 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 04:46:51.453135 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 04:46:51.453384 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 04:46:51.455255 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 04:46:51.456269 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 04:46:51.461777 systemd[1]: Finished ensure-sysext.service. May 13 04:46:51.470127 systemd[1]: Reached target network.target - Network. May 13 04:46:51.472768 systemd[1]: Reached target network-online.target - Network is Online. May 13 04:46:51.473242 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 04:46:51.474819 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 04:46:51.474907 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 04:46:51.484944 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 13 04:46:51.488086 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 04:46:51.541740 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 13 04:46:51.545255 systemd[1]: Reached target sysinit.target - System Initialization. May 13 04:46:51.545812 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
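The positive trust anchor is the DNS root zone's key-signing-key DS record. Its fields decode as key tag 20326 (the 2017 root KSK), algorithm 8 (RSA/SHA-256), and digest type 2 (a SHA-256 digest of the DNSKEY). Parsing the exact record text from the log:

    # Decode the DNSSEC trust anchor logged by systemd-resolved above.
    record = (". IN DS 20326 8 2 "
              "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

    owner, klass, rtype, key_tag, algorithm, digest_type, digest = record.split()
    assert (klass, rtype) == ("IN", "DS")
    print("owner:", owner)              # '.' -- the DNS root zone
    print("key tag:", key_tag)          # 20326 (the 2017 root KSK)
    print("algorithm:", algorithm)      # 8 = RSASHA256
    print("digest type:", digest_type)  # 2 = SHA-256
    print("digest bytes:", len(bytes.fromhex(digest)))  # 32
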
May 13 04:46:51.546324 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 04:46:51.548225 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 04:46:51.550220 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 04:46:51.550428 systemd[1]: Reached target paths.target - Path Units. May 13 04:46:51.552316 systemd[1]: Reached target time-set.target - System Time Set. May 13 04:46:51.554360 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 04:46:51.556378 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 04:46:51.558222 systemd[1]: Reached target timers.target - Timer Units. May 13 04:46:51.560977 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 04:46:51.567165 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 04:46:51.572760 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 04:46:51.573848 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 04:46:51.574332 systemd[1]: Reached target sockets.target - Socket Units. May 13 04:46:51.577195 systemd[1]: Reached target basic.target - Basic System. May 13 04:46:51.579570 systemd[1]: System is tainted: cgroupsv1 May 13 04:46:51.579662 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 04:46:51.579748 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 04:46:51.586845 systemd[1]: Starting containerd.service - containerd container runtime... May 13 04:46:51.595281 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 13 04:46:51.600850 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 04:46:51.609312 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 04:46:51.628894 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 04:46:51.632068 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 04:46:51.641079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 04:46:51.650336 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 04:46:51.659250 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 04:46:52.089157 systemd-timesyncd[1534]: Contacted time server 72.30.35.88:123 (0.flatcar.pool.ntp.org). May 13 04:46:52.089220 systemd-timesyncd[1534]: Initial clock synchronization to Tue 2025-05-13 04:46:52.089059 UTC. May 13 04:46:52.089636 systemd-resolved[1473]: Clock change detected. Flushing caches. May 13 04:46:52.093219 dbus-daemon[1541]: [system] SELinux support is enabled May 13 04:46:52.095247 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 04:46:52.097822 jq[1542]: false May 13 04:46:52.100916 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 04:46:52.109291 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 04:46:52.123622 systemd[1]: Starting systemd-logind.service - User Login Management... 
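"System is tainted: cgroupsv1" is systemd flagging the legacy cgroup hierarchy, consistent with the /etc/flatcar-cgroupv1 marker Ignition wrote earlier. A common detection idiom (not Flatcar-specific) for checking which hierarchy a host actually booted with:

    # Detect the cgroup hierarchy in use -- relevant to the 'tainted:
    # cgroupsv1' notice above. cgroup2 exposes 'cgroup.controllers' at the
    # mount root; a legacy setup mounts per-controller v1 trees instead.
    import os

    def cgroup_mode(root="/sys/fs/cgroup"):
        if os.path.exists(os.path.join(root, "cgroup.controllers")):
            return "unified (cgroup v2)"
        if os.path.isdir(os.path.join(root, "memory")):
            return "legacy (cgroup v1)"
        return "unknown"

    print(cgroup_mode())
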
May 13 04:46:52.127632 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 04:46:52.133208 extend-filesystems[1545]: Found loop4 May 13 04:46:52.133208 extend-filesystems[1545]: Found loop5 May 13 04:46:52.133208 extend-filesystems[1545]: Found loop6 May 13 04:46:52.133208 extend-filesystems[1545]: Found loop7 May 13 04:46:52.133208 extend-filesystems[1545]: Found vda May 13 04:46:52.133208 extend-filesystems[1545]: Found vda1 May 13 04:46:52.133208 extend-filesystems[1545]: Found vda2 May 13 04:46:52.133208 extend-filesystems[1545]: Found vda3 May 13 04:46:52.133208 extend-filesystems[1545]: Found usr May 13 04:46:52.133208 extend-filesystems[1545]: Found vda4 May 13 04:46:52.133208 extend-filesystems[1545]: Found vda6 May 13 04:46:52.133208 extend-filesystems[1545]: Found vda7 May 13 04:46:52.133208 extend-filesystems[1545]: Found vda9 May 13 04:46:52.133208 extend-filesystems[1545]: Checking size of /dev/vda9 May 13 04:46:52.139658 systemd[1]: Starting update-engine.service - Update Engine... May 13 04:46:52.185224 extend-filesystems[1545]: Resized partition /dev/vda9 May 13 04:46:52.164568 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 04:46:52.169366 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 04:46:52.188865 extend-filesystems[1575]: resize2fs 1.47.1 (20-May-2024) May 13 04:46:52.194240 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 04:46:52.194461 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 04:46:52.198591 systemd[1]: motdgen.service: Deactivated successfully. May 13 04:46:52.198804 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 04:46:52.212163 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1204) May 13 04:46:52.214022 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks May 13 04:46:52.217287 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 04:46:52.217537 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 04:46:52.221787 jq[1571]: true May 13 04:46:52.227149 kernel: EXT4-fs (vda9): resized filesystem to 2014203 May 13 04:46:52.241643 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 04:46:52.290043 update_engine[1564]: I20250513 04:46:52.240648 1564 main.cc:92] Flatcar Update Engine starting May 13 04:46:52.290043 update_engine[1564]: I20250513 04:46:52.243685 1564 update_check_scheduler.cc:74] Next update check in 7m25s May 13 04:46:52.272340 (ntainerd)[1588]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 04:46:52.280411 systemd[1]: Started update-engine.service - Update Engine. May 13 04:46:52.281474 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 04:46:52.290714 jq[1584]: true May 13 04:46:52.281497 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
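The online resize grows /dev/vda9 from 1617920 to 2014203 blocks; at the 4 KiB block size reported in the following lines, that is about 6.2 GiB growing to 7.7 GiB. The arithmetic:

    # Size math for the ext4 resize of /dev/vda9 logged above (4 KiB blocks).
    BLOCK = 4096
    old_blocks, new_blocks = 1_617_920, 2_014_203

    gib = lambda blocks: blocks * BLOCK / 2**30
    print(f"before: {gib(old_blocks):.2f} GiB")               # ~6.17 GiB
    print(f"after:  {gib(new_blocks):.2f} GiB")               # ~7.68 GiB
    print(f"growth: {gib(new_blocks - old_blocks):.2f} GiB")  # ~1.51 GiB
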
May 13 04:46:52.281935 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 04:46:52.281950 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 04:46:52.284964 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 04:46:52.298139 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 04:46:52.306701 tar[1581]: linux-amd64/helm May 13 04:46:52.330769 extend-filesystems[1575]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 04:46:52.330769 extend-filesystems[1575]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 04:46:52.330769 extend-filesystems[1575]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. May 13 04:46:52.329892 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 04:46:52.360384 extend-filesystems[1545]: Resized filesystem in /dev/vda9 May 13 04:46:52.330185 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 04:46:52.331756 systemd-logind[1557]: New seat seat0. May 13 04:46:52.338148 systemd-logind[1557]: Watching system buttons on /dev/input/event1 (Power Button) May 13 04:46:52.338165 systemd-logind[1557]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 04:46:52.338352 systemd[1]: Started systemd-logind.service - User Login Management. May 13 04:46:52.434283 bash[1614]: Updated "/home/core/.ssh/authorized_keys" May 13 04:46:52.435170 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 04:46:52.452150 systemd[1]: Starting sshkeys.service... May 13 04:46:52.480781 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 13 04:46:52.491469 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 13 04:46:52.497692 locksmithd[1612]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 04:46:52.756633 containerd[1588]: time="2025-05-13T04:46:52.756534674Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 13 04:46:52.805598 sshd_keygen[1597]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 04:46:52.815155 containerd[1588]: time="2025-05-13T04:46:52.814413906Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 04:46:52.819003 containerd[1588]: time="2025-05-13T04:46:52.817924480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 04:46:52.819003 containerd[1588]: time="2025-05-13T04:46:52.817966279Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 04:46:52.819003 containerd[1588]: time="2025-05-13T04:46:52.818010161Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 May 13 04:46:52.819003 containerd[1588]: time="2025-05-13T04:46:52.818161886Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 13 04:46:52.819003 containerd[1588]: time="2025-05-13T04:46:52.818183606Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 13 04:46:52.819003 containerd[1588]: time="2025-05-13T04:46:52.818248228Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 13 04:46:52.819003 containerd[1588]: time="2025-05-13T04:46:52.818264709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 04:46:52.819003 containerd[1588]: time="2025-05-13T04:46:52.818478019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 04:46:52.819003 containerd[1588]: time="2025-05-13T04:46:52.818498307Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 13 04:46:52.819003 containerd[1588]: time="2025-05-13T04:46:52.818513786Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 13 04:46:52.819003 containerd[1588]: time="2025-05-13T04:46:52.818525337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 04:46:52.819264 containerd[1588]: time="2025-05-13T04:46:52.818604125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 04:46:52.819264 containerd[1588]: time="2025-05-13T04:46:52.818809330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 04:46:52.819264 containerd[1588]: time="2025-05-13T04:46:52.818949313Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 04:46:52.819264 containerd[1588]: time="2025-05-13T04:46:52.818967226Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 04:46:52.820273 containerd[1588]: time="2025-05-13T04:46:52.820255182Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 13 04:46:52.820375 containerd[1588]: time="2025-05-13T04:46:52.820359527Z" level=info msg="metadata content store policy set" policy=shared May 13 04:46:52.838875 containerd[1588]: time="2025-05-13T04:46:52.838815141Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 04:46:52.839045 containerd[1588]: time="2025-05-13T04:46:52.839027249Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 04:46:52.839416 containerd[1588]: time="2025-05-13T04:46:52.839140000Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 May 13 04:46:52.839416 containerd[1588]: time="2025-05-13T04:46:52.839162472Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 13 04:46:52.839416 containerd[1588]: time="2025-05-13T04:46:52.839180686Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 13 04:46:52.839416 containerd[1588]: time="2025-05-13T04:46:52.839342360Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 04:46:52.841386 containerd[1588]: time="2025-05-13T04:46:52.840047071Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 04:46:52.841386 containerd[1588]: time="2025-05-13T04:46:52.840156517Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 13 04:46:52.841386 containerd[1588]: time="2025-05-13T04:46:52.840174691Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 13 04:46:52.841386 containerd[1588]: time="2025-05-13T04:46:52.840188256Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 13 04:46:52.841386 containerd[1588]: time="2025-05-13T04:46:52.840203385Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 04:46:52.841386 containerd[1588]: time="2025-05-13T04:46:52.840217391Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 13 04:46:52.841386 containerd[1588]: time="2025-05-13T04:46:52.840230896Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 04:46:52.841386 containerd[1588]: time="2025-05-13T04:46:52.840248279Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 04:46:52.841386 containerd[1588]: time="2025-05-13T04:46:52.840274388Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 13 04:46:52.841386 containerd[1588]: time="2025-05-13T04:46:52.840288845Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 04:46:52.841386 containerd[1588]: time="2025-05-13T04:46:52.840302060Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 04:46:52.841386 containerd[1588]: time="2025-05-13T04:46:52.840315324Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 04:46:52.841386 containerd[1588]: time="2025-05-13T04:46:52.840335753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 13 04:46:52.841386 containerd[1588]: time="2025-05-13T04:46:52.840350250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 04:46:52.841680 containerd[1588]: time="2025-05-13T04:46:52.840363595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 04:46:52.841680 containerd[1588]: time="2025-05-13T04:46:52.840377341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 May 13 04:46:52.841680 containerd[1588]: time="2025-05-13T04:46:52.840389944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 04:46:52.841680 containerd[1588]: time="2025-05-13T04:46:52.840403690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 04:46:52.841680 containerd[1588]: time="2025-05-13T04:46:52.840417436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 04:46:52.841680 containerd[1588]: time="2025-05-13T04:46:52.840430871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 04:46:52.841680 containerd[1588]: time="2025-05-13T04:46:52.840444096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 13 04:46:52.841680 containerd[1588]: time="2025-05-13T04:46:52.840458904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 13 04:46:52.841680 containerd[1588]: time="2025-05-13T04:46:52.840471768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 04:46:52.841680 containerd[1588]: time="2025-05-13T04:46:52.840484261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 13 04:46:52.841680 containerd[1588]: time="2025-05-13T04:46:52.840496604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 04:46:52.841680 containerd[1588]: time="2025-05-13T04:46:52.840513666Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 13 04:46:52.841680 containerd[1588]: time="2025-05-13T04:46:52.840538583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 13 04:46:52.841680 containerd[1588]: time="2025-05-13T04:46:52.840553752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 13 04:46:52.841680 containerd[1588]: time="2025-05-13T04:46:52.840569171Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 04:46:52.842003 containerd[1588]: time="2025-05-13T04:46:52.840606731Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 04:46:52.842003 containerd[1588]: time="2025-05-13T04:46:52.840625276Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 13 04:46:52.842003 containerd[1588]: time="2025-05-13T04:46:52.840637429Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 04:46:52.842003 containerd[1588]: time="2025-05-13T04:46:52.840650854Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 13 04:46:52.842003 containerd[1588]: time="2025-05-13T04:46:52.840661584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 04:46:52.842003 containerd[1588]: time="2025-05-13T04:46:52.840674017Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 May 13 04:46:52.842003 containerd[1588]: time="2025-05-13T04:46:52.840687563Z" level=info msg="NRI interface is disabled by configuration." May 13 04:46:52.842003 containerd[1588]: time="2025-05-13T04:46:52.840698744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 13 04:46:52.842174 containerd[1588]: time="2025-05-13T04:46:52.841768921Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 04:46:52.842174 containerd[1588]: time="2025-05-13T04:46:52.841863418Z" level=info msg="Connect containerd service" May 13 04:46:52.842174 containerd[1588]: time="2025-05-13T04:46:52.841918972Z" level=info msg="using legacy CRI server" May 13 04:46:52.842174 containerd[1588]: time="2025-05-13T04:46:52.841928440Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 04:46:52.842174 containerd[1588]: time="2025-05-13T04:46:52.842047684Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 04:46:52.846424 
containerd[1588]: time="2025-05-13T04:46:52.842547691Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 04:46:52.846424 containerd[1588]: time="2025-05-13T04:46:52.842831644Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 04:46:52.846424 containerd[1588]: time="2025-05-13T04:46:52.842875125Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 04:46:52.846424 containerd[1588]: time="2025-05-13T04:46:52.842952751Z" level=info msg="Start subscribing containerd event" May 13 04:46:52.846424 containerd[1588]: time="2025-05-13T04:46:52.843008916Z" level=info msg="Start recovering state" May 13 04:46:52.846424 containerd[1588]: time="2025-05-13T04:46:52.843059491Z" level=info msg="Start event monitor" May 13 04:46:52.846424 containerd[1588]: time="2025-05-13T04:46:52.843070512Z" level=info msg="Start snapshots syncer" May 13 04:46:52.846424 containerd[1588]: time="2025-05-13T04:46:52.843079579Z" level=info msg="Start cni network conf syncer for default" May 13 04:46:52.846424 containerd[1588]: time="2025-05-13T04:46:52.843086943Z" level=info msg="Start streaming server" May 13 04:46:52.846424 containerd[1588]: time="2025-05-13T04:46:52.843135814Z" level=info msg="containerd successfully booted in 0.087580s" May 13 04:46:52.843319 systemd[1]: Started containerd.service - containerd container runtime. May 13 04:46:52.847838 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 04:46:52.861408 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 04:46:52.883597 systemd[1]: issuegen.service: Deactivated successfully. May 13 04:46:52.884534 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 04:46:52.896505 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 04:46:52.915087 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 04:46:52.926249 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 04:46:52.937304 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 13 04:46:52.938148 systemd[1]: Reached target getty.target - Login Prompts. May 13 04:46:53.125952 tar[1581]: linux-amd64/LICENSE May 13 04:46:53.125952 tar[1581]: linux-amd64/README.md May 13 04:46:53.136740 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 04:46:53.374595 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 04:46:53.384633 systemd[1]: Started sshd@0-172.24.4.108:22-172.24.4.1:35434.service - OpenSSH per-connection server daemon (172.24.4.1:35434). May 13 04:46:54.347674 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 04:46:54.365800 (kubelet)[1675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 04:46:54.890466 sshd[1664]: Accepted publickey for core from 172.24.4.1 port 35434 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y May 13 04:46:54.894325 sshd[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 04:46:54.925580 systemd-logind[1557]: New session 1 of user core. May 13 04:46:54.929134 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
May 13 04:46:54.943409 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 04:46:54.962874 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 04:46:54.974433 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 04:46:54.981794 (systemd)[1685]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 04:46:55.099423 systemd[1685]: Queued start job for default target default.target. May 13 04:46:55.100002 systemd[1685]: Created slice app.slice - User Application Slice. May 13 04:46:55.100025 systemd[1685]: Reached target paths.target - Paths. May 13 04:46:55.100039 systemd[1685]: Reached target timers.target - Timers. May 13 04:46:55.106068 systemd[1685]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 04:46:55.111683 systemd[1685]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 04:46:55.112317 systemd[1685]: Reached target sockets.target - Sockets. May 13 04:46:55.112333 systemd[1685]: Reached target basic.target - Basic System. May 13 04:46:55.112368 systemd[1685]: Reached target default.target - Main User Target. May 13 04:46:55.112392 systemd[1685]: Startup finished in 122ms. May 13 04:46:55.113179 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 04:46:55.126282 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 04:46:55.651888 systemd[1]: Started sshd@1-172.24.4.108:22-172.24.4.1:58934.service - OpenSSH per-connection server daemon (172.24.4.1:58934). May 13 04:46:55.765682 kubelet[1675]: E0513 04:46:55.765559 1675 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 04:46:55.770879 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 04:46:55.772045 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 04:46:56.977530 sshd[1700]: Accepted publickey for core from 172.24.4.1 port 58934 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y May 13 04:46:56.980595 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 04:46:56.990886 systemd-logind[1557]: New session 2 of user core. May 13 04:46:57.002666 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 04:46:57.523050 sshd[1700]: pam_unix(sshd:session): session closed for user core May 13 04:46:57.538426 systemd[1]: Started sshd@2-172.24.4.108:22-172.24.4.1:58936.service - OpenSSH per-connection server daemon (172.24.4.1:58936). May 13 04:46:57.548249 systemd[1]: sshd@1-172.24.4.108:22-172.24.4.1:58934.service: Deactivated successfully. May 13 04:46:57.552739 systemd[1]: session-2.scope: Deactivated successfully. May 13 04:46:57.555325 systemd-logind[1557]: Session 2 logged out. Waiting for processes to exit. May 13 04:46:57.560551 systemd-logind[1557]: Removed session 2. May 13 04:46:57.994502 login[1656]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 13 04:46:57.994839 login[1657]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 13 04:46:58.005515 systemd-logind[1557]: New session 3 of user core. 
May 13 04:46:58.015635 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 04:46:58.021687 systemd-logind[1557]: New session 4 of user core. May 13 04:46:58.035089 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 04:46:58.901173 sshd[1707]: Accepted publickey for core from 172.24.4.1 port 58936 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y May 13 04:46:58.903656 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 04:46:58.913077 systemd-logind[1557]: New session 5 of user core. May 13 04:46:58.922695 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 04:46:59.118616 coreos-metadata[1539]: May 13 04:46:59.118 WARN failed to locate config-drive, using the metadata service API instead May 13 04:46:59.162184 coreos-metadata[1539]: May 13 04:46:59.161 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 May 13 04:46:59.452906 coreos-metadata[1539]: May 13 04:46:59.452 INFO Fetch successful May 13 04:46:59.452906 coreos-metadata[1539]: May 13 04:46:59.452 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 13 04:46:59.466732 coreos-metadata[1539]: May 13 04:46:59.466 INFO Fetch successful May 13 04:46:59.466732 coreos-metadata[1539]: May 13 04:46:59.466 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 May 13 04:46:59.481823 coreos-metadata[1539]: May 13 04:46:59.481 INFO Fetch successful May 13 04:46:59.481823 coreos-metadata[1539]: May 13 04:46:59.481 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 May 13 04:46:59.495363 coreos-metadata[1539]: May 13 04:46:59.495 INFO Fetch successful May 13 04:46:59.495363 coreos-metadata[1539]: May 13 04:46:59.495 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 May 13 04:46:59.509270 coreos-metadata[1539]: May 13 04:46:59.509 INFO Fetch successful May 13 04:46:59.509270 coreos-metadata[1539]: May 13 04:46:59.509 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 May 13 04:46:59.525135 coreos-metadata[1539]: May 13 04:46:59.525 INFO Fetch successful May 13 04:46:59.534196 sshd[1707]: pam_unix(sshd:session): session closed for user core May 13 04:46:59.547626 systemd[1]: sshd@2-172.24.4.108:22-172.24.4.1:58936.service: Deactivated successfully. May 13 04:46:59.563961 systemd[1]: session-5.scope: Deactivated successfully. May 13 04:46:59.566267 systemd-logind[1557]: Session 5 logged out. Waiting for processes to exit. May 13 04:46:59.573970 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 13 04:46:59.576169 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 04:46:59.577747 systemd-logind[1557]: Removed session 5. 
May 13 04:46:59.598701 coreos-metadata[1627]: May 13 04:46:59.598 WARN failed to locate config-drive, using the metadata service API instead May 13 04:46:59.634144 coreos-metadata[1627]: May 13 04:46:59.634 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 May 13 04:46:59.649748 coreos-metadata[1627]: May 13 04:46:59.649 INFO Fetch successful May 13 04:46:59.649748 coreos-metadata[1627]: May 13 04:46:59.649 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 May 13 04:46:59.662947 coreos-metadata[1627]: May 13 04:46:59.662 INFO Fetch successful May 13 04:46:59.668531 unknown[1627]: wrote ssh authorized keys file for user: core May 13 04:46:59.728267 update-ssh-keys[1761]: Updated "/home/core/.ssh/authorized_keys" May 13 04:46:59.730528 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 13 04:46:59.736295 systemd[1]: Finished sshkeys.service. May 13 04:46:59.749777 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 04:46:59.750205 systemd[1]: Startup finished in 17.143s (kernel) + 11.646s (userspace) = 28.790s. May 13 04:47:06.021951 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 04:47:06.032306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 04:47:06.340125 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 04:47:06.358662 (kubelet)[1779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 04:47:06.515302 kubelet[1779]: E0513 04:47:06.515148 1779 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 04:47:06.523292 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 04:47:06.524546 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 04:47:09.547381 systemd[1]: Started sshd@3-172.24.4.108:22-172.24.4.1:42838.service - OpenSSH per-connection server daemon (172.24.4.1:42838). May 13 04:47:10.711305 sshd[1788]: Accepted publickey for core from 172.24.4.1 port 42838 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y May 13 04:47:10.714039 sshd[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 04:47:10.725651 systemd-logind[1557]: New session 6 of user core. May 13 04:47:10.735646 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 04:47:11.353964 sshd[1788]: pam_unix(sshd:session): session closed for user core May 13 04:47:11.362493 systemd[1]: Started sshd@4-172.24.4.108:22-172.24.4.1:42850.service - OpenSSH per-connection server daemon (172.24.4.1:42850). May 13 04:47:11.365213 systemd[1]: sshd@3-172.24.4.108:22-172.24.4.1:42838.service: Deactivated successfully. May 13 04:47:11.372496 systemd-logind[1557]: Session 6 logged out. Waiting for processes to exit. May 13 04:47:11.374206 systemd[1]: session-6.scope: Deactivated successfully. May 13 04:47:11.379847 systemd-logind[1557]: Removed session 6. 
May 13 04:47:12.645204 sshd[1793]: Accepted publickey for core from 172.24.4.1 port 42850 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y May 13 04:47:12.647867 sshd[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 04:47:12.657523 systemd-logind[1557]: New session 7 of user core. May 13 04:47:12.667451 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 04:47:13.470276 sshd[1793]: pam_unix(sshd:session): session closed for user core May 13 04:47:13.483637 systemd[1]: Started sshd@5-172.24.4.108:22-172.24.4.1:42858.service - OpenSSH per-connection server daemon (172.24.4.1:42858). May 13 04:47:13.486080 systemd[1]: sshd@4-172.24.4.108:22-172.24.4.1:42850.service: Deactivated successfully. May 13 04:47:13.493697 systemd[1]: session-7.scope: Deactivated successfully. May 13 04:47:13.495540 systemd-logind[1557]: Session 7 logged out. Waiting for processes to exit. May 13 04:47:13.500382 systemd-logind[1557]: Removed session 7. May 13 04:47:14.680621 sshd[1801]: Accepted publickey for core from 172.24.4.1 port 42858 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y May 13 04:47:14.683361 sshd[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 04:47:14.694547 systemd-logind[1557]: New session 8 of user core. May 13 04:47:14.704546 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 04:47:15.507293 sshd[1801]: pam_unix(sshd:session): session closed for user core May 13 04:47:15.525651 systemd[1]: Started sshd@6-172.24.4.108:22-172.24.4.1:50392.service - OpenSSH per-connection server daemon (172.24.4.1:50392). May 13 04:47:15.526670 systemd[1]: sshd@5-172.24.4.108:22-172.24.4.1:42858.service: Deactivated successfully. May 13 04:47:15.534913 systemd[1]: session-8.scope: Deactivated successfully. May 13 04:47:15.536862 systemd-logind[1557]: Session 8 logged out. Waiting for processes to exit. May 13 04:47:15.540227 systemd-logind[1557]: Removed session 8. May 13 04:47:16.751920 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 04:47:16.766331 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 04:47:17.035292 sshd[1809]: Accepted publickey for core from 172.24.4.1 port 50392 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y May 13 04:47:17.041403 sshd[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 04:47:17.078439 systemd-logind[1557]: New session 9 of user core. May 13 04:47:17.081733 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 04:47:17.101312 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 04:47:17.106396 (kubelet)[1827]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 04:47:17.190651 kubelet[1827]: E0513 04:47:17.190615 1827 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 04:47:17.193422 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 04:47:17.193667 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 13 04:47:17.523353 sudo[1837]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 04:47:17.524188 sudo[1837]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 04:47:17.544033 sudo[1837]: pam_unix(sudo:session): session closed for user root May 13 04:47:17.761458 sshd[1809]: pam_unix(sshd:session): session closed for user core May 13 04:47:17.776660 systemd[1]: Started sshd@7-172.24.4.108:22-172.24.4.1:50406.service - OpenSSH per-connection server daemon (172.24.4.1:50406). May 13 04:47:17.781910 systemd[1]: sshd@6-172.24.4.108:22-172.24.4.1:50392.service: Deactivated successfully. May 13 04:47:17.796354 systemd[1]: session-9.scope: Deactivated successfully. May 13 04:47:17.798635 systemd-logind[1557]: Session 9 logged out. Waiting for processes to exit. May 13 04:47:17.802272 systemd-logind[1557]: Removed session 9. May 13 04:47:19.020951 sshd[1839]: Accepted publickey for core from 172.24.4.1 port 50406 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y May 13 04:47:19.023942 sshd[1839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 04:47:19.034542 systemd-logind[1557]: New session 10 of user core. May 13 04:47:19.043592 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 04:47:19.536482 sudo[1847]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 04:47:19.537206 sudo[1847]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 04:47:19.544781 sudo[1847]: pam_unix(sudo:session): session closed for user root May 13 04:47:19.555947 sudo[1846]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 13 04:47:19.556770 sudo[1846]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 04:47:19.593632 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 13 04:47:19.598177 auditctl[1850]: No rules May 13 04:47:19.599236 systemd[1]: audit-rules.service: Deactivated successfully. May 13 04:47:19.599704 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 13 04:47:19.612094 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 13 04:47:19.666466 augenrules[1869]: No rules May 13 04:47:19.667757 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 13 04:47:19.669856 sudo[1846]: pam_unix(sudo:session): session closed for user root May 13 04:47:19.846367 sshd[1839]: pam_unix(sshd:session): session closed for user core May 13 04:47:19.861672 systemd[1]: Started sshd@8-172.24.4.108:22-172.24.4.1:50422.service - OpenSSH per-connection server daemon (172.24.4.1:50422). May 13 04:47:19.862861 systemd[1]: sshd@7-172.24.4.108:22-172.24.4.1:50406.service: Deactivated successfully. May 13 04:47:19.876174 systemd-logind[1557]: Session 10 logged out. Waiting for processes to exit. May 13 04:47:19.877664 systemd[1]: session-10.scope: Deactivated successfully. May 13 04:47:19.881242 systemd-logind[1557]: Removed session 10. May 13 04:47:20.880651 sshd[1875]: Accepted publickey for core from 172.24.4.1 port 50422 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y May 13 04:47:20.883262 sshd[1875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 04:47:20.893932 systemd-logind[1557]: New session 11 of user core. 
May 13 04:47:20.904480 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 04:47:21.358350 sudo[1882]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 04:47:21.359027 sudo[1882]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 04:47:21.988363 (dockerd)[1898]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 04:47:21.988432 systemd[1]: Starting docker.service - Docker Application Container Engine... May 13 04:47:22.445727 dockerd[1898]: time="2025-05-13T04:47:22.445343097Z" level=info msg="Starting up" May 13 04:47:22.623177 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1795722443-merged.mount: Deactivated successfully. May 13 04:47:22.907933 dockerd[1898]: time="2025-05-13T04:47:22.907469895Z" level=info msg="Loading containers: start." May 13 04:47:23.081014 kernel: Initializing XFRM netlink socket May 13 04:47:23.165241 systemd-networkd[1217]: docker0: Link UP May 13 04:47:23.180162 dockerd[1898]: time="2025-05-13T04:47:23.180116253Z" level=info msg="Loading containers: done." May 13 04:47:23.200383 dockerd[1898]: time="2025-05-13T04:47:23.200321088Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 04:47:23.200600 dockerd[1898]: time="2025-05-13T04:47:23.200453796Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 13 04:47:23.200600 dockerd[1898]: time="2025-05-13T04:47:23.200566398Z" level=info msg="Daemon has completed initialization" May 13 04:47:23.246331 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 04:47:23.247072 dockerd[1898]: time="2025-05-13T04:47:23.247011706Z" level=info msg="API listen on /run/docker.sock" May 13 04:47:25.134036 containerd[1588]: time="2025-05-13T04:47:25.133678195Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 13 04:47:25.831103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3242251455.mount: Deactivated successfully. May 13 04:47:27.205501 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 13 04:47:27.213185 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 04:47:27.329703 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 04:47:27.337327 (kubelet)[2109]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 04:47:27.603661 kubelet[2109]: E0513 04:47:27.603163 2109 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 04:47:27.606528 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 04:47:27.606741 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 13 04:47:27.921890 containerd[1588]: time="2025-05-13T04:47:27.921537007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:47:27.923039 containerd[1588]: time="2025-05-13T04:47:27.922987403Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674881" May 13 04:47:27.924422 containerd[1588]: time="2025-05-13T04:47:27.924380749Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:47:27.927842 containerd[1588]: time="2025-05-13T04:47:27.927799448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:47:27.930097 containerd[1588]: time="2025-05-13T04:47:27.929117149Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.795399388s" May 13 04:47:27.930097 containerd[1588]: time="2025-05-13T04:47:27.929161293Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 13 04:47:27.955137 containerd[1588]: time="2025-05-13T04:47:27.955110884Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 13 04:47:30.216782 containerd[1588]: time="2025-05-13T04:47:30.215715610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:47:30.216782 containerd[1588]: time="2025-05-13T04:47:30.217046984Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617542" May 13 04:47:30.221795 containerd[1588]: time="2025-05-13T04:47:30.221754735Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:47:30.228175 containerd[1588]: time="2025-05-13T04:47:30.228145956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:47:30.232202 containerd[1588]: time="2025-05-13T04:47:30.232131363Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 2.276821165s" May 13 04:47:30.232297 containerd[1588]: time="2025-05-13T04:47:30.232253126Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 13 
04:47:30.271033 containerd[1588]: time="2025-05-13T04:47:30.270986674Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 13 04:47:31.831807 containerd[1588]: time="2025-05-13T04:47:31.831748028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:47:31.832996 containerd[1588]: time="2025-05-13T04:47:31.832928249Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903690" May 13 04:47:31.834359 containerd[1588]: time="2025-05-13T04:47:31.834311780Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:47:31.837999 containerd[1588]: time="2025-05-13T04:47:31.837650977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:47:31.839843 containerd[1588]: time="2025-05-13T04:47:31.838731197Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.567702562s" May 13 04:47:31.839843 containerd[1588]: time="2025-05-13T04:47:31.838769520Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 13 04:47:31.865251 containerd[1588]: time="2025-05-13T04:47:31.865213740Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 13 04:47:33.314733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4125692811.mount: Deactivated successfully. 
May 13 04:47:34.023587 containerd[1588]: time="2025-05-13T04:47:34.023429732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:47:34.026453 containerd[1588]: time="2025-05-13T04:47:34.026348584Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185825" May 13 04:47:34.028094 containerd[1588]: time="2025-05-13T04:47:34.027912509Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:47:34.034930 containerd[1588]: time="2025-05-13T04:47:34.034832245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:47:34.037900 containerd[1588]: time="2025-05-13T04:47:34.036500390Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 2.171227676s" May 13 04:47:34.037900 containerd[1588]: time="2025-05-13T04:47:34.036587867Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 13 04:47:34.096169 containerd[1588]: time="2025-05-13T04:47:34.096105846Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 04:47:34.758642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1781138033.mount: Deactivated successfully. 
May 13 04:47:35.999118 containerd[1588]: time="2025-05-13T04:47:35.997864077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:47:35.999118 containerd[1588]: time="2025-05-13T04:47:35.999076770Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" May 13 04:47:36.000228 containerd[1588]: time="2025-05-13T04:47:36.000183850Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:47:36.005178 containerd[1588]: time="2025-05-13T04:47:36.003780421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:47:36.005178 containerd[1588]: time="2025-05-13T04:47:36.004989405Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.908537667s" May 13 04:47:36.005178 containerd[1588]: time="2025-05-13T04:47:36.005021476Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 13 04:47:36.028455 containerd[1588]: time="2025-05-13T04:47:36.028404943Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 13 04:47:36.610158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1027297338.mount: Deactivated successfully. 
May 13 04:47:36.623025 containerd[1588]: time="2025-05-13T04:47:36.621619828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:47:36.623374 containerd[1588]: time="2025-05-13T04:47:36.623264260Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" May 13 04:47:36.624715 containerd[1588]: time="2025-05-13T04:47:36.624661391Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:47:36.631849 containerd[1588]: time="2025-05-13T04:47:36.631780942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:47:36.634208 containerd[1588]: time="2025-05-13T04:47:36.634148811Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 605.690167ms" May 13 04:47:36.634426 containerd[1588]: time="2025-05-13T04:47:36.634384340Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 13 04:47:36.680766 containerd[1588]: time="2025-05-13T04:47:36.680685497Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 13 04:47:37.322304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2636712355.mount: Deactivated successfully. May 13 04:47:37.707653 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 13 04:47:37.716341 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 04:47:38.072413 update_engine[1564]: I20250513 04:47:37.975661 1564 update_attempter.cc:509] Updating boot flags... May 13 04:47:38.153022 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2240) May 13 04:47:38.520034 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2238) May 13 04:47:38.581216 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 04:47:38.583962 (kubelet)[2260]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 04:47:38.620009 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2238) May 13 04:47:38.700322 kubelet[2260]: E0513 04:47:38.700276 2260 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 04:47:38.705157 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 04:47:38.705336 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 13 04:47:41.282548 containerd[1588]: time="2025-05-13T04:47:41.282434145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:47:41.283999 containerd[1588]: time="2025-05-13T04:47:41.283814713Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" May 13 04:47:41.285453 containerd[1588]: time="2025-05-13T04:47:41.285391223Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:47:41.291011 containerd[1588]: time="2025-05-13T04:47:41.289436496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:47:41.291266 containerd[1588]: time="2025-05-13T04:47:41.291227923Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.610477451s" May 13 04:47:41.291348 containerd[1588]: time="2025-05-13T04:47:41.291331470Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 13 04:47:45.490210 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 04:47:45.505158 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 04:47:45.526142 systemd[1]: Reloading requested from client PID 2352 ('systemctl') (unit session-11.scope)... May 13 04:47:45.526161 systemd[1]: Reloading... May 13 04:47:45.637037 zram_generator::config[2402]: No configuration found. May 13 04:47:45.785179 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 04:47:45.868271 systemd[1]: Reloading finished in 341 ms. May 13 04:47:45.911282 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 04:47:45.911361 systemd[1]: kubelet.service: Failed with result 'signal'. May 13 04:47:45.911731 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 04:47:45.918154 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 04:47:46.030099 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 04:47:46.044674 (kubelet)[2469]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 04:47:46.302653 kubelet[2469]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 04:47:46.302653 kubelet[2469]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 13 04:47:46.302653 kubelet[2469]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 04:47:46.307438 kubelet[2469]: I0513 04:47:46.307146 2469 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 04:47:46.600947 kubelet[2469]: I0513 04:47:46.600335 2469 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 04:47:46.602013 kubelet[2469]: I0513 04:47:46.601521 2469 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 04:47:46.602147 kubelet[2469]: I0513 04:47:46.602112 2469 server.go:927] "Client rotation is on, will bootstrap in background" May 13 04:47:46.631173 kubelet[2469]: I0513 04:47:46.630935 2469 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 04:47:46.631642 kubelet[2469]: E0513 04:47:46.631625 2469 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.108:6443: connect: connection refused May 13 04:47:46.654560 kubelet[2469]: I0513 04:47:46.654489 2469 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 04:47:46.658135 kubelet[2469]: I0513 04:47:46.658055 2469 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 04:47:46.658536 kubelet[2469]: I0513 04:47:46.658137 2469 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-d261562a0f.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 04:47:46.658660 kubelet[2469]: I0513 04:47:46.658571 2469 topology_manager.go:138] 
"Creating topology manager with none policy" May 13 04:47:46.658660 kubelet[2469]: I0513 04:47:46.658596 2469 container_manager_linux.go:301] "Creating device plugin manager" May 13 04:47:46.658844 kubelet[2469]: I0513 04:47:46.658817 2469 state_mem.go:36] "Initialized new in-memory state store" May 13 04:47:46.660936 kubelet[2469]: I0513 04:47:46.660897 2469 kubelet.go:400] "Attempting to sync node with API server" May 13 04:47:46.660936 kubelet[2469]: I0513 04:47:46.660943 2469 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 04:47:46.662302 kubelet[2469]: I0513 04:47:46.661025 2469 kubelet.go:312] "Adding apiserver pod source" May 13 04:47:46.662302 kubelet[2469]: I0513 04:47:46.661063 2469 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 04:47:46.670905 kubelet[2469]: W0513 04:47:46.670802 2469 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-d261562a0f.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.108:6443: connect: connection refused May 13 04:47:46.671346 kubelet[2469]: E0513 04:47:46.671157 2469 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-d261562a0f.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.108:6443: connect: connection refused May 13 04:47:46.673424 kubelet[2469]: W0513 04:47:46.673352 2469 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.108:6443: connect: connection refused May 13 04:47:46.673818 kubelet[2469]: E0513 04:47:46.673617 2469 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.108:6443: connect: connection refused May 13 04:47:46.674019 kubelet[2469]: I0513 04:47:46.673954 2469 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 13 04:47:46.675768 kubelet[2469]: I0513 04:47:46.675721 2469 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 04:47:46.675866 kubelet[2469]: W0513 04:47:46.675804 2469 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 13 04:47:46.676937 kubelet[2469]: I0513 04:47:46.676908 2469 server.go:1264] "Started kubelet" May 13 04:47:46.679125 kubelet[2469]: I0513 04:47:46.677085 2469 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 04:47:46.679125 kubelet[2469]: I0513 04:47:46.677192 2469 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 04:47:46.679125 kubelet[2469]: I0513 04:47:46.679023 2469 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 04:47:46.679590 kubelet[2469]: I0513 04:47:46.679559 2469 server.go:455] "Adding debug handlers to kubelet server" May 13 04:47:46.685327 kubelet[2469]: E0513 04:47:46.685133 2469 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.108:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.108:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-n-d261562a0f.novalocal.183efccde337449c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-d261562a0f.novalocal,UID:ci-4081-3-3-n-d261562a0f.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-d261562a0f.novalocal,},FirstTimestamp:2025-05-13 04:47:46.676876444 +0000 UTC m=+0.627505911,LastTimestamp:2025-05-13 04:47:46.676876444 +0000 UTC m=+0.627505911,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-d261562a0f.novalocal,}" May 13 04:47:46.686229 kubelet[2469]: I0513 04:47:46.686195 2469 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 04:47:46.690044 kubelet[2469]: E0513 04:47:46.690002 2469 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-d261562a0f.novalocal\" not found" May 13 04:47:46.690044 kubelet[2469]: I0513 04:47:46.690044 2469 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 04:47:46.690214 kubelet[2469]: I0513 04:47:46.690145 2469 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 04:47:46.690214 kubelet[2469]: I0513 04:47:46.690197 2469 reconciler.go:26] "Reconciler: start to sync state" May 13 04:47:46.690594 kubelet[2469]: W0513 04:47:46.690532 2469 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.108:6443: connect: connection refused May 13 04:47:46.690594 kubelet[2469]: E0513 04:47:46.690580 2469 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.108:6443: connect: connection refused May 13 04:47:46.690786 kubelet[2469]: E0513 04:47:46.690744 2469 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-d261562a0f.novalocal?timeout=10s\": dial tcp 172.24.4.108:6443: connect: connection refused" interval="200ms" May 13 04:47:46.692625 kubelet[2469]: E0513 04:47:46.692581 2469 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 04:47:46.694591 kubelet[2469]: I0513 04:47:46.694549 2469 factory.go:221] Registration of the containerd container factory successfully May 13 04:47:46.694591 kubelet[2469]: I0513 04:47:46.694565 2469 factory.go:221] Registration of the systemd container factory successfully May 13 04:47:46.694766 kubelet[2469]: I0513 04:47:46.694616 2469 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 04:47:46.749186 kubelet[2469]: I0513 04:47:46.749097 2469 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 04:47:46.751123 kubelet[2469]: I0513 04:47:46.751096 2469 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 04:47:46.751184 kubelet[2469]: I0513 04:47:46.751127 2469 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 04:47:46.751184 kubelet[2469]: I0513 04:47:46.751161 2469 kubelet.go:2337] "Starting kubelet main sync loop" May 13 04:47:46.751252 kubelet[2469]: E0513 04:47:46.751202 2469 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 04:47:46.757203 kubelet[2469]: W0513 04:47:46.757142 2469 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.108:6443: connect: connection refused May 13 04:47:46.757300 kubelet[2469]: E0513 04:47:46.757209 2469 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.108:6443: connect: connection refused May 13 04:47:46.765475 kubelet[2469]: I0513 04:47:46.765451 2469 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 04:47:46.765475 kubelet[2469]: I0513 04:47:46.765471 2469 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 04:47:46.765558 kubelet[2469]: I0513 04:47:46.765491 2469 state_mem.go:36] "Initialized new in-memory state store" May 13 04:47:46.770096 kubelet[2469]: I0513 04:47:46.770063 2469 policy_none.go:49] "None policy: Start" May 13 04:47:46.771243 kubelet[2469]: I0513 04:47:46.770855 2469 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 04:47:46.771243 kubelet[2469]: I0513 04:47:46.770891 2469 state_mem.go:35] "Initializing new in-memory state store" May 13 04:47:46.775839 kubelet[2469]: I0513 04:47:46.775819 2469 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 04:47:46.776107 kubelet[2469]: I0513 04:47:46.776073 2469 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 04:47:46.776254 kubelet[2469]: I0513 04:47:46.776242 2469 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 04:47:46.778736 kubelet[2469]: E0513 04:47:46.778719 2469 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-n-d261562a0f.novalocal\" not found" May 13 04:47:46.792543 kubelet[2469]: I0513 04:47:46.792506 2469 
kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:46.792943 kubelet[2469]: E0513 04:47:46.792883 2469 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.108:6443/api/v1/nodes\": dial tcp 172.24.4.108:6443: connect: connection refused" node="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:46.851950 kubelet[2469]: I0513 04:47:46.851671 2469 topology_manager.go:215] "Topology Admit Handler" podUID="2e68ade290ae5a782beeb8d4cfd9162c" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:46.857384 kubelet[2469]: I0513 04:47:46.856414 2469 topology_manager.go:215] "Topology Admit Handler" podUID="7ecd444eddf7fb983335292cc6290cf4" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:46.861451 kubelet[2469]: I0513 04:47:46.860803 2469 topology_manager.go:215] "Topology Admit Handler" podUID="d024d65930c4fdd32283819b880cc149" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:46.891824 kubelet[2469]: E0513 04:47:46.891711 2469 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-d261562a0f.novalocal?timeout=10s\": dial tcp 172.24.4.108:6443: connect: connection refused" interval="400ms" May 13 04:47:46.892167 kubelet[2469]: I0513 04:47:46.892064 2469 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7ecd444eddf7fb983335292cc6290cf4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-d261562a0f.novalocal\" (UID: \"7ecd444eddf7fb983335292cc6290cf4\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:46.892167 kubelet[2469]: I0513 04:47:46.892147 2469 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d024d65930c4fdd32283819b880cc149-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-d261562a0f.novalocal\" (UID: \"d024d65930c4fdd32283819b880cc149\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:46.892352 kubelet[2469]: I0513 04:47:46.892202 2469 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e68ade290ae5a782beeb8d4cfd9162c-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-d261562a0f.novalocal\" (UID: \"2e68ade290ae5a782beeb8d4cfd9162c\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:46.892352 kubelet[2469]: I0513 04:47:46.892251 2469 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e68ade290ae5a782beeb8d4cfd9162c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-d261562a0f.novalocal\" (UID: \"2e68ade290ae5a782beeb8d4cfd9162c\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:46.892352 kubelet[2469]: I0513 04:47:46.892299 2469 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7ecd444eddf7fb983335292cc6290cf4-ca-certs\") pod 
\"kube-controller-manager-ci-4081-3-3-n-d261562a0f.novalocal\" (UID: \"7ecd444eddf7fb983335292cc6290cf4\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:46.892352 kubelet[2469]: I0513 04:47:46.892345 2469 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7ecd444eddf7fb983335292cc6290cf4-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-d261562a0f.novalocal\" (UID: \"7ecd444eddf7fb983335292cc6290cf4\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:46.892599 kubelet[2469]: I0513 04:47:46.892393 2469 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7ecd444eddf7fb983335292cc6290cf4-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-d261562a0f.novalocal\" (UID: \"7ecd444eddf7fb983335292cc6290cf4\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:46.892599 kubelet[2469]: I0513 04:47:46.892440 2469 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7ecd444eddf7fb983335292cc6290cf4-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-d261562a0f.novalocal\" (UID: \"7ecd444eddf7fb983335292cc6290cf4\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:46.892599 kubelet[2469]: I0513 04:47:46.892485 2469 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e68ade290ae5a782beeb8d4cfd9162c-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-d261562a0f.novalocal\" (UID: \"2e68ade290ae5a782beeb8d4cfd9162c\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:46.997584 kubelet[2469]: I0513 04:47:46.997510 2469 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:46.998329 kubelet[2469]: E0513 04:47:46.998230 2469 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.108:6443/api/v1/nodes\": dial tcp 172.24.4.108:6443: connect: connection refused" node="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:47.168235 containerd[1588]: time="2025-05-13T04:47:47.168047895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-d261562a0f.novalocal,Uid:2e68ade290ae5a782beeb8d4cfd9162c,Namespace:kube-system,Attempt:0,}" May 13 04:47:47.179392 containerd[1588]: time="2025-05-13T04:47:47.178763075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-d261562a0f.novalocal,Uid:d024d65930c4fdd32283819b880cc149,Namespace:kube-system,Attempt:0,}" May 13 04:47:47.179392 containerd[1588]: time="2025-05-13T04:47:47.178812529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-d261562a0f.novalocal,Uid:7ecd444eddf7fb983335292cc6290cf4,Namespace:kube-system,Attempt:0,}" May 13 04:47:47.293623 kubelet[2469]: E0513 04:47:47.293526 2469 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-d261562a0f.novalocal?timeout=10s\": dial tcp 172.24.4.108:6443: connect: connection refused" 
interval="800ms" May 13 04:47:47.402252 kubelet[2469]: I0513 04:47:47.401677 2469 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:47.403291 kubelet[2469]: E0513 04:47:47.403218 2469 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.108:6443/api/v1/nodes\": dial tcp 172.24.4.108:6443: connect: connection refused" node="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:47.520718 kubelet[2469]: W0513 04:47:47.520597 2469 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.108:6443: connect: connection refused May 13 04:47:47.520935 kubelet[2469]: E0513 04:47:47.520734 2469 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.108:6443: connect: connection refused May 13 04:47:47.615128 kubelet[2469]: W0513 04:47:47.614969 2469 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-d261562a0f.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.108:6443: connect: connection refused May 13 04:47:47.615390 kubelet[2469]: E0513 04:47:47.615168 2469 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-d261562a0f.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.108:6443: connect: connection refused May 13 04:47:47.828461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1911983832.mount: Deactivated successfully. 
May 13 04:47:47.839465 containerd[1588]: time="2025-05-13T04:47:47.839313734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 04:47:47.842460 containerd[1588]: time="2025-05-13T04:47:47.842359330Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 04:47:47.843719 containerd[1588]: time="2025-05-13T04:47:47.843647373Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 04:47:47.845761 containerd[1588]: time="2025-05-13T04:47:47.845660468Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 04:47:47.849113 containerd[1588]: time="2025-05-13T04:47:47.848854635Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" May 13 04:47:47.851035 containerd[1588]: time="2025-05-13T04:47:47.850910891Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 04:47:47.852146 containerd[1588]: time="2025-05-13T04:47:47.852046807Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 04:47:47.859907 containerd[1588]: time="2025-05-13T04:47:47.859807894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 04:47:47.866510 containerd[1588]: time="2025-05-13T04:47:47.866435019Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 698.220098ms" May 13 04:47:47.876134 containerd[1588]: time="2025-05-13T04:47:47.875566917Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 696.639791ms" May 13 04:47:47.882415 containerd[1588]: time="2025-05-13T04:47:47.882354524Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 703.37519ms" May 13 04:47:48.081355 containerd[1588]: time="2025-05-13T04:47:48.080804179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 04:47:48.081472 containerd[1588]: time="2025-05-13T04:47:48.081202210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 04:47:48.082476 containerd[1588]: time="2025-05-13T04:47:48.081485536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:47:48.082476 containerd[1588]: time="2025-05-13T04:47:48.082168055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:47:48.085182 containerd[1588]: time="2025-05-13T04:47:48.085013911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 04:47:48.085236 containerd[1588]: time="2025-05-13T04:47:48.085197948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 04:47:48.085354 containerd[1588]: time="2025-05-13T04:47:48.085254635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:47:48.085433 containerd[1588]: time="2025-05-13T04:47:48.085398056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:47:48.089308 kubelet[2469]: W0513 04:47:48.088737 2469 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.108:6443: connect: connection refused May 13 04:47:48.089308 kubelet[2469]: E0513 04:47:48.088787 2469 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.108:6443: connect: connection refused May 13 04:47:48.089797 containerd[1588]: time="2025-05-13T04:47:48.089632635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 04:47:48.089797 containerd[1588]: time="2025-05-13T04:47:48.089720371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 04:47:48.089797 containerd[1588]: time="2025-05-13T04:47:48.089753142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:47:48.090259 containerd[1588]: time="2025-05-13T04:47:48.090174899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:47:48.095998 kubelet[2469]: E0513 04:47:48.094536 2469 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-d261562a0f.novalocal?timeout=10s\": dial tcp 172.24.4.108:6443: connect: connection refused" interval="1.6s" May 13 04:47:48.174559 containerd[1588]: time="2025-05-13T04:47:48.174514080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-d261562a0f.novalocal,Uid:d024d65930c4fdd32283819b880cc149,Namespace:kube-system,Attempt:0,} returns sandbox id \"78b5385e64936da740a064299c2a030017556d83dddd91b9606f414567891a5b\"" May 13 04:47:48.182299 containerd[1588]: time="2025-05-13T04:47:48.182263910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-d261562a0f.novalocal,Uid:7ecd444eddf7fb983335292cc6290cf4,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d389fb139f9ae538733465b7df632209897eaceb86ccd6c9ea9c40f0f18d682\"" May 13 04:47:48.183132 containerd[1588]: time="2025-05-13T04:47:48.183106461Z" level=info msg="CreateContainer within sandbox \"78b5385e64936da740a064299c2a030017556d83dddd91b9606f414567891a5b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 04:47:48.185113 containerd[1588]: time="2025-05-13T04:47:48.184367914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-d261562a0f.novalocal,Uid:2e68ade290ae5a782beeb8d4cfd9162c,Namespace:kube-system,Attempt:0,} returns sandbox id \"df5b0f1c8f12ead86492ea7014a4aae7c08418c172cb1266ae1ad7418f37cbf6\"" May 13 04:47:48.186859 containerd[1588]: time="2025-05-13T04:47:48.186540828Z" level=info msg="CreateContainer within sandbox \"2d389fb139f9ae538733465b7df632209897eaceb86ccd6c9ea9c40f0f18d682\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 04:47:48.192172 containerd[1588]: time="2025-05-13T04:47:48.192144083Z" level=info msg="CreateContainer within sandbox \"df5b0f1c8f12ead86492ea7014a4aae7c08418c172cb1266ae1ad7418f37cbf6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 04:47:48.206001 kubelet[2469]: I0513 04:47:48.205719 2469 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:48.206319 kubelet[2469]: E0513 04:47:48.206291 2469 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.108:6443/api/v1/nodes\": dial tcp 172.24.4.108:6443: connect: connection refused" node="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:48.223279 containerd[1588]: time="2025-05-13T04:47:48.223215785Z" level=info msg="CreateContainer within sandbox \"78b5385e64936da740a064299c2a030017556d83dddd91b9606f414567891a5b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"db6b30943fd6a415a7319a604a6433c6c29b1bced352f4ad49e9f810990bb155\"" May 13 04:47:48.223901 kubelet[2469]: W0513 04:47:48.223794 2469 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.108:6443: connect: connection refused May 13 04:47:48.224230 kubelet[2469]: E0513 04:47:48.224189 2469 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://172.24.4.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.108:6443: connect: connection refused May 13 04:47:48.225604 containerd[1588]: time="2025-05-13T04:47:48.225566466Z" level=info msg="StartContainer for \"db6b30943fd6a415a7319a604a6433c6c29b1bced352f4ad49e9f810990bb155\"" May 13 04:47:48.238529 containerd[1588]: time="2025-05-13T04:47:48.238478535Z" level=info msg="CreateContainer within sandbox \"2d389fb139f9ae538733465b7df632209897eaceb86ccd6c9ea9c40f0f18d682\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6fe375982cd43a551f3db0e470515c3517364bbbfd892e6b675986d0ca6db0c8\"" May 13 04:47:48.239108 containerd[1588]: time="2025-05-13T04:47:48.239082115Z" level=info msg="StartContainer for \"6fe375982cd43a551f3db0e470515c3517364bbbfd892e6b675986d0ca6db0c8\"" May 13 04:47:48.244418 containerd[1588]: time="2025-05-13T04:47:48.244378019Z" level=info msg="CreateContainer within sandbox \"df5b0f1c8f12ead86492ea7014a4aae7c08418c172cb1266ae1ad7418f37cbf6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b8532418396976b83082df5849364bf3c7fbf973b9ca5ad858944f0d9e496ddf\"" May 13 04:47:48.245559 containerd[1588]: time="2025-05-13T04:47:48.245542358Z" level=info msg="StartContainer for \"b8532418396976b83082df5849364bf3c7fbf973b9ca5ad858944f0d9e496ddf\"" May 13 04:47:48.359342 containerd[1588]: time="2025-05-13T04:47:48.359231921Z" level=info msg="StartContainer for \"db6b30943fd6a415a7319a604a6433c6c29b1bced352f4ad49e9f810990bb155\" returns successfully" May 13 04:47:48.368873 containerd[1588]: time="2025-05-13T04:47:48.368745501Z" level=info msg="StartContainer for \"b8532418396976b83082df5849364bf3c7fbf973b9ca5ad858944f0d9e496ddf\" returns successfully" May 13 04:47:48.389444 containerd[1588]: time="2025-05-13T04:47:48.389403673Z" level=info msg="StartContainer for \"6fe375982cd43a551f3db0e470515c3517364bbbfd892e6b675986d0ca6db0c8\" returns successfully" May 13 04:47:49.809605 kubelet[2469]: I0513 04:47:49.809556 2469 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:50.530018 kubelet[2469]: E0513 04:47:50.528691 2469 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-3-n-d261562a0f.novalocal\" not found" node="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:50.636218 kubelet[2469]: I0513 04:47:50.636127 2469 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:50.665987 kubelet[2469]: I0513 04:47:50.665303 2469 apiserver.go:52] "Watching apiserver" May 13 04:47:50.691285 kubelet[2469]: I0513 04:47:50.691246 2469 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 04:47:53.143843 kubelet[2469]: W0513 04:47:53.142534 2469 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 04:47:53.143843 kubelet[2469]: W0513 04:47:53.143262 2469 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 04:47:53.284630 systemd[1]: Reloading requested from client PID 2741 ('systemctl') (unit session-11.scope)... May 13 04:47:53.284963 systemd[1]: Reloading... May 13 04:47:53.363156 zram_generator::config[2777]: No configuration found. 
May 13 04:47:53.524681 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 04:47:53.617649 systemd[1]: Reloading finished in 332 ms. May 13 04:47:53.648598 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 04:47:53.649322 kubelet[2469]: I0513 04:47:53.649160 2469 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 04:47:53.661375 systemd[1]: kubelet.service: Deactivated successfully. May 13 04:47:53.661743 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 04:47:53.669307 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 04:47:53.817950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 04:47:53.829329 (kubelet)[2854]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 04:47:53.876323 kubelet[2854]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 04:47:53.876323 kubelet[2854]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 04:47:53.876323 kubelet[2854]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 04:47:53.876755 kubelet[2854]: I0513 04:47:53.876365 2854 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 04:47:53.881841 kubelet[2854]: I0513 04:47:53.881813 2854 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 04:47:53.881841 kubelet[2854]: I0513 04:47:53.881837 2854 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 04:47:53.882094 kubelet[2854]: I0513 04:47:53.882080 2854 server.go:927] "Client rotation is on, will bootstrap in background" May 13 04:47:53.883640 kubelet[2854]: I0513 04:47:53.883624 2854 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 04:47:53.885439 kubelet[2854]: I0513 04:47:53.884734 2854 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 04:47:53.900239 kubelet[2854]: I0513 04:47:53.900217 2854 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 04:47:53.900890 kubelet[2854]: I0513 04:47:53.900863 2854 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 04:47:53.901297 kubelet[2854]: I0513 04:47:53.900949 2854 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-d261562a0f.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 04:47:53.901446 kubelet[2854]: I0513 04:47:53.901435 2854 topology_manager.go:138] "Creating topology manager with none policy" May 13 04:47:53.901544 kubelet[2854]: I0513 04:47:53.901534 2854 container_manager_linux.go:301] "Creating device plugin manager" May 13 04:47:53.901668 kubelet[2854]: I0513 04:47:53.901658 2854 state_mem.go:36] "Initialized new in-memory state store" May 13 04:47:53.902411 kubelet[2854]: I0513 04:47:53.902387 2854 kubelet.go:400] "Attempting to sync node with API server" May 13 04:47:53.902516 kubelet[2854]: I0513 04:47:53.902505 2854 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 04:47:53.902640 kubelet[2854]: I0513 04:47:53.902605 2854 kubelet.go:312] "Adding apiserver pod source" May 13 04:47:53.902740 kubelet[2854]: I0513 04:47:53.902729 2854 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 04:47:53.906008 kubelet[2854]: I0513 04:47:53.904249 2854 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 13 04:47:53.906008 kubelet[2854]: I0513 04:47:53.904415 2854 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 04:47:53.906008 kubelet[2854]: I0513 04:47:53.904802 2854 server.go:1264] "Started kubelet" May 13 04:47:53.907462 kubelet[2854]: I0513 04:47:53.906794 2854 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 04:47:53.915017 kubelet[2854]: I0513 04:47:53.914949 2854 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 04:47:53.917100 kubelet[2854]: I0513 04:47:53.917079 2854 server.go:455] 
"Adding debug handlers to kubelet server" May 13 04:47:53.920014 kubelet[2854]: I0513 04:47:53.917922 2854 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 04:47:53.920014 kubelet[2854]: I0513 04:47:53.918470 2854 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 04:47:53.920343 kubelet[2854]: I0513 04:47:53.920324 2854 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 04:47:53.924759 kubelet[2854]: I0513 04:47:53.922725 2854 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 04:47:53.924759 kubelet[2854]: I0513 04:47:53.922851 2854 reconciler.go:26] "Reconciler: start to sync state" May 13 04:47:53.925219 kubelet[2854]: I0513 04:47:53.925190 2854 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 04:47:53.926211 kubelet[2854]: I0513 04:47:53.926192 2854 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 04:47:53.926256 kubelet[2854]: I0513 04:47:53.926247 2854 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 04:47:53.926284 kubelet[2854]: I0513 04:47:53.926267 2854 kubelet.go:2337] "Starting kubelet main sync loop" May 13 04:47:53.926355 kubelet[2854]: E0513 04:47:53.926330 2854 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 04:47:53.936312 kubelet[2854]: I0513 04:47:53.936153 2854 factory.go:221] Registration of the containerd container factory successfully May 13 04:47:53.936491 kubelet[2854]: I0513 04:47:53.936481 2854 factory.go:221] Registration of the systemd container factory successfully May 13 04:47:53.936875 kubelet[2854]: I0513 04:47:53.936709 2854 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 04:47:53.937386 kubelet[2854]: E0513 04:47:53.937358 2854 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 04:47:54.002885 kubelet[2854]: I0513 04:47:54.002864 2854 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 04:47:54.003089 kubelet[2854]: I0513 04:47:54.003077 2854 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 04:47:54.003174 kubelet[2854]: I0513 04:47:54.003166 2854 state_mem.go:36] "Initialized new in-memory state store" May 13 04:47:54.003487 kubelet[2854]: I0513 04:47:54.003421 2854 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 04:47:54.003585 kubelet[2854]: I0513 04:47:54.003542 2854 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 04:47:54.003748 kubelet[2854]: I0513 04:47:54.003651 2854 policy_none.go:49] "None policy: Start" May 13 04:47:54.004503 kubelet[2854]: I0513 04:47:54.004442 2854 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 04:47:54.004803 kubelet[2854]: I0513 04:47:54.004461 2854 state_mem.go:35] "Initializing new in-memory state store" May 13 04:47:54.005044 kubelet[2854]: I0513 04:47:54.005031 2854 state_mem.go:75] "Updated machine memory state" May 13 04:47:54.006989 kubelet[2854]: I0513 04:47:54.006221 2854 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 04:47:54.006989 kubelet[2854]: I0513 04:47:54.006371 2854 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 04:47:54.006989 kubelet[2854]: I0513 04:47:54.006458 2854 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 04:47:54.023728 kubelet[2854]: I0513 04:47:54.023705 2854 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:54.027097 kubelet[2854]: I0513 04:47:54.027067 2854 topology_manager.go:215] "Topology Admit Handler" podUID="2e68ade290ae5a782beeb8d4cfd9162c" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:54.027321 kubelet[2854]: I0513 04:47:54.027308 2854 topology_manager.go:215] "Topology Admit Handler" podUID="7ecd444eddf7fb983335292cc6290cf4" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:54.027436 kubelet[2854]: I0513 04:47:54.027421 2854 topology_manager.go:215] "Topology Admit Handler" podUID="d024d65930c4fdd32283819b880cc149" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:54.036411 kubelet[2854]: W0513 04:47:54.036388 2854 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 04:47:54.043296 kubelet[2854]: W0513 04:47:54.043264 2854 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 04:47:54.043562 kubelet[2854]: E0513 04:47:54.043516 2854 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-3-n-d261562a0f.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:54.043721 kubelet[2854]: W0513 04:47:54.043694 2854 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 04:47:54.043828 kubelet[2854]: E0513 
04:47:54.043815 2854 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-3-n-d261562a0f.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:54.043935 kubelet[2854]: I0513 04:47:54.043752 2854 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:54.044189 kubelet[2854]: I0513 04:47:54.044111 2854 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:54.224162 kubelet[2854]: I0513 04:47:54.223623 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e68ade290ae5a782beeb8d4cfd9162c-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-d261562a0f.novalocal\" (UID: \"2e68ade290ae5a782beeb8d4cfd9162c\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:54.224162 kubelet[2854]: I0513 04:47:54.223711 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e68ade290ae5a782beeb8d4cfd9162c-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-d261562a0f.novalocal\" (UID: \"2e68ade290ae5a782beeb8d4cfd9162c\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:54.224162 kubelet[2854]: I0513 04:47:54.223761 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7ecd444eddf7fb983335292cc6290cf4-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-d261562a0f.novalocal\" (UID: \"7ecd444eddf7fb983335292cc6290cf4\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:54.224162 kubelet[2854]: I0513 04:47:54.223816 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e68ade290ae5a782beeb8d4cfd9162c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-d261562a0f.novalocal\" (UID: \"2e68ade290ae5a782beeb8d4cfd9162c\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:54.224162 kubelet[2854]: I0513 04:47:54.223866 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7ecd444eddf7fb983335292cc6290cf4-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-d261562a0f.novalocal\" (UID: \"7ecd444eddf7fb983335292cc6290cf4\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:54.228309 kubelet[2854]: I0513 04:47:54.223910 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7ecd444eddf7fb983335292cc6290cf4-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-d261562a0f.novalocal\" (UID: \"7ecd444eddf7fb983335292cc6290cf4\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:54.228309 kubelet[2854]: I0513 04:47:54.223957 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7ecd444eddf7fb983335292cc6290cf4-kubeconfig\") pod 
\"kube-controller-manager-ci-4081-3-3-n-d261562a0f.novalocal\" (UID: \"7ecd444eddf7fb983335292cc6290cf4\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:54.228309 kubelet[2854]: I0513 04:47:54.224045 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7ecd444eddf7fb983335292cc6290cf4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-d261562a0f.novalocal\" (UID: \"7ecd444eddf7fb983335292cc6290cf4\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:54.228887 kubelet[2854]: I0513 04:47:54.224096 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d024d65930c4fdd32283819b880cc149-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-d261562a0f.novalocal\" (UID: \"d024d65930c4fdd32283819b880cc149\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:47:54.904275 kubelet[2854]: I0513 04:47:54.904234 2854 apiserver.go:52] "Watching apiserver" May 13 04:47:54.925315 kubelet[2854]: I0513 04:47:54.925226 2854 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 04:47:55.000717 kubelet[2854]: I0513 04:47:55.000632 2854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-n-d261562a0f.novalocal" podStartSLOduration=1.000605303 podStartE2EDuration="1.000605303s" podCreationTimestamp="2025-05-13 04:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 04:47:54.990197367 +0000 UTC m=+1.157457252" watchObservedRunningTime="2025-05-13 04:47:55.000605303 +0000 UTC m=+1.167865198" May 13 04:47:55.018791 kubelet[2854]: I0513 04:47:55.018681 2854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-n-d261562a0f.novalocal" podStartSLOduration=2.018661733 podStartE2EDuration="2.018661733s" podCreationTimestamp="2025-05-13 04:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 04:47:55.000954191 +0000 UTC m=+1.168214086" watchObservedRunningTime="2025-05-13 04:47:55.018661733 +0000 UTC m=+1.185921618" May 13 04:47:55.018949 kubelet[2854]: I0513 04:47:55.018817 2854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-d261562a0f.novalocal" podStartSLOduration=2.018792249 podStartE2EDuration="2.018792249s" podCreationTimestamp="2025-05-13 04:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 04:47:55.01722742 +0000 UTC m=+1.184487305" watchObservedRunningTime="2025-05-13 04:47:55.018792249 +0000 UTC m=+1.186052134" May 13 04:48:00.095853 sudo[1882]: pam_unix(sudo:session): session closed for user root May 13 04:48:00.333481 sshd[1875]: pam_unix(sshd:session): session closed for user core May 13 04:48:00.343578 systemd[1]: sshd@8-172.24.4.108:22-172.24.4.1:50422.service: Deactivated successfully. May 13 04:48:00.350474 systemd[1]: session-11.scope: Deactivated successfully. May 13 04:48:00.353326 systemd-logind[1557]: Session 11 logged out. 
Waiting for processes to exit.
May 13 04:48:00.355602 systemd-logind[1557]: Removed session 11.
May 13 04:48:06.849575 kubelet[2854]: I0513 04:48:06.849362 2854 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 13 04:48:06.851620 containerd[1588]: time="2025-05-13T04:48:06.851571975Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 13 04:48:06.854634 kubelet[2854]: I0513 04:48:06.852489 2854 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 13 04:48:07.830051 kubelet[2854]: I0513 04:48:07.829502 2854 topology_manager.go:215] "Topology Admit Handler" podUID="4835be18-c695-42be-9117-e23b4cf076de" podNamespace="kube-system" podName="kube-proxy-jhns6"
May 13 04:48:07.923370 kubelet[2854]: I0513 04:48:07.923316 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4835be18-c695-42be-9117-e23b4cf076de-xtables-lock\") pod \"kube-proxy-jhns6\" (UID: \"4835be18-c695-42be-9117-e23b4cf076de\") " pod="kube-system/kube-proxy-jhns6"
May 13 04:48:07.923785 kubelet[2854]: I0513 04:48:07.923371 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4835be18-c695-42be-9117-e23b4cf076de-lib-modules\") pod \"kube-proxy-jhns6\" (UID: \"4835be18-c695-42be-9117-e23b4cf076de\") " pod="kube-system/kube-proxy-jhns6"
May 13 04:48:07.923785 kubelet[2854]: I0513 04:48:07.923400 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4835be18-c695-42be-9117-e23b4cf076de-kube-proxy\") pod \"kube-proxy-jhns6\" (UID: \"4835be18-c695-42be-9117-e23b4cf076de\") " pod="kube-system/kube-proxy-jhns6"
May 13 04:48:07.923785 kubelet[2854]: I0513 04:48:07.923425 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62l64\" (UniqueName: \"kubernetes.io/projected/4835be18-c695-42be-9117-e23b4cf076de-kube-api-access-62l64\") pod \"kube-proxy-jhns6\" (UID: \"4835be18-c695-42be-9117-e23b4cf076de\") " pod="kube-system/kube-proxy-jhns6"
May 13 04:48:08.004582 kubelet[2854]: I0513 04:48:08.004530 2854 topology_manager.go:215] "Topology Admit Handler" podUID="726a43a2-0328-4c74-bc6c-d6a62557a1c0" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-ndd2l"
May 13 04:48:08.025398 kubelet[2854]: I0513 04:48:08.023599 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/726a43a2-0328-4c74-bc6c-d6a62557a1c0-var-lib-calico\") pod \"tigera-operator-797db67f8-ndd2l\" (UID: \"726a43a2-0328-4c74-bc6c-d6a62557a1c0\") " pod="tigera-operator/tigera-operator-797db67f8-ndd2l"
May 13 04:48:08.025398 kubelet[2854]: I0513 04:48:08.023634 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4dmd\" (UniqueName: \"kubernetes.io/projected/726a43a2-0328-4c74-bc6c-d6a62557a1c0-kube-api-access-f4dmd\") pod \"tigera-operator-797db67f8-ndd2l\" (UID: \"726a43a2-0328-4c74-bc6c-d6a62557a1c0\") " pod="tigera-operator/tigera-operator-797db67f8-ndd2l"
May 13 04:48:08.148777 containerd[1588]: time="2025-05-13T04:48:08.148632769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jhns6,Uid:4835be18-c695-42be-9117-e23b4cf076de,Namespace:kube-system,Attempt:0,}"
May 13 04:48:08.187828 containerd[1588]: time="2025-05-13T04:48:08.187709501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 04:48:08.188075 containerd[1588]: time="2025-05-13T04:48:08.188017771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 04:48:08.188232 containerd[1588]: time="2025-05-13T04:48:08.188207548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 04:48:08.188585 containerd[1588]: time="2025-05-13T04:48:08.188546364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 04:48:08.242232 containerd[1588]: time="2025-05-13T04:48:08.242181706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jhns6,Uid:4835be18-c695-42be-9117-e23b4cf076de,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd6377d872e709134e6a42ff75c4ca61f6d47e5787994a463a591722ddc91901\""
May 13 04:48:08.246393 containerd[1588]: time="2025-05-13T04:48:08.246350098Z" level=info msg="CreateContainer within sandbox \"bd6377d872e709134e6a42ff75c4ca61f6d47e5787994a463a591722ddc91901\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 13 04:48:08.274591 containerd[1588]: time="2025-05-13T04:48:08.274399319Z" level=info msg="CreateContainer within sandbox \"bd6377d872e709134e6a42ff75c4ca61f6d47e5787994a463a591722ddc91901\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"deaa2212244d69028330a03c67923d9079512b8e614b20f4e77bd8a71973cef5\""
May 13 04:48:08.277215 containerd[1588]: time="2025-05-13T04:48:08.275783349Z" level=info msg="StartContainer for \"deaa2212244d69028330a03c67923d9079512b8e614b20f4e77bd8a71973cef5\""
May 13 04:48:08.315032 containerd[1588]: time="2025-05-13T04:48:08.314860724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-ndd2l,Uid:726a43a2-0328-4c74-bc6c-d6a62557a1c0,Namespace:tigera-operator,Attempt:0,}"
May 13 04:48:08.364118 containerd[1588]: time="2025-05-13T04:48:08.364047044Z" level=info msg="StartContainer for \"deaa2212244d69028330a03c67923d9079512b8e614b20f4e77bd8a71973cef5\" returns successfully"
May 13 04:48:08.375995 containerd[1588]: time="2025-05-13T04:48:08.375674494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 04:48:08.375995 containerd[1588]: time="2025-05-13T04:48:08.375742804Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 04:48:08.375995 containerd[1588]: time="2025-05-13T04:48:08.375791886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 04:48:08.376886 containerd[1588]: time="2025-05-13T04:48:08.376842530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 04:48:08.454480 containerd[1588]: time="2025-05-13T04:48:08.454355682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-ndd2l,Uid:726a43a2-0328-4c74-bc6c-d6a62557a1c0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e71ef5dbb914077cf236e00f1d09213147da2822bf98082fead411bcd87b7bfc\""
May 13 04:48:08.457263 containerd[1588]: time="2025-05-13T04:48:08.456736365Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
May 13 04:48:10.076783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4250956098.mount: Deactivated successfully.
May 13 04:48:11.122163 containerd[1588]: time="2025-05-13T04:48:11.122102443Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 04:48:11.123636 containerd[1588]: time="2025-05-13T04:48:11.123585328Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662"
May 13 04:48:11.126310 containerd[1588]: time="2025-05-13T04:48:11.124848751Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 04:48:11.127655 containerd[1588]: time="2025-05-13T04:48:11.127309835Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 04:48:11.128195 containerd[1588]: time="2025-05-13T04:48:11.128166824Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.671398698s"
May 13 04:48:11.128242 containerd[1588]: time="2025-05-13T04:48:11.128196801Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\""
May 13 04:48:11.131994 containerd[1588]: time="2025-05-13T04:48:11.131948779Z" level=info msg="CreateContainer within sandbox \"e71ef5dbb914077cf236e00f1d09213147da2822bf98082fead411bcd87b7bfc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 13 04:48:11.152863 containerd[1588]: time="2025-05-13T04:48:11.152826389Z" level=info msg="CreateContainer within sandbox \"e71ef5dbb914077cf236e00f1d09213147da2822bf98082fead411bcd87b7bfc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"77679975f8111d50cac48e84169ff98ff425bf44f6389eff49332c50b809a1e0\""
May 13 04:48:11.154339 containerd[1588]: time="2025-05-13T04:48:11.154222061Z" level=info msg="StartContainer for \"77679975f8111d50cac48e84169ff98ff425bf44f6389eff49332c50b809a1e0\""
May 13 04:48:11.187262 systemd[1]: run-containerd-runc-k8s.io-77679975f8111d50cac48e84169ff98ff425bf44f6389eff49332c50b809a1e0-runc.VKp5lw.mount: Deactivated successfully.
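
[Editor's note: the containerd entries above trace one full CRI pod lifecycle: PullImage resolves a tag to a digest-pinned reference, RunPodSandbox returns a sandbox id, then CreateContainer and StartContainer run inside that sandbox. As a rough illustration of the same call sequence, here is a minimal sketch of a CRI client driving those RPCs directly over containerd's socket; the socket path, metadata, and collapsed error handling are illustrative assumptions, not taken from this log.]

// Sketch of the PullImage -> RunPodSandbox -> CreateContainer -> StartContainer
// sequence visible above. Assumes containerd's CRI plugin listens on the
// conventional /run/containerd/containerd.sock; all metadata is copied from the
// tigera-operator entries purely as an example.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func must(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	must(err)
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// PullImage: resolves the tag to the digest-pinned reference the log prints.
	pulled, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.36.7"},
	})
	must(err)

	// RunPodSandbox: returns the sandbox id that the later calls reference.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "tigera-operator-797db67f8-ndd2l",
			Uid:       "726a43a2-0328-4c74-bc6c-d6a62557a1c0",
			Namespace: "tigera-operator",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	must(err)

	// CreateContainer within the sandbox, then StartContainer, as in the log.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "tigera-operator"},
			Image:    &runtimeapi.ImageSpec{Image: pulled.ImageRef},
		},
		SandboxConfig: sandboxCfg,
	})
	must(err)
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
	must(err)
	fmt.Println("started container", ctr.ContainerId)
}
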
May 13 04:48:11.221389 containerd[1588]: time="2025-05-13T04:48:11.221357388Z" level=info msg="StartContainer for \"77679975f8111d50cac48e84169ff98ff425bf44f6389eff49332c50b809a1e0\" returns successfully"
May 13 04:48:12.070281 kubelet[2854]: I0513 04:48:12.068383 2854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jhns6" podStartSLOduration=5.068336784 podStartE2EDuration="5.068336784s" podCreationTimestamp="2025-05-13 04:48:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 04:48:09.035084808 +0000 UTC m=+15.202344793" watchObservedRunningTime="2025-05-13 04:48:12.068336784 +0000 UTC m=+18.235596719"
May 13 04:48:12.070281 kubelet[2854]: I0513 04:48:12.068604 2854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-ndd2l" podStartSLOduration=2.395636057 podStartE2EDuration="5.068589407s" podCreationTimestamp="2025-05-13 04:48:07 +0000 UTC" firstStartedPulling="2025-05-13 04:48:08.456312079 +0000 UTC m=+14.623571964" lastFinishedPulling="2025-05-13 04:48:11.129265429 +0000 UTC m=+17.296525314" observedRunningTime="2025-05-13 04:48:12.06721199 +0000 UTC m=+18.234471925" watchObservedRunningTime="2025-05-13 04:48:12.068589407 +0000 UTC m=+18.235849343"
May 13 04:48:14.630485 kubelet[2854]: I0513 04:48:14.630241 2854 topology_manager.go:215] "Topology Admit Handler" podUID="61ca4322-1521-43b3-8d70-77b93ef13a38" podNamespace="calico-system" podName="calico-typha-5879d4bbff-pcqtm"
May 13 04:48:14.665619 kubelet[2854]: I0513 04:48:14.665261 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61ca4322-1521-43b3-8d70-77b93ef13a38-tigera-ca-bundle\") pod \"calico-typha-5879d4bbff-pcqtm\" (UID: \"61ca4322-1521-43b3-8d70-77b93ef13a38\") " pod="calico-system/calico-typha-5879d4bbff-pcqtm"
May 13 04:48:14.665619 kubelet[2854]: I0513 04:48:14.665329 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgw45\" (UniqueName: \"kubernetes.io/projected/61ca4322-1521-43b3-8d70-77b93ef13a38-kube-api-access-mgw45\") pod \"calico-typha-5879d4bbff-pcqtm\" (UID: \"61ca4322-1521-43b3-8d70-77b93ef13a38\") " pod="calico-system/calico-typha-5879d4bbff-pcqtm"
May 13 04:48:14.665619 kubelet[2854]: I0513 04:48:14.665357 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/61ca4322-1521-43b3-8d70-77b93ef13a38-typha-certs\") pod \"calico-typha-5879d4bbff-pcqtm\" (UID: \"61ca4322-1521-43b3-8d70-77b93ef13a38\") " pod="calico-system/calico-typha-5879d4bbff-pcqtm"
May 13 04:48:14.752778 kubelet[2854]: I0513 04:48:14.751393 2854 topology_manager.go:215] "Topology Admit Handler" podUID="19a24e50-e3ce-4edb-a29f-94cf3d8d03b6" podNamespace="calico-system" podName="calico-node-vgmjf"
May 13 04:48:14.869002 kubelet[2854]: I0513 04:48:14.867141 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19a24e50-e3ce-4edb-a29f-94cf3d8d03b6-lib-modules\") pod \"calico-node-vgmjf\" (UID: \"19a24e50-e3ce-4edb-a29f-94cf3d8d03b6\") " pod="calico-system/calico-node-vgmjf"
May 13 04:48:14.869002 kubelet[2854]: I0513 04:48:14.867233 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/19a24e50-e3ce-4edb-a29f-94cf3d8d03b6-cni-bin-dir\") pod \"calico-node-vgmjf\" (UID: \"19a24e50-e3ce-4edb-a29f-94cf3d8d03b6\") " pod="calico-system/calico-node-vgmjf"
May 13 04:48:14.869002 kubelet[2854]: I0513 04:48:14.867266 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19a24e50-e3ce-4edb-a29f-94cf3d8d03b6-xtables-lock\") pod \"calico-node-vgmjf\" (UID: \"19a24e50-e3ce-4edb-a29f-94cf3d8d03b6\") " pod="calico-system/calico-node-vgmjf"
May 13 04:48:14.869002 kubelet[2854]: I0513 04:48:14.867294 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/19a24e50-e3ce-4edb-a29f-94cf3d8d03b6-node-certs\") pod \"calico-node-vgmjf\" (UID: \"19a24e50-e3ce-4edb-a29f-94cf3d8d03b6\") " pod="calico-system/calico-node-vgmjf"
May 13 04:48:14.869002 kubelet[2854]: I0513 04:48:14.867357 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/19a24e50-e3ce-4edb-a29f-94cf3d8d03b6-cni-log-dir\") pod \"calico-node-vgmjf\" (UID: \"19a24e50-e3ce-4edb-a29f-94cf3d8d03b6\") " pod="calico-system/calico-node-vgmjf"
May 13 04:48:14.869302 kubelet[2854]: I0513 04:48:14.867387 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/19a24e50-e3ce-4edb-a29f-94cf3d8d03b6-flexvol-driver-host\") pod \"calico-node-vgmjf\" (UID: \"19a24e50-e3ce-4edb-a29f-94cf3d8d03b6\") " pod="calico-system/calico-node-vgmjf"
May 13 04:48:14.869302 kubelet[2854]: I0513 04:48:14.867409 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/19a24e50-e3ce-4edb-a29f-94cf3d8d03b6-policysync\") pod \"calico-node-vgmjf\" (UID: \"19a24e50-e3ce-4edb-a29f-94cf3d8d03b6\") " pod="calico-system/calico-node-vgmjf"
May 13 04:48:14.869302 kubelet[2854]: I0513 04:48:14.867447 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/19a24e50-e3ce-4edb-a29f-94cf3d8d03b6-var-lib-calico\") pod \"calico-node-vgmjf\" (UID: \"19a24e50-e3ce-4edb-a29f-94cf3d8d03b6\") " pod="calico-system/calico-node-vgmjf"
May 13 04:48:14.869302 kubelet[2854]: I0513 04:48:14.867473 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/19a24e50-e3ce-4edb-a29f-94cf3d8d03b6-cni-net-dir\") pod \"calico-node-vgmjf\" (UID: \"19a24e50-e3ce-4edb-a29f-94cf3d8d03b6\") " pod="calico-system/calico-node-vgmjf"
May 13 04:48:14.869302 kubelet[2854]: I0513 04:48:14.867497 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/19a24e50-e3ce-4edb-a29f-94cf3d8d03b6-var-run-calico\") pod \"calico-node-vgmjf\" (UID: \"19a24e50-e3ce-4edb-a29f-94cf3d8d03b6\") " pod="calico-system/calico-node-vgmjf"
May 13 04:48:14.869438 kubelet[2854]: I0513 04:48:14.867569 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfwl6\" (UniqueName: \"kubernetes.io/projected/19a24e50-e3ce-4edb-a29f-94cf3d8d03b6-kube-api-access-gfwl6\") pod \"calico-node-vgmjf\" (UID: \"19a24e50-e3ce-4edb-a29f-94cf3d8d03b6\") " pod="calico-system/calico-node-vgmjf"
May 13 04:48:14.869438 kubelet[2854]: I0513 04:48:14.867606 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19a24e50-e3ce-4edb-a29f-94cf3d8d03b6-tigera-ca-bundle\") pod \"calico-node-vgmjf\" (UID: \"19a24e50-e3ce-4edb-a29f-94cf3d8d03b6\") " pod="calico-system/calico-node-vgmjf"
May 13 04:48:14.948841 kubelet[2854]: I0513 04:48:14.943473 2854 topology_manager.go:215] "Topology Admit Handler" podUID="06093158-05c9-457b-b79c-f692f9759a45" podNamespace="calico-system" podName="csi-node-driver-glr49"
May 13 04:48:14.948841 kubelet[2854]: E0513 04:48:14.943867 2854 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-glr49" podUID="06093158-05c9-457b-b79c-f692f9759a45"
May 13 04:48:14.949040 containerd[1588]: time="2025-05-13T04:48:14.946319099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5879d4bbff-pcqtm,Uid:61ca4322-1521-43b3-8d70-77b93ef13a38,Namespace:calico-system,Attempt:0,}"
May 13 04:48:14.972682 kubelet[2854]: I0513 04:48:14.972298 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/06093158-05c9-457b-b79c-f692f9759a45-socket-dir\") pod \"csi-node-driver-glr49\" (UID: \"06093158-05c9-457b-b79c-f692f9759a45\") " pod="calico-system/csi-node-driver-glr49"
May 13 04:48:14.972682 kubelet[2854]: I0513 04:48:14.972500 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/06093158-05c9-457b-b79c-f692f9759a45-registration-dir\") pod \"csi-node-driver-glr49\" (UID: \"06093158-05c9-457b-b79c-f692f9759a45\") " pod="calico-system/csi-node-driver-glr49"
May 13 04:48:14.972891 kubelet[2854]: I0513 04:48:14.972753 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06093158-05c9-457b-b79c-f692f9759a45-kubelet-dir\") pod \"csi-node-driver-glr49\" (UID: \"06093158-05c9-457b-b79c-f692f9759a45\") " pod="calico-system/csi-node-driver-glr49"
May 13 04:48:14.976493 kubelet[2854]: I0513 04:48:14.976374 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/06093158-05c9-457b-b79c-f692f9759a45-varrun\") pod \"csi-node-driver-glr49\" (UID: \"06093158-05c9-457b-b79c-f692f9759a45\") " pod="calico-system/csi-node-driver-glr49"
May 13 04:48:14.992025 kubelet[2854]: I0513 04:48:14.990831 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8sxj\" (UniqueName: \"kubernetes.io/projected/06093158-05c9-457b-b79c-f692f9759a45-kube-api-access-b8sxj\") pod \"csi-node-driver-glr49\" (UID: \"06093158-05c9-457b-b79c-f692f9759a45\") " pod="calico-system/csi-node-driver-glr49"
May 13 04:48:14.996474 kubelet[2854]: E0513 04:48:14.996444 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 04:48:14.996793 kubelet[2854]: W0513 04:48:14.996774 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 04:48:14.996963 kubelet[2854]: E0513 04:48:14.996947 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
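
[Editor's note: the three-line failure pattern that first appears just above repeats for the rest of this section. kubelet's plugin prober execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init; the binary does not exist yet (Calico's pod2daemon-flexvol image, pulled later in the log, is what installs it), so the call yields empty output and unmarshalling that empty output fails with "unexpected end of JSON input". A FlexVolume driver is just an executable that answers init (and the mount/unmount calls) with a JSON status object on stdout. Below is a minimal sketch of a driver that would satisfy the init probe; the capability set is an assumption and this is not Calico's actual uds driver.]

// Hypothetical stand-in for the missing nodeagent~uds "uds" driver binary.
// A FlexVolume driver must print a JSON status object on stdout; empty output
// is exactly what triggers the driver-call.go:262 error above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		os.Exit(1)
	}
	var out driverStatus
	switch os.Args[1] {
	case "init":
		// attach=false is an assumption: the volume here is a local
		// Unix-domain-socket directory, so no attach/detach phase is needed.
		out = driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
	default:
		out = driverStatus{Status: "Not supported", Message: "call not implemented in this sketch"}
	}
	b, _ := json.Marshal(out)
	fmt.Println(string(b))
}
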
output: "", error: unexpected end of JSON input May 13 04:48:14.996793 kubelet[2854]: W0513 04:48:14.996774 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:14.996963 kubelet[2854]: E0513 04:48:14.996947 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:14.998378 kubelet[2854]: E0513 04:48:14.998219 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:14.998378 kubelet[2854]: W0513 04:48:14.998234 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:14.998378 kubelet[2854]: E0513 04:48:14.998252 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:14.998637 kubelet[2854]: E0513 04:48:14.998544 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:14.998637 kubelet[2854]: W0513 04:48:14.998557 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:14.998637 kubelet[2854]: E0513 04:48:14.998574 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:14.999055 kubelet[2854]: E0513 04:48:14.999040 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:14.999246 kubelet[2854]: W0513 04:48:14.999231 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:14.999319 kubelet[2854]: E0513 04:48:14.999308 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.002852 kubelet[2854]: E0513 04:48:15.002650 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.003074 kubelet[2854]: W0513 04:48:15.002924 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.003074 kubelet[2854]: E0513 04:48:15.002941 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 04:48:15.007072 kubelet[2854]: E0513 04:48:15.007044 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.007279 kubelet[2854]: W0513 04:48:15.007256 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.007417 kubelet[2854]: E0513 04:48:15.007390 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.008824 kubelet[2854]: E0513 04:48:15.008668 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.008824 kubelet[2854]: W0513 04:48:15.008682 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.008824 kubelet[2854]: E0513 04:48:15.008706 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.010207 kubelet[2854]: E0513 04:48:15.010193 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.010348 kubelet[2854]: W0513 04:48:15.010332 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.010469 kubelet[2854]: E0513 04:48:15.010455 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.033583 containerd[1588]: time="2025-05-13T04:48:15.033414590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 04:48:15.034176 containerd[1588]: time="2025-05-13T04:48:15.033820673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 04:48:15.034509 containerd[1588]: time="2025-05-13T04:48:15.034405060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:48:15.037420 containerd[1588]: time="2025-05-13T04:48:15.035174515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:48:15.043437 kubelet[2854]: E0513 04:48:15.041730 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.044055 kubelet[2854]: W0513 04:48:15.044035 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.046431 kubelet[2854]: E0513 04:48:15.046170 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.046556 kubelet[2854]: W0513 04:48:15.046525 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.047164 kubelet[2854]: E0513 04:48:15.047139 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.047390 kubelet[2854]: W0513 04:48:15.047369 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.047996 kubelet[2854]: E0513 04:48:15.047717 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.047996 kubelet[2854]: E0513 04:48:15.047746 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.049301 kubelet[2854]: E0513 04:48:15.048759 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.052593 kubelet[2854]: E0513 04:48:15.049409 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.052593 kubelet[2854]: W0513 04:48:15.052310 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.052593 kubelet[2854]: E0513 04:48:15.052348 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.053438 kubelet[2854]: E0513 04:48:15.053171 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.053438 kubelet[2854]: W0513 04:48:15.053184 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.053438 kubelet[2854]: E0513 04:48:15.053303 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 04:48:15.053997 kubelet[2854]: E0513 04:48:15.053777 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.053997 kubelet[2854]: W0513 04:48:15.053789 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.053997 kubelet[2854]: E0513 04:48:15.053808 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.054716 kubelet[2854]: E0513 04:48:15.054305 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.054716 kubelet[2854]: W0513 04:48:15.054318 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.054716 kubelet[2854]: E0513 04:48:15.054333 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.055054 kubelet[2854]: E0513 04:48:15.055042 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.055222 kubelet[2854]: W0513 04:48:15.055208 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.055290 kubelet[2854]: E0513 04:48:15.055278 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.057580 kubelet[2854]: E0513 04:48:15.057555 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.057834 kubelet[2854]: W0513 04:48:15.057672 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.057834 kubelet[2854]: E0513 04:48:15.057689 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.080893 kubelet[2854]: E0513 04:48:15.080776 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.080893 kubelet[2854]: W0513 04:48:15.080815 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.080893 kubelet[2854]: E0513 04:48:15.080833 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 04:48:15.095350 kubelet[2854]: E0513 04:48:15.095297 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.095350 kubelet[2854]: W0513 04:48:15.095332 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.095350 kubelet[2854]: E0513 04:48:15.095358 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.095866 kubelet[2854]: E0513 04:48:15.095840 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.095866 kubelet[2854]: W0513 04:48:15.095853 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.095866 kubelet[2854]: E0513 04:48:15.095866 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.097212 kubelet[2854]: E0513 04:48:15.097021 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.097212 kubelet[2854]: W0513 04:48:15.097039 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.097212 kubelet[2854]: E0513 04:48:15.097052 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.099835 kubelet[2854]: E0513 04:48:15.099799 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.099835 kubelet[2854]: W0513 04:48:15.099825 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.100720 kubelet[2854]: E0513 04:48:15.099888 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.101140 kubelet[2854]: E0513 04:48:15.101098 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.101598 kubelet[2854]: W0513 04:48:15.101489 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.101994 kubelet[2854]: E0513 04:48:15.101775 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 04:48:15.102563 kubelet[2854]: E0513 04:48:15.102528 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.102563 kubelet[2854]: W0513 04:48:15.102557 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.102799 kubelet[2854]: E0513 04:48:15.102619 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.103869 kubelet[2854]: E0513 04:48:15.103739 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.104879 kubelet[2854]: W0513 04:48:15.103756 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.104879 kubelet[2854]: E0513 04:48:15.104837 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.105546 kubelet[2854]: E0513 04:48:15.105364 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.105546 kubelet[2854]: W0513 04:48:15.105384 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.105546 kubelet[2854]: E0513 04:48:15.105513 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.106714 kubelet[2854]: E0513 04:48:15.106289 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.106714 kubelet[2854]: W0513 04:48:15.106309 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.106714 kubelet[2854]: E0513 04:48:15.106682 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.107226 kubelet[2854]: E0513 04:48:15.107207 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.107226 kubelet[2854]: W0513 04:48:15.107222 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.107403 kubelet[2854]: E0513 04:48:15.107309 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 04:48:15.108828 kubelet[2854]: E0513 04:48:15.108790 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.108828 kubelet[2854]: W0513 04:48:15.108814 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.109907 kubelet[2854]: E0513 04:48:15.108962 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.110647 kubelet[2854]: E0513 04:48:15.110190 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.110647 kubelet[2854]: W0513 04:48:15.110641 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.111113 kubelet[2854]: E0513 04:48:15.110934 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.111999 kubelet[2854]: E0513 04:48:15.111952 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.111999 kubelet[2854]: W0513 04:48:15.111990 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.112175 kubelet[2854]: E0513 04:48:15.112079 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.113230 kubelet[2854]: E0513 04:48:15.113057 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.113230 kubelet[2854]: W0513 04:48:15.113080 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.113465 kubelet[2854]: E0513 04:48:15.113356 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.113713 kubelet[2854]: E0513 04:48:15.113612 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.113713 kubelet[2854]: W0513 04:48:15.113624 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.113823 kubelet[2854]: E0513 04:48:15.113802 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 04:48:15.114065 kubelet[2854]: E0513 04:48:15.114030 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.114065 kubelet[2854]: W0513 04:48:15.114042 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.114337 kubelet[2854]: E0513 04:48:15.114276 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.114868 kubelet[2854]: E0513 04:48:15.114857 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.115044 kubelet[2854]: W0513 04:48:15.114946 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.115044 kubelet[2854]: E0513 04:48:15.115014 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.115409 kubelet[2854]: E0513 04:48:15.115330 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.115409 kubelet[2854]: W0513 04:48:15.115341 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.115597 kubelet[2854]: E0513 04:48:15.115515 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.116012 kubelet[2854]: E0513 04:48:15.115941 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.116012 kubelet[2854]: W0513 04:48:15.115952 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.116218 kubelet[2854]: E0513 04:48:15.116103 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.116559 kubelet[2854]: E0513 04:48:15.116488 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.116559 kubelet[2854]: W0513 04:48:15.116511 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.116702 kubelet[2854]: E0513 04:48:15.116656 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 04:48:15.117009 kubelet[2854]: E0513 04:48:15.116957 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.117009 kubelet[2854]: W0513 04:48:15.116968 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.117235 kubelet[2854]: E0513 04:48:15.117222 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.117607 kubelet[2854]: E0513 04:48:15.117595 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.117861 kubelet[2854]: W0513 04:48:15.117775 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.117965 kubelet[2854]: E0513 04:48:15.117916 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.118275 kubelet[2854]: E0513 04:48:15.118234 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.118275 kubelet[2854]: W0513 04:48:15.118246 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.118478 kubelet[2854]: E0513 04:48:15.118465 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.118735 kubelet[2854]: E0513 04:48:15.118687 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.118735 kubelet[2854]: W0513 04:48:15.118698 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.118944 kubelet[2854]: E0513 04:48:15.118900 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:15.119265 kubelet[2854]: E0513 04:48:15.119210 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:15.119265 kubelet[2854]: W0513 04:48:15.119221 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:15.119265 kubelet[2854]: E0513 04:48:15.119231 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
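
[Editor's note: the elided burst is the prober re-running the same exec for every event it observes on the plugin directory. The mechanics are small enough to reproduce: run the driver, capture stdout, unmarshal the result. With a missing executable the output is empty, and unmarshalling an empty buffer yields exactly the error the driver-call.go:262 lines report. The path below is the one from the log; the function name and surrounding structure are illustrative, not kubelet's code.]

// Sketch of what the kubelet-side probe effectively does, and why an absent
// driver produces "unexpected end of JSON input" rather than only an exec error.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func probeFlexVolume(driver string) error {
	// kubelet runs `<driver> init` and parses stdout as JSON.
	out, execErr := exec.Command(driver, "init").Output() // out is empty when the binary is missing
	var status struct {
		Status       string          `json:"status"`
		Capabilities map[string]bool `json:"capabilities"`
	}
	if err := json.Unmarshal(out, &status); err != nil {
		// json.Unmarshal of an empty byte slice returns
		// "unexpected end of JSON input", matching the log lines above.
		return fmt.Errorf("failed to unmarshal output %q: %v (exec error: %v)", out, err, execErr)
	}
	return nil
}

func main() {
	err := probeFlexVolume("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(err)
}
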
May 13 04:48:15.135305 kubelet[2854]: E0513 04:48:15.135098 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 04:48:15.135305 kubelet[2854]: W0513 04:48:15.135119 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 04:48:15.135305 kubelet[2854]: E0513 04:48:15.135137 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 04:48:15.195420 containerd[1588]: time="2025-05-13T04:48:15.195364818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5879d4bbff-pcqtm,Uid:61ca4322-1521-43b3-8d70-77b93ef13a38,Namespace:calico-system,Attempt:0,} returns sandbox id \"1d95e5688159da13ba9c1752bbc6711f835212b5192eba6d974fb8244e1ffad3\""
May 13 04:48:15.198452 containerd[1588]: time="2025-05-13T04:48:15.198221994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\""
May 13 04:48:15.367283 containerd[1588]: time="2025-05-13T04:48:15.366059167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vgmjf,Uid:19a24e50-e3ce-4edb-a29f-94cf3d8d03b6,Namespace:calico-system,Attempt:0,}"
May 13 04:48:15.424014 containerd[1588]: time="2025-05-13T04:48:15.423082327Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 04:48:15.424014 containerd[1588]: time="2025-05-13T04:48:15.423882099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 04:48:15.425071 containerd[1588]: time="2025-05-13T04:48:15.423906255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 04:48:15.425071 containerd[1588]: time="2025-05-13T04:48:15.424245312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 04:48:15.493096 containerd[1588]: time="2025-05-13T04:48:15.493021904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vgmjf,Uid:19a24e50-e3ce-4edb-a29f-94cf3d8d03b6,Namespace:calico-system,Attempt:0,} returns sandbox id \"0ebb339ecf3605037ae7e6067df4b8850b3f7fd5e1fdc106ecf64a252e039716\""
May 13 04:48:16.928389 kubelet[2854]: E0513 04:48:16.927604 2854 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-glr49" podUID="06093158-05c9-457b-b79c-f692f9759a45"
May 13 04:48:18.721099 containerd[1588]: time="2025-05-13T04:48:18.718535565Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 04:48:18.724566 containerd[1588]: time="2025-05-13T04:48:18.722526156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870"
May 13 04:48:18.725654 containerd[1588]: time="2025-05-13T04:48:18.725551015Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 04:48:18.733337 containerd[1588]: time="2025-05-13T04:48:18.732796298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 04:48:18.738517 containerd[1588]: time="2025-05-13T04:48:18.737990950Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 3.539699196s"
May 13 04:48:18.738517 containerd[1588]: time="2025-05-13T04:48:18.738075500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\""
May 13 04:48:18.740547 containerd[1588]: time="2025-05-13T04:48:18.740505732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\""
May 13 04:48:18.767614 containerd[1588]: time="2025-05-13T04:48:18.767074914Z" level=info msg="CreateContainer within sandbox \"1d95e5688159da13ba9c1752bbc6711f835212b5192eba6d974fb8244e1ffad3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
May 13 04:48:18.810226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1276514849.mount: Deactivated successfully.
May 13 04:48:18.816414 containerd[1588]: time="2025-05-13T04:48:18.816244718Z" level=info msg="CreateContainer within sandbox \"1d95e5688159da13ba9c1752bbc6711f835212b5192eba6d974fb8244e1ffad3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"547a90b4e354a24268cc2e4f5853d36bb7beb7d08a61d36deb778d41cc227c27\""
May 13 04:48:18.819349 containerd[1588]: time="2025-05-13T04:48:18.818199327Z" level=info msg="StartContainer for \"547a90b4e354a24268cc2e4f5853d36bb7beb7d08a61d36deb778d41cc227c27\""
May 13 04:48:18.927042 kubelet[2854]: E0513 04:48:18.926588 2854 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-glr49" podUID="06093158-05c9-457b-b79c-f692f9759a45"
May 13 04:48:18.985769 containerd[1588]: time="2025-05-13T04:48:18.984644027Z" level=info msg="StartContainer for \"547a90b4e354a24268cc2e4f5853d36bb7beb7d08a61d36deb778d41cc227c27\" returns successfully"
May 13 04:48:19.133530 kubelet[2854]: I0513 04:48:19.133448 2854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5879d4bbff-pcqtm" podStartSLOduration=1.590344452 podStartE2EDuration="5.133413145s" podCreationTimestamp="2025-05-13 04:48:14 +0000 UTC" firstStartedPulling="2025-05-13 04:48:15.197280085 +0000 UTC m=+21.364539970" lastFinishedPulling="2025-05-13 04:48:18.740348767 +0000 UTC m=+24.907608663" observedRunningTime="2025-05-13 04:48:19.132756463 +0000 UTC m=+25.300016358" watchObservedRunningTime="2025-05-13 04:48:19.133413145 +0000 UTC m=+25.300673040"
May 13 04:48:19.191552 kubelet[2854]: E0513 04:48:19.190728 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 04:48:19.191552 kubelet[2854]: W0513 04:48:19.190839 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 04:48:19.191552 kubelet[2854]: E0513 04:48:19.190866 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 04:48:19.192555 kubelet[2854]: E0513 04:48:19.191746 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 04:48:19.192555 kubelet[2854]: W0513 04:48:19.191760 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 04:48:19.192555 kubelet[2854]: E0513 04:48:19.191778 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
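
[Editor's note: the two bookkeeping quantities in the pod_startup_latency_tracker entries a few lines above are related by the image-pull window: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same span minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling). The calico-typha numbers check out: 5.133413145s minus roughly 3.543s of pulling leaves the logged 1.590s. A sketch of the arithmetic follows; the values are copied from the log entry, but the reconstruction of the formula is inferred from those values, not taken from kubelet's source.]

// Reproduces the calico-typha-5879d4bbff-pcqtm startup-latency arithmetic
// from the wall-clock timestamps in the log entry above.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-05-13 04:48:14 +0000 UTC")
	firstPull := mustParse("2025-05-13 04:48:15.197280085 +0000 UTC")
	lastPull := mustParse("2025-05-13 04:48:18.740348767 +0000 UTC")
	running := mustParse("2025-05-13 04:48:19.133413145 +0000 UTC") // watchObservedRunningTime

	e2e := running.Sub(created)          // podStartE2EDuration = 5.133413145s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: pull time excluded
	// Prints 1.590344463s; the logged 1.590344452 differs by ~11ns because the
	// tracker subtracts the monotonic m=+ readings rather than wall-clock times.
	fmt.Println(e2e, slo)
}
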
[nineteen further identical FlexVolume init probe-failure triplets (04:48:19.193 through 04:48:19.240) omitted]
May 13 04:48:19.241331 kubelet[2854]: E0513 04:48:19.241208 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 04:48:19.241331 kubelet[2854]: W0513 04:48:19.241233 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 04:48:19.241331 kubelet[2854]: E0513 04:48:19.241262 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" May 13 04:48:19.241686 kubelet[2854]: E0513 04:48:19.241523 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:19.241686 kubelet[2854]: W0513 04:48:19.241586 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:19.241686 kubelet[2854]: E0513 04:48:19.241615 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:19.241896 kubelet[2854]: E0513 04:48:19.241882 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:19.242130 kubelet[2854]: W0513 04:48:19.241985 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:19.242130 kubelet[2854]: E0513 04:48:19.242048 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:19.242288 kubelet[2854]: E0513 04:48:19.242275 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:19.242358 kubelet[2854]: W0513 04:48:19.242346 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:19.242442 kubelet[2854]: E0513 04:48:19.242428 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:19.242802 kubelet[2854]: E0513 04:48:19.242687 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:19.242802 kubelet[2854]: W0513 04:48:19.242704 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:19.242802 kubelet[2854]: E0513 04:48:19.242716 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:19.243035 kubelet[2854]: E0513 04:48:19.243021 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:19.243118 kubelet[2854]: W0513 04:48:19.243106 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:19.243290 kubelet[2854]: E0513 04:48:19.243174 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 04:48:19.243421 kubelet[2854]: E0513 04:48:19.243407 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:19.243493 kubelet[2854]: W0513 04:48:19.243481 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:19.243577 kubelet[2854]: E0513 04:48:19.243563 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:19.243852 kubelet[2854]: E0513 04:48:19.243832 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:19.243852 kubelet[2854]: W0513 04:48:19.243848 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:19.243957 kubelet[2854]: E0513 04:48:19.243864 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:19.244093 kubelet[2854]: E0513 04:48:19.244077 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:19.244093 kubelet[2854]: W0513 04:48:19.244088 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:19.244181 kubelet[2854]: E0513 04:48:19.244115 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:19.244595 kubelet[2854]: E0513 04:48:19.244553 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:19.244595 kubelet[2854]: W0513 04:48:19.244572 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:19.244595 kubelet[2854]: E0513 04:48:19.244591 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:19.245162 kubelet[2854]: E0513 04:48:19.245031 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:19.245162 kubelet[2854]: W0513 04:48:19.245045 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:19.245162 kubelet[2854]: E0513 04:48:19.245062 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 04:48:19.245419 kubelet[2854]: E0513 04:48:19.245372 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:19.245419 kubelet[2854]: W0513 04:48:19.245384 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:19.245419 kubelet[2854]: E0513 04:48:19.245395 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:20.116590 kubelet[2854]: I0513 04:48:20.116492 2854 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 04:48:20.212370 kubelet[2854]: E0513 04:48:20.212262 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.212370 kubelet[2854]: W0513 04:48:20.212342 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.212765 kubelet[2854]: E0513 04:48:20.212428 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:20.213051 kubelet[2854]: E0513 04:48:20.212970 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.213051 kubelet[2854]: W0513 04:48:20.213048 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.213314 kubelet[2854]: E0513 04:48:20.213077 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:20.213497 kubelet[2854]: E0513 04:48:20.213456 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.213497 kubelet[2854]: W0513 04:48:20.213480 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.213721 kubelet[2854]: E0513 04:48:20.213502 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:20.213942 kubelet[2854]: E0513 04:48:20.213868 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.213942 kubelet[2854]: W0513 04:48:20.213939 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.214238 kubelet[2854]: E0513 04:48:20.213964 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 04:48:20.214495 kubelet[2854]: E0513 04:48:20.214457 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.214495 kubelet[2854]: W0513 04:48:20.214492 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.214684 kubelet[2854]: E0513 04:48:20.214518 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:20.214941 kubelet[2854]: E0513 04:48:20.214907 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.214941 kubelet[2854]: W0513 04:48:20.214938 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.215175 kubelet[2854]: E0513 04:48:20.214962 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:20.215466 kubelet[2854]: E0513 04:48:20.215431 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.215466 kubelet[2854]: W0513 04:48:20.215463 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.215721 kubelet[2854]: E0513 04:48:20.215489 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:20.215911 kubelet[2854]: E0513 04:48:20.215870 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.215911 kubelet[2854]: W0513 04:48:20.215893 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.216173 kubelet[2854]: E0513 04:48:20.215914 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:20.216408 kubelet[2854]: E0513 04:48:20.216364 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.216523 kubelet[2854]: W0513 04:48:20.216397 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.216523 kubelet[2854]: E0513 04:48:20.216504 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 04:48:20.217178 kubelet[2854]: E0513 04:48:20.217138 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.217178 kubelet[2854]: W0513 04:48:20.217173 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.217427 kubelet[2854]: E0513 04:48:20.217198 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:20.217690 kubelet[2854]: E0513 04:48:20.217655 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.217785 kubelet[2854]: W0513 04:48:20.217693 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.217785 kubelet[2854]: E0513 04:48:20.217719 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:20.218209 kubelet[2854]: E0513 04:48:20.218172 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.218209 kubelet[2854]: W0513 04:48:20.218205 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.218401 kubelet[2854]: E0513 04:48:20.218229 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:20.218623 kubelet[2854]: E0513 04:48:20.218589 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.218739 kubelet[2854]: W0513 04:48:20.218686 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.218739 kubelet[2854]: E0513 04:48:20.218727 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:20.219317 kubelet[2854]: E0513 04:48:20.219203 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.219317 kubelet[2854]: W0513 04:48:20.219304 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.219564 kubelet[2854]: E0513 04:48:20.219333 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 04:48:20.219756 kubelet[2854]: E0513 04:48:20.219692 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.219756 kubelet[2854]: W0513 04:48:20.219715 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.219915 kubelet[2854]: E0513 04:48:20.219816 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:20.246130 kubelet[2854]: E0513 04:48:20.245921 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.246130 kubelet[2854]: W0513 04:48:20.246073 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.246962 kubelet[2854]: E0513 04:48:20.246140 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:20.246962 kubelet[2854]: E0513 04:48:20.246628 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.246962 kubelet[2854]: W0513 04:48:20.246652 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.246962 kubelet[2854]: E0513 04:48:20.246689 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:20.247679 kubelet[2854]: E0513 04:48:20.247144 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.247679 kubelet[2854]: W0513 04:48:20.247169 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.247679 kubelet[2854]: E0513 04:48:20.247206 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:20.247679 kubelet[2854]: E0513 04:48:20.247621 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.247679 kubelet[2854]: W0513 04:48:20.247643 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.247679 kubelet[2854]: E0513 04:48:20.247666 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 04:48:20.249373 kubelet[2854]: E0513 04:48:20.248186 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.249373 kubelet[2854]: W0513 04:48:20.248211 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.249373 kubelet[2854]: E0513 04:48:20.248332 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:20.249373 kubelet[2854]: E0513 04:48:20.248574 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.249373 kubelet[2854]: W0513 04:48:20.248598 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.249373 kubelet[2854]: E0513 04:48:20.248671 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:20.249373 kubelet[2854]: E0513 04:48:20.248960 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.249373 kubelet[2854]: W0513 04:48:20.249041 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.249373 kubelet[2854]: E0513 04:48:20.249108 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:20.251739 kubelet[2854]: E0513 04:48:20.249417 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.251739 kubelet[2854]: W0513 04:48:20.249443 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.251739 kubelet[2854]: E0513 04:48:20.249482 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:20.251739 kubelet[2854]: E0513 04:48:20.249971 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.251739 kubelet[2854]: W0513 04:48:20.250054 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.251739 kubelet[2854]: E0513 04:48:20.250095 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 04:48:20.251739 kubelet[2854]: E0513 04:48:20.250661 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.251739 kubelet[2854]: W0513 04:48:20.250695 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.251739 kubelet[2854]: E0513 04:48:20.250727 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:20.251739 kubelet[2854]: E0513 04:48:20.251190 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.253143 kubelet[2854]: W0513 04:48:20.251215 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.253143 kubelet[2854]: E0513 04:48:20.251275 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:20.253143 kubelet[2854]: E0513 04:48:20.251732 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.253143 kubelet[2854]: W0513 04:48:20.251758 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.253143 kubelet[2854]: E0513 04:48:20.251797 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:20.253143 kubelet[2854]: E0513 04:48:20.252464 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.253143 kubelet[2854]: W0513 04:48:20.252492 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.253143 kubelet[2854]: E0513 04:48:20.252533 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:20.254635 kubelet[2854]: E0513 04:48:20.254029 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.254635 kubelet[2854]: W0513 04:48:20.254066 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.254635 kubelet[2854]: E0513 04:48:20.254115 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 04:48:20.254953 kubelet[2854]: E0513 04:48:20.254667 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.254953 kubelet[2854]: W0513 04:48:20.254694 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.254953 kubelet[2854]: E0513 04:48:20.254771 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:20.255396 kubelet[2854]: E0513 04:48:20.255294 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.255396 kubelet[2854]: W0513 04:48:20.255355 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.255551 kubelet[2854]: E0513 04:48:20.255480 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:20.257312 kubelet[2854]: E0513 04:48:20.257231 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.257312 kubelet[2854]: W0513 04:48:20.257284 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.257312 kubelet[2854]: E0513 04:48:20.257324 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:48:20.257827 kubelet[2854]: E0513 04:48:20.257767 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:48:20.257827 kubelet[2854]: W0513 04:48:20.257807 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:48:20.257827 kubelet[2854]: E0513 04:48:20.257830 2854 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
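The repeating triplet above is the kubelet's volume-plugin prober executing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init before the driver binary exists: the exec fails, stdout stays empty, and unmarshalling "" as JSON yields "unexpected end of JSON input". A minimal sketch in Go of the contract a FlexVolume driver has to satisfy; the type name and stub behaviour are illustrative assumptions, only the JSON response shape ({"status": "Success", ...}) and the vendor~driver/driver directory layout follow the documented FlexVolume convention.

// Hypothetical FlexVolume driver stub: every invocation must print a JSON
// status object to stdout. The real driver here ("uds", installed by Calico's
// pod2daemon flexvol-driver container) also implements volume operations;
// this sketch only answers the "init" probe.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the response shape the kubelet's driver-call.go
// unmarshals; an empty stdout is what produces the
// "unexpected end of JSON input" records above.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus, code int) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
	os.Exit(code)
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Report success and declare that attach/detach is not needed.
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}, 0)
	}
	// Anything this stub does not implement is reported as not supported.
	reply(driverStatus{Status: "Not supported"}, 1)
}

Calico installs the real uds binary through the flexvol-driver init container whose image pull and start appear in the records that follow; once the binary is in place, the init probe returns parseable JSON and the triplet stops.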
May 13 04:48:20.929594 kubelet[2854]: E0513 04:48:20.928661 2854 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-glr49" podUID="06093158-05c9-457b-b79c-f692f9759a45"
May 13 04:48:21.030113 containerd[1588]: time="2025-05-13T04:48:21.030056126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 04:48:21.032310 containerd[1588]: time="2025-05-13T04:48:21.032045190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937"
May 13 04:48:21.034314 containerd[1588]: time="2025-05-13T04:48:21.033119365Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 04:48:21.037009 containerd[1588]: time="2025-05-13T04:48:21.036943605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 04:48:21.037652 containerd[1588]: time="2025-05-13T04:48:21.037610897Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 2.297053618s"
May 13 04:48:21.037775 containerd[1588]: time="2025-05-13T04:48:21.037749818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\""
May 13 04:48:21.041021 containerd[1588]: time="2025-05-13T04:48:21.040958571Z" level=info msg="CreateContainer within sandbox \"0ebb339ecf3605037ae7e6067df4b8850b3f7fd5e1fdc106ecf64a252e039716\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
May 13 04:48:21.067564 containerd[1588]: time="2025-05-13T04:48:21.067390115Z" level=info msg="CreateContainer within sandbox \"0ebb339ecf3605037ae7e6067df4b8850b3f7fd5e1fdc106ecf64a252e039716\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e9f28383c3bcbdea92ece73343054afe2604c5d1712e2fd83a0059a7110591c8\""
May 13 04:48:21.069499 containerd[1588]: time="2025-05-13T04:48:21.069355023Z" level=info msg="StartContainer for \"e9f28383c3bcbdea92ece73343054afe2604c5d1712e2fd83a0059a7110591c8\""
May 13 04:48:21.152816 containerd[1588]: time="2025-05-13T04:48:21.152746828Z" level=info msg="StartContainer for \"e9f28383c3bcbdea92ece73343054afe2604c5d1712e2fd83a0059a7110591c8\" returns successfully"
May 13 04:48:21.198560 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9f28383c3bcbdea92ece73343054afe2604c5d1712e2fd83a0059a7110591c8-rootfs.mount: Deactivated successfully.
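The pull records above are enough for a quick sanity check on registry throughput: 5366937 bytes read over the reported 2.297053618s works out to roughly 2.2 MiB/s. A throwaway calculation using only the two numbers logged verbatim above:

package main

import "fmt"

func main() {
	// Taken verbatim from the containerd records above:
	// "bytes read=5366937" and "... in 2.297053618s".
	const bytesRead = 5366937
	const seconds = 2.297053618
	fmt.Printf("effective pull rate: %.2f MiB/s\n", bytesRead/seconds/(1024*1024))
}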
May 13 04:48:21.851718 containerd[1588]: time="2025-05-13T04:48:21.850925017Z" level=info msg="shim disconnected" id=e9f28383c3bcbdea92ece73343054afe2604c5d1712e2fd83a0059a7110591c8 namespace=k8s.io
May 13 04:48:21.851718 containerd[1588]: time="2025-05-13T04:48:21.851333654Z" level=warning msg="cleaning up after shim disconnected" id=e9f28383c3bcbdea92ece73343054afe2604c5d1712e2fd83a0059a7110591c8 namespace=k8s.io
May 13 04:48:21.851718 containerd[1588]: time="2025-05-13T04:48:21.851383908Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 04:48:22.136642 containerd[1588]: time="2025-05-13T04:48:22.135829478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\""
May 13 04:48:22.927350 kubelet[2854]: E0513 04:48:22.926864 2854 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-glr49" podUID="06093158-05c9-457b-b79c-f692f9759a45"
May 13 04:48:24.927841 kubelet[2854]: E0513 04:48:24.927537 2854 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-glr49" podUID="06093158-05c9-457b-b79c-f692f9759a45"
May 13 04:48:26.927956 kubelet[2854]: E0513 04:48:26.927719 2854 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-glr49" podUID="06093158-05c9-457b-b79c-f692f9759a45"
May 13 04:48:28.262955 containerd[1588]: time="2025-05-13T04:48:28.262774005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 04:48:28.265533 containerd[1588]: time="2025-05-13T04:48:28.264897029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683"
May 13 04:48:28.267226 containerd[1588]: time="2025-05-13T04:48:28.266792315Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 04:48:28.271513 containerd[1588]: time="2025-05-13T04:48:28.271058030Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 04:48:28.272726 containerd[1588]: time="2025-05-13T04:48:28.272018272Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 6.136023333s"
May 13 04:48:28.272726 containerd[1588]: time="2025-05-13T04:48:28.272083614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\""
May 13 04:48:28.279736 containerd[1588]: time="2025-05-13T04:48:28.279278465Z" level=info msg="CreateContainer within sandbox \"0ebb339ecf3605037ae7e6067df4b8850b3f7fd5e1fdc106ecf64a252e039716\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
May 13 04:48:28.327545 containerd[1588]: time="2025-05-13T04:48:28.327459008Z" level=info msg="CreateContainer within sandbox \"0ebb339ecf3605037ae7e6067df4b8850b3f7fd5e1fdc106ecf64a252e039716\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f06b79d76f19814d584140f6e264e19042869e8220e8ab9dbe94b1f39109a3d4\""
May 13 04:48:28.329257 containerd[1588]: time="2025-05-13T04:48:28.329205745Z" level=info msg="StartContainer for \"f06b79d76f19814d584140f6e264e19042869e8220e8ab9dbe94b1f39109a3d4\""
May 13 04:48:28.452100 containerd[1588]: time="2025-05-13T04:48:28.452027001Z" level=info msg="StartContainer for \"f06b79d76f19814d584140f6e264e19042869e8220e8ab9dbe94b1f39109a3d4\" returns successfully"
May 13 04:48:28.928570 kubelet[2854]: E0513 04:48:28.928221 2854 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-glr49" podUID="06093158-05c9-457b-b79c-f692f9759a45"
May 13 04:48:29.694291 containerd[1588]: time="2025-05-13T04:48:29.694166391Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 04:48:29.709109 kubelet[2854]: I0513 04:48:29.707655 2854 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 13 04:48:29.771139 kubelet[2854]: I0513 04:48:29.764748 2854 topology_manager.go:215] "Topology Admit Handler" podUID="9d3536c9-92ad-4cae-9fd7-bb9fd598e9bb" podNamespace="kube-system" podName="coredns-7db6d8ff4d-86s7p"
May 13 04:48:29.773135 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f06b79d76f19814d584140f6e264e19042869e8220e8ab9dbe94b1f39109a3d4-rootfs.mount: Deactivated successfully.
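The reload failure above is expected at this point: the WRITE event was for calico-kubeconfig, which is a credential file rather than a network config, so containerd still finds no conflist in /etc/cni/net.d. The install-cni container that just ran is what drops the network config into place. A rough Go sketch of that idea, with an abbreviated placeholder conflist; the file name and contents here are illustrative assumptions, not Calico's rendered output:

package main

import (
	"log"
	"os"
	"path/filepath"
)

// A trimmed-down CNI network config of the general shape containerd looks
// for in /etc/cni/net.d. Field values are placeholders.
const conflist = `{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "datastore_type": "kubernetes",
      "ipam": {"type": "calico-ipam"},
      "policy": {"type": "k8s"},
      "kubernetes": {"kubeconfig": "/etc/cni/net.d/calico-kubeconfig"}
    },
    {"type": "portmap", "snat": true, "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	dir := "/etc/cni/net.d"
	// Write to a temporary name, then rename, so the config watcher sees
	// one complete file instead of a partial write.
	tmp := filepath.Join(dir, ".10-calico.conflist.tmp")
	if err := os.WriteFile(tmp, []byte(conflist), 0o600); err != nil {
		log.Fatal(err)
	}
	if err := os.Rename(tmp, filepath.Join(dir, "10-calico.conflist")); err != nil {
		log.Fatal(err)
	}
}

The write-then-rename step matters because containerd attempts a reload on every fs change event in the directory, and a rename delivers the finished config atomically.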
May 13 04:48:29.785701 kubelet[2854]: I0513 04:48:29.785649 2854 topology_manager.go:215] "Topology Admit Handler" podUID="bc357230-e098-4af5-9f42-e37066b7df6c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lgz9g"
May 13 04:48:29.790523 kubelet[2854]: I0513 04:48:29.790223 2854 topology_manager.go:215] "Topology Admit Handler" podUID="de615ac3-0a0d-4ec1-8a3d-4e9726892ff6" podNamespace="calico-apiserver" podName="calico-apiserver-6546d6ff4b-lfnfh"
May 13 04:48:29.791584 kubelet[2854]: I0513 04:48:29.791090 2854 topology_manager.go:215] "Topology Admit Handler" podUID="39ecae4e-9a39-49d6-b199-431373bb0575" podNamespace="calico-apiserver" podName="calico-apiserver-6546d6ff4b-rzq8d"
May 13 04:48:29.820784 kubelet[2854]: I0513 04:48:29.820733 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/de615ac3-0a0d-4ec1-8a3d-4e9726892ff6-calico-apiserver-certs\") pod \"calico-apiserver-6546d6ff4b-lfnfh\" (UID: \"de615ac3-0a0d-4ec1-8a3d-4e9726892ff6\") " pod="calico-apiserver/calico-apiserver-6546d6ff4b-lfnfh"
May 13 04:48:29.821053 kubelet[2854]: I0513 04:48:29.821035 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5kl9\" (UniqueName: \"kubernetes.io/projected/de615ac3-0a0d-4ec1-8a3d-4e9726892ff6-kube-api-access-j5kl9\") pod \"calico-apiserver-6546d6ff4b-lfnfh\" (UID: \"de615ac3-0a0d-4ec1-8a3d-4e9726892ff6\") " pod="calico-apiserver/calico-apiserver-6546d6ff4b-lfnfh"
May 13 04:48:29.821196 kubelet[2854]: I0513 04:48:29.821177 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc357230-e098-4af5-9f42-e37066b7df6c-config-volume\") pod \"coredns-7db6d8ff4d-lgz9g\" (UID: \"bc357230-e098-4af5-9f42-e37066b7df6c\") " pod="kube-system/coredns-7db6d8ff4d-lgz9g"
May 13 04:48:29.821340 kubelet[2854]: I0513 04:48:29.821320 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/39ecae4e-9a39-49d6-b199-431373bb0575-calico-apiserver-certs\") pod \"calico-apiserver-6546d6ff4b-rzq8d\" (UID: \"39ecae4e-9a39-49d6-b199-431373bb0575\") " pod="calico-apiserver/calico-apiserver-6546d6ff4b-rzq8d"
May 13 04:48:29.821477 kubelet[2854]: I0513 04:48:29.821459 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d3536c9-92ad-4cae-9fd7-bb9fd598e9bb-config-volume\") pod \"coredns-7db6d8ff4d-86s7p\" (UID: \"9d3536c9-92ad-4cae-9fd7-bb9fd598e9bb\") " pod="kube-system/coredns-7db6d8ff4d-86s7p"
May 13 04:48:29.821642 kubelet[2854]: I0513 04:48:29.821589 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2mvj\" (UniqueName: \"kubernetes.io/projected/9d3536c9-92ad-4cae-9fd7-bb9fd598e9bb-kube-api-access-l2mvj\") pod \"coredns-7db6d8ff4d-86s7p\" (UID: \"9d3536c9-92ad-4cae-9fd7-bb9fd598e9bb\") " pod="kube-system/coredns-7db6d8ff4d-86s7p"
May 13 04:48:29.821642 kubelet[2854]: I0513 04:48:29.821616 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfttl\" (UniqueName: \"kubernetes.io/projected/39ecae4e-9a39-49d6-b199-431373bb0575-kube-api-access-pfttl\") pod \"calico-apiserver-6546d6ff4b-rzq8d\" (UID: \"39ecae4e-9a39-49d6-b199-431373bb0575\") " pod="calico-apiserver/calico-apiserver-6546d6ff4b-rzq8d"
May 13 04:48:29.821852 kubelet[2854]: I0513 04:48:29.821769 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gxh6\" (UniqueName: \"kubernetes.io/projected/bc357230-e098-4af5-9f42-e37066b7df6c-kube-api-access-2gxh6\") pod \"coredns-7db6d8ff4d-lgz9g\" (UID: \"bc357230-e098-4af5-9f42-e37066b7df6c\") " pod="kube-system/coredns-7db6d8ff4d-lgz9g"
May 13 04:48:29.976071 kubelet[2854]: I0513 04:48:29.972567 2854 topology_manager.go:215] "Topology Admit Handler" podUID="d349f609-625b-4e67-ac8a-4cb7771ba298" podNamespace="calico-system" podName="calico-kube-controllers-6d774d8cdb-sghzl"
May 13 04:48:30.023571 kubelet[2854]: I0513 04:48:30.023535 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d349f609-625b-4e67-ac8a-4cb7771ba298-tigera-ca-bundle\") pod \"calico-kube-controllers-6d774d8cdb-sghzl\" (UID: \"d349f609-625b-4e67-ac8a-4cb7771ba298\") " pod="calico-system/calico-kube-controllers-6d774d8cdb-sghzl"
May 13 04:48:30.024142 kubelet[2854]: I0513 04:48:30.024110 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpwfn\" (UniqueName: \"kubernetes.io/projected/d349f609-625b-4e67-ac8a-4cb7771ba298-kube-api-access-zpwfn\") pod \"calico-kube-controllers-6d774d8cdb-sghzl\" (UID: \"d349f609-625b-4e67-ac8a-4cb7771ba298\") " pod="calico-system/calico-kube-controllers-6d774d8cdb-sghzl"
May 13 04:48:30.081039 containerd[1588]: time="2025-05-13T04:48:30.080931411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-86s7p,Uid:9d3536c9-92ad-4cae-9fd7-bb9fd598e9bb,Namespace:kube-system,Attempt:0,}"
May 13 04:48:30.097499 containerd[1588]: time="2025-05-13T04:48:30.096227036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6546d6ff4b-lfnfh,Uid:de615ac3-0a0d-4ec1-8a3d-4e9726892ff6,Namespace:calico-apiserver,Attempt:0,}"
May 13 04:48:30.097499 containerd[1588]: time="2025-05-13T04:48:30.097081251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6546d6ff4b-rzq8d,Uid:39ecae4e-9a39-49d6-b199-431373bb0575,Namespace:calico-apiserver,Attempt:0,}"
May 13 04:48:30.097499 containerd[1588]: time="2025-05-13T04:48:30.097502403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lgz9g,Uid:bc357230-e098-4af5-9f42-e37066b7df6c,Namespace:kube-system,Attempt:0,}"
May 13 04:48:30.561435 containerd[1588]: time="2025-05-13T04:48:30.561347127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d774d8cdb-sghzl,Uid:d349f609-625b-4e67-ac8a-4cb7771ba298,Namespace:calico-system,Attempt:0,}"
May 13 04:48:30.624064 containerd[1588]: time="2025-05-13T04:48:30.623451458Z" level=info msg="shim disconnected" id=f06b79d76f19814d584140f6e264e19042869e8220e8ab9dbe94b1f39109a3d4 namespace=k8s.io
May 13 04:48:30.624064 containerd[1588]: time="2025-05-13T04:48:30.623826573Z" level=warning msg="cleaning up after shim disconnected" id=f06b79d76f19814d584140f6e264e19042869e8220e8ab9dbe94b1f39109a3d4 namespace=k8s.io
May 13 04:48:30.624064 containerd[1588]: time="2025-05-13T04:48:30.623879972Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 04:48:30.820560 containerd[1588]: time="2025-05-13T04:48:30.819567682Z" level=error msg="Failed to destroy network for sandbox \"cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 04:48:30.823948 containerd[1588]: time="2025-05-13T04:48:30.822482491Z" level=error msg="encountered an error cleaning up failed sandbox \"cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 04:48:30.823948 containerd[1588]: time="2025-05-13T04:48:30.822554866Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lgz9g,Uid:bc357230-e098-4af5-9f42-e37066b7df6c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 04:48:30.824088 kubelet[2854]: E0513 04:48:30.823563 2854 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 04:48:30.824088 kubelet[2854]: E0513 04:48:30.823806 2854 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lgz9g"
May 13 04:48:30.824088 kubelet[2854]: E0513 04:48:30.823858 2854 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lgz9g"
May 13 04:48:30.825594 kubelet[2854]: E0513 04:48:30.823926 2854 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-lgz9g_kube-system(bc357230-e098-4af5-9f42-e37066b7df6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-lgz9g_kube-system(bc357230-e098-4af5-9f42-e37066b7df6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lgz9g" podUID="bc357230-e098-4af5-9f42-e37066b7df6c"
May 13 04:48:30.824738 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817-shm.mount: Deactivated successfully.
May 13 04:48:30.868029 containerd[1588]: time="2025-05-13T04:48:30.867942664Z" level=error msg="Failed to destroy network for sandbox \"947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 04:48:30.872309 containerd[1588]: time="2025-05-13T04:48:30.870476025Z" level=error msg="encountered an error cleaning up failed sandbox \"947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 04:48:30.873506 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec-shm.mount: Deactivated successfully.
May 13 04:48:30.874231 containerd[1588]: time="2025-05-13T04:48:30.874099821Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-86s7p,Uid:9d3536c9-92ad-4cae-9fd7-bb9fd598e9bb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 04:48:30.875095 kubelet[2854]: E0513 04:48:30.875036 2854 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 04:48:30.875297 kubelet[2854]: E0513 04:48:30.875112 2854 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-86s7p"
May 13 04:48:30.875297 kubelet[2854]: E0513 04:48:30.875149 2854 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-86s7p"
May 13 04:48:30.875297 kubelet[2854]: E0513 04:48:30.875222 2854 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-86s7p_kube-system(9d3536c9-92ad-4cae-9fd7-bb9fd598e9bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-86s7p_kube-system(9d3536c9-92ad-4cae-9fd7-bb9fd598e9bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-86s7p" podUID="9d3536c9-92ad-4cae-9fd7-bb9fd598e9bb"
May 13 04:48:30.898818 containerd[1588]: time="2025-05-13T04:48:30.898761276Z" level=error msg="Failed to destroy network for sandbox \"2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 04:48:30.902548 containerd[1588]: time="2025-05-13T04:48:30.902388658Z" level=error msg="encountered an error cleaning up failed sandbox \"2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 04:48:30.903111 containerd[1588]: time="2025-05-13T04:48:30.903062468Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6546d6ff4b-lfnfh,Uid:de615ac3-0a0d-4ec1-8a3d-4e9726892ff6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 04:48:30.903903 kubelet[2854]: E0513 04:48:30.903649 2854 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 04:48:30.903903 kubelet[2854]: E0513 04:48:30.903746 2854 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6546d6ff4b-lfnfh"
May 13 04:48:30.903903 kubelet[2854]: E0513 04:48:30.903774 2854 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6546d6ff4b-lfnfh"
May 13 04:48:30.904100 kubelet[2854]: E0513 04:48:30.903821 2854 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6546d6ff4b-lfnfh_calico-apiserver(de615ac3-0a0d-4ec1-8a3d-4e9726892ff6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6546d6ff4b-lfnfh_calico-apiserver(de615ac3-0a0d-4ec1-8a3d-4e9726892ff6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6546d6ff4b-lfnfh" podUID="de615ac3-0a0d-4ec1-8a3d-4e9726892ff6"
May 13 04:48:30.904066 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1-shm.mount: Deactivated successfully.
May 13 04:48:30.913236 containerd[1588]: time="2025-05-13T04:48:30.913181988Z" level=error msg="Failed to destroy network for sandbox \"cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 04:48:30.915845 containerd[1588]: time="2025-05-13T04:48:30.914598296Z" level=error msg="encountered an error cleaning up failed sandbox \"cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 04:48:30.915845 containerd[1588]: time="2025-05-13T04:48:30.914656604Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6546d6ff4b-rzq8d,Uid:39ecae4e-9a39-49d6-b199-431373bb0575,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 04:48:30.916048 kubelet[2854]: E0513 04:48:30.914872 2854 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 04:48:30.916048 kubelet[2854]: E0513 04:48:30.914926 2854 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6546d6ff4b-rzq8d"
May 13 04:48:30.916048 kubelet[2854]: E0513 04:48:30.914948 2854 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
pod="calico-apiserver/calico-apiserver-6546d6ff4b-rzq8d" May 13 04:48:30.916183 kubelet[2854]: E0513 04:48:30.915721 2854 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6546d6ff4b-rzq8d_calico-apiserver(39ecae4e-9a39-49d6-b199-431373bb0575)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6546d6ff4b-rzq8d_calico-apiserver(39ecae4e-9a39-49d6-b199-431373bb0575)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6546d6ff4b-rzq8d" podUID="39ecae4e-9a39-49d6-b199-431373bb0575" May 13 04:48:30.917041 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f-shm.mount: Deactivated successfully. May 13 04:48:30.930054 containerd[1588]: time="2025-05-13T04:48:30.929886116Z" level=error msg="Failed to destroy network for sandbox \"ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:48:30.930416 containerd[1588]: time="2025-05-13T04:48:30.930034131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-glr49,Uid:06093158-05c9-457b-b79c-f692f9759a45,Namespace:calico-system,Attempt:0,}" May 13 04:48:30.931442 containerd[1588]: time="2025-05-13T04:48:30.930604409Z" level=error msg="encountered an error cleaning up failed sandbox \"ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:48:30.931442 containerd[1588]: time="2025-05-13T04:48:30.931149130Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d774d8cdb-sghzl,Uid:d349f609-625b-4e67-ac8a-4cb7771ba298,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:48:30.931618 kubelet[2854]: E0513 04:48:30.931331 2854 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:48:30.931618 kubelet[2854]: E0513 04:48:30.931417 2854 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d774d8cdb-sghzl" May 13 04:48:30.931618 kubelet[2854]: E0513 04:48:30.931444 2854 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d774d8cdb-sghzl" May 13 04:48:30.931738 kubelet[2854]: E0513 04:48:30.931500 2854 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d774d8cdb-sghzl_calico-system(d349f609-625b-4e67-ac8a-4cb7771ba298)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d774d8cdb-sghzl_calico-system(d349f609-625b-4e67-ac8a-4cb7771ba298)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d774d8cdb-sghzl" podUID="d349f609-625b-4e67-ac8a-4cb7771ba298" May 13 04:48:30.997122 containerd[1588]: time="2025-05-13T04:48:30.996596515Z" level=error msg="Failed to destroy network for sandbox \"fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:48:30.997122 containerd[1588]: time="2025-05-13T04:48:30.996919063Z" level=error msg="encountered an error cleaning up failed sandbox \"fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:48:30.997122 containerd[1588]: time="2025-05-13T04:48:30.997006596Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-glr49,Uid:06093158-05c9-457b-b79c-f692f9759a45,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:48:30.997955 kubelet[2854]: E0513 04:48:30.997528 2854 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:48:30.997955 kubelet[2854]: E0513 04:48:30.997594 2854 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-glr49" May 13 04:48:30.997955 kubelet[2854]: E0513 04:48:30.997640 2854 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-glr49" May 13 04:48:30.999517 kubelet[2854]: E0513 04:48:30.997687 2854 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-glr49_calico-system(06093158-05c9-457b-b79c-f692f9759a45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-glr49_calico-system(06093158-05c9-457b-b79c-f692f9759a45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-glr49" podUID="06093158-05c9-457b-b79c-f692f9759a45" May 13 04:48:31.163237 kubelet[2854]: I0513 04:48:31.162963 2854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" May 13 04:48:31.168947 kubelet[2854]: I0513 04:48:31.166580 2854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" May 13 04:48:31.170802 containerd[1588]: time="2025-05-13T04:48:31.169719161Z" level=info msg="StopPodSandbox for \"ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26\"" May 13 04:48:31.171262 containerd[1588]: time="2025-05-13T04:48:31.171222011Z" level=info msg="StopPodSandbox for \"cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817\"" May 13 04:48:31.171998 containerd[1588]: time="2025-05-13T04:48:31.171825050Z" level=info msg="Ensure that sandbox ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26 in task-service has been cleanup successfully" May 13 04:48:31.172437 containerd[1588]: time="2025-05-13T04:48:31.172244169Z" level=info msg="Ensure that sandbox cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817 in task-service has been cleanup successfully" May 13 04:48:31.180813 kubelet[2854]: I0513 04:48:31.180466 2854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" May 13 04:48:31.185034 containerd[1588]: time="2025-05-13T04:48:31.184231991Z" level=info msg="StopPodSandbox for \"947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec\"" May 13 04:48:31.187503 containerd[1588]: time="2025-05-13T04:48:31.187446158Z" level=info msg="Ensure that sandbox 947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec in task-service has been cleanup successfully" May 13 04:48:31.225368 containerd[1588]: time="2025-05-13T04:48:31.224173555Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 13 04:48:31.226702 kubelet[2854]: I0513 04:48:31.225715 2854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" May 13 04:48:31.234998 containerd[1588]: time="2025-05-13T04:48:31.232804977Z" level=info msg="StopPodSandbox for \"cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f\"" May 13 04:48:31.234998 containerd[1588]: time="2025-05-13T04:48:31.233200322Z" level=info msg="Ensure that sandbox cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f in task-service has been cleanup successfully" May 13 04:48:31.242065 kubelet[2854]: I0513 04:48:31.242034 2854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" May 13 04:48:31.244199 containerd[1588]: time="2025-05-13T04:48:31.244150017Z" level=info msg="StopPodSandbox for \"2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1\"" May 13 04:48:31.244583 containerd[1588]: time="2025-05-13T04:48:31.244563174Z" level=info msg="Ensure that sandbox 2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1 in task-service has been cleanup successfully" May 13 04:48:31.256236 kubelet[2854]: I0513 04:48:31.256184 2854 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" May 13 04:48:31.262382 containerd[1588]: time="2025-05-13T04:48:31.261865462Z" level=info msg="StopPodSandbox for \"fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24\"" May 13 04:48:31.262382 containerd[1588]: time="2025-05-13T04:48:31.262138429Z" level=info msg="Ensure that sandbox fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24 in task-service has been cleanup successfully" May 13 04:48:31.297771 containerd[1588]: time="2025-05-13T04:48:31.297706144Z" level=error msg="StopPodSandbox for \"cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817\" failed" error="failed to destroy network for sandbox \"cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:48:31.298063 kubelet[2854]: E0513 04:48:31.298016 2854 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" May 13 04:48:31.298163 kubelet[2854]: E0513 04:48:31.298100 2854 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817"} May 13 04:48:31.298218 kubelet[2854]: E0513 04:48:31.298183 2854 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bc357230-e098-4af5-9f42-e37066b7df6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 04:48:31.298306 kubelet[2854]: E0513 04:48:31.298216 2854 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bc357230-e098-4af5-9f42-e37066b7df6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lgz9g" podUID="bc357230-e098-4af5-9f42-e37066b7df6c" May 13 04:48:31.314018 containerd[1588]: time="2025-05-13T04:48:31.313832299Z" level=error msg="StopPodSandbox for \"ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26\" failed" error="failed to destroy network for sandbox \"ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:48:31.314386 kubelet[2854]: E0513 04:48:31.314182 2854 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" May 13 04:48:31.314386 kubelet[2854]: E0513 04:48:31.314246 2854 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26"} May 13 04:48:31.314386 kubelet[2854]: E0513 04:48:31.314285 2854 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d349f609-625b-4e67-ac8a-4cb7771ba298\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 04:48:31.314386 kubelet[2854]: E0513 04:48:31.314313 2854 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d349f609-625b-4e67-ac8a-4cb7771ba298\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d774d8cdb-sghzl" podUID="d349f609-625b-4e67-ac8a-4cb7771ba298" May 13 04:48:31.341201 containerd[1588]: time="2025-05-13T04:48:31.341145230Z" level=error msg="StopPodSandbox for 
\"fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24\" failed" error="failed to destroy network for sandbox \"fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:48:31.342304 kubelet[2854]: E0513 04:48:31.341630 2854 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" May 13 04:48:31.342304 kubelet[2854]: E0513 04:48:31.341686 2854 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24"} May 13 04:48:31.342304 kubelet[2854]: E0513 04:48:31.341719 2854 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"06093158-05c9-457b-b79c-f692f9759a45\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 04:48:31.342304 kubelet[2854]: E0513 04:48:31.341747 2854 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"06093158-05c9-457b-b79c-f692f9759a45\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-glr49" podUID="06093158-05c9-457b-b79c-f692f9759a45" May 13 04:48:31.343802 containerd[1588]: time="2025-05-13T04:48:31.343747130Z" level=error msg="StopPodSandbox for \"cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f\" failed" error="failed to destroy network for sandbox \"cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:48:31.344061 kubelet[2854]: E0513 04:48:31.343961 2854 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" May 13 04:48:31.344243 kubelet[2854]: E0513 04:48:31.344058 2854 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f"} May 13 04:48:31.344243 kubelet[2854]: E0513 04:48:31.344112 2854 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"39ecae4e-9a39-49d6-b199-431373bb0575\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 04:48:31.344243 kubelet[2854]: E0513 04:48:31.344139 2854 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"39ecae4e-9a39-49d6-b199-431373bb0575\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6546d6ff4b-rzq8d" podUID="39ecae4e-9a39-49d6-b199-431373bb0575" May 13 04:48:31.345661 containerd[1588]: time="2025-05-13T04:48:31.345540689Z" level=error msg="StopPodSandbox for \"947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec\" failed" error="failed to destroy network for sandbox \"947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:48:31.345745 kubelet[2854]: E0513 04:48:31.345715 2854 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" May 13 04:48:31.345795 kubelet[2854]: E0513 04:48:31.345751 2854 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec"} May 13 04:48:31.345795 kubelet[2854]: E0513 04:48:31.345781 2854 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9d3536c9-92ad-4cae-9fd7-bb9fd598e9bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 04:48:31.345899 kubelet[2854]: E0513 04:48:31.345802 2854 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9d3536c9-92ad-4cae-9fd7-bb9fd598e9bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec\\\": plugin type=\\\"calico\\\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-86s7p" podUID="9d3536c9-92ad-4cae-9fd7-bb9fd598e9bb" May 13 04:48:31.349124 containerd[1588]: time="2025-05-13T04:48:31.349030297Z" level=error msg="StopPodSandbox for \"2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1\" failed" error="failed to destroy network for sandbox \"2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:48:31.349490 kubelet[2854]: E0513 04:48:31.349450 2854 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" May 13 04:48:31.349548 kubelet[2854]: E0513 04:48:31.349501 2854 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1"} May 13 04:48:31.349548 kubelet[2854]: E0513 04:48:31.349532 2854 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"de615ac3-0a0d-4ec1-8a3d-4e9726892ff6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 04:48:31.349653 kubelet[2854]: E0513 04:48:31.349556 2854 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"de615ac3-0a0d-4ec1-8a3d-4e9726892ff6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6546d6ff4b-lfnfh" podUID="de615ac3-0a0d-4ec1-8a3d-4e9726892ff6" May 13 04:48:31.770395 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26-shm.mount: Deactivated successfully. May 13 04:48:40.226522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3801113719.mount: Deactivated successfully. 
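The failure loop above has a single root cause: the Calico CNI binary reads /var/lib/calico/nodename, a file the calico/node container writes only after it has started and bind-mounted the host's /var/lib/calico/. Until that happens, every CNI ADD and DELETE fails with the same stat error, so kubelet marks each sandbox SANDBOX_UNKNOWN and keeps retrying both creation and teardown; that is the churn recorded here while the calico/node image is still being pulled. A minimal sketch of that gate, using a simplified stand-in for the plugin's nodename lookup (not Calico's actual source):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenameFile is written by the calico/node container once it is running and
// has the host's /var/lib/calico/ mounted; until then every ADD/DELETE fails.
const nodenameFile = "/var/lib/calico/nodename"

func detectNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		if os.IsNotExist(err) {
			// Matches the error text repeated throughout the log above.
			return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
		}
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := detectNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("nodename:", name)
}
```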
May 13 04:48:40.286889 containerd[1588]: time="2025-05-13T04:48:40.286261815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:48:40.289210 containerd[1588]: time="2025-05-13T04:48:40.289073506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 13 04:48:40.291685 containerd[1588]: time="2025-05-13T04:48:40.290542729Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:48:40.293468 containerd[1588]: time="2025-05-13T04:48:40.293443337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:48:40.295243 containerd[1588]: time="2025-05-13T04:48:40.295198472Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 9.070004973s" May 13 04:48:40.295338 containerd[1588]: time="2025-05-13T04:48:40.295257602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 13 04:48:40.345126 containerd[1588]: time="2025-05-13T04:48:40.343756965Z" level=info msg="CreateContainer within sandbox \"0ebb339ecf3605037ae7e6067df4b8850b3f7fd5e1fdc106ecf64a252e039716\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 04:48:40.390168 containerd[1588]: time="2025-05-13T04:48:40.390009449Z" level=info msg="CreateContainer within sandbox \"0ebb339ecf3605037ae7e6067df4b8850b3f7fd5e1fdc106ecf64a252e039716\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"97cca52411919637b62defa5a0dc5a3bcc087e2393866b21ea1f83ce9a692013\"" May 13 04:48:40.392477 containerd[1588]: time="2025-05-13T04:48:40.391082465Z" level=info msg="StartContainer for \"97cca52411919637b62defa5a0dc5a3bcc087e2393866b21ea1f83ce9a692013\"" May 13 04:48:40.495921 containerd[1588]: time="2025-05-13T04:48:40.495642511Z" level=info msg="StartContainer for \"97cca52411919637b62defa5a0dc5a3bcc087e2393866b21ea1f83ce9a692013\" returns successfully" May 13 04:48:40.595439 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 13 04:48:40.595759 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
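For scale, the Pulled event above reports size \"144068610\" (about 144 MB) transferred in 9.070004973s, i.e. roughly 15.9 MB/s. A back-of-the-envelope check, under the assumption that the reported size is the number of bytes transferred:

```go
package main

import "fmt"

func main() {
	const sizeBytes = 144068610.0 // "size" from the Pulled event above
	const seconds = 9.070004973   // pull duration from the same log line
	fmt.Printf("%.1f MB/s\n", sizeBytes/seconds/1e6) // ~15.9 MB/s
}
```

The kernel WireGuard module loading immediately after StartContainer is consistent with calico-node probing for WireGuard support at startup (Calico can use WireGuard for node-to-node encryption), though the log itself does not say why the module was requested.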
May 13 04:48:40.625608 kubelet[2854]: I0513 04:48:40.624939 2854 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 04:48:41.336873 kubelet[2854]: I0513 04:48:41.336492 2854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vgmjf" podStartSLOduration=2.5345364249999998 podStartE2EDuration="27.336054213s" podCreationTimestamp="2025-05-13 04:48:14 +0000 UTC" firstStartedPulling="2025-05-13 04:48:15.494965945 +0000 UTC m=+21.662225840" lastFinishedPulling="2025-05-13 04:48:40.296483743 +0000 UTC m=+46.463743628" observedRunningTime="2025-05-13 04:48:41.332191785 +0000 UTC m=+47.499451770" watchObservedRunningTime="2025-05-13 04:48:41.336054213 +0000 UTC m=+47.503314158" May 13 04:48:41.932514 containerd[1588]: time="2025-05-13T04:48:41.932400692Z" level=info msg="StopPodSandbox for \"947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec\"" May 13 04:48:41.935000 containerd[1588]: time="2025-05-13T04:48:41.934432914Z" level=info msg="StopPodSandbox for \"2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1\"" May 13 04:48:42.344620 containerd[1588]: 2025-05-13 04:48:42.165 [INFO][4018] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" May 13 04:48:42.344620 containerd[1588]: 2025-05-13 04:48:42.165 [INFO][4018] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" iface="eth0" netns="/var/run/netns/cni-18a7be6a-ad9f-a3bb-fdef-7f2bb10f0288" May 13 04:48:42.344620 containerd[1588]: 2025-05-13 04:48:42.167 [INFO][4018] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" iface="eth0" netns="/var/run/netns/cni-18a7be6a-ad9f-a3bb-fdef-7f2bb10f0288" May 13 04:48:42.344620 containerd[1588]: 2025-05-13 04:48:42.171 [INFO][4018] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" iface="eth0" netns="/var/run/netns/cni-18a7be6a-ad9f-a3bb-fdef-7f2bb10f0288" May 13 04:48:42.344620 containerd[1588]: 2025-05-13 04:48:42.171 [INFO][4018] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" May 13 04:48:42.344620 containerd[1588]: 2025-05-13 04:48:42.171 [INFO][4018] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" May 13 04:48:42.344620 containerd[1588]: 2025-05-13 04:48:42.298 [INFO][4105] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" HandleID="k8s-pod-network.947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--86s7p-eth0" May 13 04:48:42.344620 containerd[1588]: 2025-05-13 04:48:42.299 [INFO][4105] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:48:42.344620 containerd[1588]: 2025-05-13 04:48:42.300 [INFO][4105] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:48:42.344620 containerd[1588]: 2025-05-13 04:48:42.312 [WARNING][4105] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" HandleID="k8s-pod-network.947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--86s7p-eth0" May 13 04:48:42.344620 containerd[1588]: 2025-05-13 04:48:42.312 [INFO][4105] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" HandleID="k8s-pod-network.947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--86s7p-eth0" May 13 04:48:42.344620 containerd[1588]: 2025-05-13 04:48:42.315 [INFO][4105] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:48:42.344620 containerd[1588]: 2025-05-13 04:48:42.338 [INFO][4018] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" May 13 04:48:42.348847 containerd[1588]: time="2025-05-13T04:48:42.348392962Z" level=info msg="TearDown network for sandbox \"947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec\" successfully" May 13 04:48:42.349843 containerd[1588]: time="2025-05-13T04:48:42.349806783Z" level=info msg="StopPodSandbox for \"947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec\" returns successfully" May 13 04:48:42.355240 containerd[1588]: time="2025-05-13T04:48:42.354919790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-86s7p,Uid:9d3536c9-92ad-4cae-9fd7-bb9fd598e9bb,Namespace:kube-system,Attempt:1,}" May 13 04:48:42.360550 systemd[1]: run-netns-cni\x2d18a7be6a\x2dad9f\x2da3bb\x2dfdef\x2d7f2bb10f0288.mount: Deactivated successfully. May 13 04:48:42.376019 containerd[1588]: 2025-05-13 04:48:42.165 [INFO][4017] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" May 13 04:48:42.376019 containerd[1588]: 2025-05-13 04:48:42.167 [INFO][4017] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" iface="eth0" netns="/var/run/netns/cni-b0de5c40-3fe4-697e-5342-0c551cc66432" May 13 04:48:42.376019 containerd[1588]: 2025-05-13 04:48:42.167 [INFO][4017] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" iface="eth0" netns="/var/run/netns/cni-b0de5c40-3fe4-697e-5342-0c551cc66432" May 13 04:48:42.376019 containerd[1588]: 2025-05-13 04:48:42.171 [INFO][4017] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" iface="eth0" netns="/var/run/netns/cni-b0de5c40-3fe4-697e-5342-0c551cc66432" May 13 04:48:42.376019 containerd[1588]: 2025-05-13 04:48:42.171 [INFO][4017] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" May 13 04:48:42.376019 containerd[1588]: 2025-05-13 04:48:42.171 [INFO][4017] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" May 13 04:48:42.376019 containerd[1588]: 2025-05-13 04:48:42.315 [INFO][4103] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" HandleID="k8s-pod-network.2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--lfnfh-eth0" May 13 04:48:42.376019 containerd[1588]: 2025-05-13 04:48:42.315 [INFO][4103] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:48:42.376019 containerd[1588]: 2025-05-13 04:48:42.315 [INFO][4103] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:48:42.376019 containerd[1588]: 2025-05-13 04:48:42.342 [WARNING][4103] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" HandleID="k8s-pod-network.2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--lfnfh-eth0" May 13 04:48:42.376019 containerd[1588]: 2025-05-13 04:48:42.343 [INFO][4103] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" HandleID="k8s-pod-network.2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--lfnfh-eth0" May 13 04:48:42.376019 containerd[1588]: 2025-05-13 04:48:42.347 [INFO][4103] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:48:42.376019 containerd[1588]: 2025-05-13 04:48:42.360 [INFO][4017] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" May 13 04:48:42.380154 containerd[1588]: time="2025-05-13T04:48:42.378073737Z" level=info msg="TearDown network for sandbox \"2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1\" successfully" May 13 04:48:42.380154 containerd[1588]: time="2025-05-13T04:48:42.378132857Z" level=info msg="StopPodSandbox for \"2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1\" returns successfully" May 13 04:48:42.383219 containerd[1588]: time="2025-05-13T04:48:42.382381304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6546d6ff4b-lfnfh,Uid:de615ac3-0a0d-4ec1-8a3d-4e9726892ff6,Namespace:calico-apiserver,Attempt:1,}" May 13 04:48:42.385729 systemd[1]: run-netns-cni\x2db0de5c40\x2d3fe4\x2d697e\x2d5342\x2d0c551cc66432.mount: Deactivated successfully. 
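With calico/node now running, the retried StopPodSandbox calls finally succeed: the plugin enters each stale netns, finds the veth already gone ("Nothing to do"), and the IPAM release warns that the address doesn't exist and ignores it. That is the CNI DELETE idempotency contract: deleting already-absent resources must report success so kubelet's retry loop can converge. A small hypothetical sketch of the pattern (the release stub and error value are illustrative, not Calico's API):

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the IPAM backend reporting a missing allocation.
var errNotFound = errors.New("address not found")

// release is a stub backend: the allocation is already gone, as in the logs.
func release(handleID string) error { return errNotFound }

// cniDel mirrors the contract visible above: a DELETE of already-absent
// resources logs a warning and returns success, so retries converge.
func cniDel(handleID string) error {
	if err := release(handleID); err != nil {
		if errors.Is(err, errNotFound) {
			fmt.Printf("asked to release %s but it doesn't exist; ignoring\n", handleID)
			return nil
		}
		return err
	}
	return nil
}

func main() {
	if err := cniDel("k8s-pod-network.947c2605..."); err != nil {
		fmt.Println("DEL failed:", err)
	}
}
```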
May 13 04:48:42.749197 kernel: bpftool[4167]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 13 04:48:43.194135 systemd-networkd[1217]: vxlan.calico: Link UP May 13 04:48:43.194146 systemd-networkd[1217]: vxlan.calico: Gained carrier May 13 04:48:43.716259 systemd-networkd[1217]: cali47c7f516de9: Link UP May 13 04:48:43.717186 systemd-networkd[1217]: cali47c7f516de9: Gained carrier May 13 04:48:43.753809 systemd-networkd[1217]: cali92d66e616f3: Link UP May 13 04:48:43.763609 systemd-networkd[1217]: cali92d66e616f3: Gained carrier May 13 04:48:43.777780 containerd[1588]: 2025-05-13 04:48:43.425 [INFO][4203] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--86s7p-eth0 coredns-7db6d8ff4d- kube-system 9d3536c9-92ad-4cae-9fd7-bb9fd598e9bb 751 0 2025-05-13 04:48:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-n-d261562a0f.novalocal coredns-7db6d8ff4d-86s7p eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali47c7f516de9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="26c5b0507b914dc06d54b4cfaad3c405efea3ae6dfcab9d0ab759d17b18b239e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-86s7p" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--86s7p-" May 13 04:48:43.777780 containerd[1588]: 2025-05-13 04:48:43.426 [INFO][4203] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="26c5b0507b914dc06d54b4cfaad3c405efea3ae6dfcab9d0ab759d17b18b239e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-86s7p" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--86s7p-eth0" May 13 04:48:43.777780 containerd[1588]: 2025-05-13 04:48:43.570 [INFO][4226] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="26c5b0507b914dc06d54b4cfaad3c405efea3ae6dfcab9d0ab759d17b18b239e" HandleID="k8s-pod-network.26c5b0507b914dc06d54b4cfaad3c405efea3ae6dfcab9d0ab759d17b18b239e" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--86s7p-eth0" May 13 04:48:43.777780 containerd[1588]: 2025-05-13 04:48:43.585 [INFO][4226] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="26c5b0507b914dc06d54b4cfaad3c405efea3ae6dfcab9d0ab759d17b18b239e" HandleID="k8s-pod-network.26c5b0507b914dc06d54b4cfaad3c405efea3ae6dfcab9d0ab759d17b18b239e" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--86s7p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001fe1a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-n-d261562a0f.novalocal", "pod":"coredns-7db6d8ff4d-86s7p", "timestamp":"2025-05-13 04:48:43.57009738 +0000 UTC"}, Hostname:"ci-4081-3-3-n-d261562a0f.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 04:48:43.777780 containerd[1588]: 2025-05-13 04:48:43.585 [INFO][4226] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:48:43.777780 containerd[1588]: 2025-05-13 04:48:43.585 [INFO][4226] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 04:48:43.777780 containerd[1588]: 2025-05-13 04:48:43.586 [INFO][4226] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-d261562a0f.novalocal' May 13 04:48:43.777780 containerd[1588]: 2025-05-13 04:48:43.591 [INFO][4226] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.26c5b0507b914dc06d54b4cfaad3c405efea3ae6dfcab9d0ab759d17b18b239e" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:43.777780 containerd[1588]: 2025-05-13 04:48:43.602 [INFO][4226] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:43.777780 containerd[1588]: 2025-05-13 04:48:43.612 [INFO][4226] ipam/ipam.go 489: Trying affinity for 192.168.47.192/26 host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:43.777780 containerd[1588]: 2025-05-13 04:48:43.618 [INFO][4226] ipam/ipam.go 155: Attempting to load block cidr=192.168.47.192/26 host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:43.777780 containerd[1588]: 2025-05-13 04:48:43.625 [INFO][4226] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.192/26 host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:43.777780 containerd[1588]: 2025-05-13 04:48:43.625 [INFO][4226] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.192/26 handle="k8s-pod-network.26c5b0507b914dc06d54b4cfaad3c405efea3ae6dfcab9d0ab759d17b18b239e" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:43.777780 containerd[1588]: 2025-05-13 04:48:43.628 [INFO][4226] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.26c5b0507b914dc06d54b4cfaad3c405efea3ae6dfcab9d0ab759d17b18b239e May 13 04:48:43.777780 containerd[1588]: 2025-05-13 04:48:43.647 [INFO][4226] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.47.192/26 handle="k8s-pod-network.26c5b0507b914dc06d54b4cfaad3c405efea3ae6dfcab9d0ab759d17b18b239e" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:43.777780 containerd[1588]: 2025-05-13 04:48:43.668 [INFO][4226] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.47.193/26] block=192.168.47.192/26 handle="k8s-pod-network.26c5b0507b914dc06d54b4cfaad3c405efea3ae6dfcab9d0ab759d17b18b239e" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:43.777780 containerd[1588]: 2025-05-13 04:48:43.668 [INFO][4226] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.193/26] handle="k8s-pod-network.26c5b0507b914dc06d54b4cfaad3c405efea3ae6dfcab9d0ab759d17b18b239e" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:43.777780 containerd[1588]: 2025-05-13 04:48:43.668 [INFO][4226] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 04:48:43.777780 containerd[1588]: 2025-05-13 04:48:43.668 [INFO][4226] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.193/26] IPv6=[] ContainerID="26c5b0507b914dc06d54b4cfaad3c405efea3ae6dfcab9d0ab759d17b18b239e" HandleID="k8s-pod-network.26c5b0507b914dc06d54b4cfaad3c405efea3ae6dfcab9d0ab759d17b18b239e" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--86s7p-eth0" May 13 04:48:43.779432 containerd[1588]: 2025-05-13 04:48:43.678 [INFO][4203] cni-plugin/k8s.go 386: Populated endpoint ContainerID="26c5b0507b914dc06d54b4cfaad3c405efea3ae6dfcab9d0ab759d17b18b239e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-86s7p" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--86s7p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--86s7p-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9d3536c9-92ad-4cae-9fd7-bb9fd598e9bb", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 48, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-d261562a0f.novalocal", ContainerID:"", Pod:"coredns-7db6d8ff4d-86s7p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali47c7f516de9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:48:43.779432 containerd[1588]: 2025-05-13 04:48:43.679 [INFO][4203] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.47.193/32] ContainerID="26c5b0507b914dc06d54b4cfaad3c405efea3ae6dfcab9d0ab759d17b18b239e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-86s7p" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--86s7p-eth0" May 13 04:48:43.779432 containerd[1588]: 2025-05-13 04:48:43.679 [INFO][4203] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali47c7f516de9 ContainerID="26c5b0507b914dc06d54b4cfaad3c405efea3ae6dfcab9d0ab759d17b18b239e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-86s7p" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--86s7p-eth0" May 13 04:48:43.779432 containerd[1588]: 2025-05-13 04:48:43.730 [INFO][4203] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="26c5b0507b914dc06d54b4cfaad3c405efea3ae6dfcab9d0ab759d17b18b239e" 
Namespace="kube-system" Pod="coredns-7db6d8ff4d-86s7p" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--86s7p-eth0" May 13 04:48:43.779432 containerd[1588]: 2025-05-13 04:48:43.737 [INFO][4203] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="26c5b0507b914dc06d54b4cfaad3c405efea3ae6dfcab9d0ab759d17b18b239e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-86s7p" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--86s7p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--86s7p-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9d3536c9-92ad-4cae-9fd7-bb9fd598e9bb", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 48, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-d261562a0f.novalocal", ContainerID:"26c5b0507b914dc06d54b4cfaad3c405efea3ae6dfcab9d0ab759d17b18b239e", Pod:"coredns-7db6d8ff4d-86s7p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali47c7f516de9", MAC:"fe:3c:72:9f:7e:2c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:48:43.779432 containerd[1588]: 2025-05-13 04:48:43.773 [INFO][4203] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="26c5b0507b914dc06d54b4cfaad3c405efea3ae6dfcab9d0ab759d17b18b239e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-86s7p" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--86s7p-eth0" May 13 04:48:43.838015 containerd[1588]: 2025-05-13 04:48:43.542 [INFO][4212] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--lfnfh-eth0 calico-apiserver-6546d6ff4b- calico-apiserver de615ac3-0a0d-4ec1-8a3d-4e9726892ff6 750 0 2025-05-13 04:48:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6546d6ff4b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-n-d261562a0f.novalocal calico-apiserver-6546d6ff4b-lfnfh eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] cali92d66e616f3 [] []}} ContainerID="b360ba2cd68a5c892812e4386535b9e23b04e0c1603cadbca930addd9897dedf" Namespace="calico-apiserver" Pod="calico-apiserver-6546d6ff4b-lfnfh" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--lfnfh-" May 13 04:48:43.838015 containerd[1588]: 2025-05-13 04:48:43.542 [INFO][4212] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b360ba2cd68a5c892812e4386535b9e23b04e0c1603cadbca930addd9897dedf" Namespace="calico-apiserver" Pod="calico-apiserver-6546d6ff4b-lfnfh" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--lfnfh-eth0" May 13 04:48:43.838015 containerd[1588]: 2025-05-13 04:48:43.634 [INFO][4236] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b360ba2cd68a5c892812e4386535b9e23b04e0c1603cadbca930addd9897dedf" HandleID="k8s-pod-network.b360ba2cd68a5c892812e4386535b9e23b04e0c1603cadbca930addd9897dedf" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--lfnfh-eth0" May 13 04:48:43.838015 containerd[1588]: 2025-05-13 04:48:43.652 [INFO][4236] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b360ba2cd68a5c892812e4386535b9e23b04e0c1603cadbca930addd9897dedf" HandleID="k8s-pod-network.b360ba2cd68a5c892812e4386535b9e23b04e0c1603cadbca930addd9897dedf" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--lfnfh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bc0a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-n-d261562a0f.novalocal", "pod":"calico-apiserver-6546d6ff4b-lfnfh", "timestamp":"2025-05-13 04:48:43.634810716 +0000 UTC"}, Hostname:"ci-4081-3-3-n-d261562a0f.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 04:48:43.838015 containerd[1588]: 2025-05-13 04:48:43.652 [INFO][4236] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:48:43.838015 containerd[1588]: 2025-05-13 04:48:43.669 [INFO][4236] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 04:48:43.838015 containerd[1588]: 2025-05-13 04:48:43.669 [INFO][4236] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-d261562a0f.novalocal' May 13 04:48:43.838015 containerd[1588]: 2025-05-13 04:48:43.672 [INFO][4236] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b360ba2cd68a5c892812e4386535b9e23b04e0c1603cadbca930addd9897dedf" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:43.838015 containerd[1588]: 2025-05-13 04:48:43.680 [INFO][4236] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:43.838015 containerd[1588]: 2025-05-13 04:48:43.687 [INFO][4236] ipam/ipam.go 489: Trying affinity for 192.168.47.192/26 host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:43.838015 containerd[1588]: 2025-05-13 04:48:43.690 [INFO][4236] ipam/ipam.go 155: Attempting to load block cidr=192.168.47.192/26 host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:43.838015 containerd[1588]: 2025-05-13 04:48:43.694 [INFO][4236] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.192/26 host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:43.838015 containerd[1588]: 2025-05-13 04:48:43.694 [INFO][4236] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.192/26 handle="k8s-pod-network.b360ba2cd68a5c892812e4386535b9e23b04e0c1603cadbca930addd9897dedf" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:43.838015 containerd[1588]: 2025-05-13 04:48:43.696 [INFO][4236] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b360ba2cd68a5c892812e4386535b9e23b04e0c1603cadbca930addd9897dedf May 13 04:48:43.838015 containerd[1588]: 2025-05-13 04:48:43.710 [INFO][4236] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.47.192/26 handle="k8s-pod-network.b360ba2cd68a5c892812e4386535b9e23b04e0c1603cadbca930addd9897dedf" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:43.838015 containerd[1588]: 2025-05-13 04:48:43.734 [INFO][4236] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.47.194/26] block=192.168.47.192/26 handle="k8s-pod-network.b360ba2cd68a5c892812e4386535b9e23b04e0c1603cadbca930addd9897dedf" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:43.838015 containerd[1588]: 2025-05-13 04:48:43.735 [INFO][4236] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.194/26] handle="k8s-pod-network.b360ba2cd68a5c892812e4386535b9e23b04e0c1603cadbca930addd9897dedf" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:43.838015 containerd[1588]: 2025-05-13 04:48:43.736 [INFO][4236] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
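The entries above trace one complete pass of Calico's block-based IPAM: acquire the host-wide lock, look up the host's block affinities, confirm the 192.168.47.192/26 block, claim the next free address under a new handle, write the block back to claim the IPs, and release the lock. Below is a minimal Go sketch of that flow under stated assumptions; the Block type, hostLock, and autoAssign names are hypothetical stand-ins, not Calico's actual ipam/ipam.go API.

package main

import (
	"fmt"
	"net"
	"sync"
)

// Block models one /26 IPAM block (64 addresses) whose affinity belongs to a
// single host, as in the "Trying affinity for 192.168.47.192/26" entries.
type Block struct {
	CIDR      *net.IPNet
	Allocated [64]bool // one slot per address in the /26
}

// hostLock stands in for the "host-wide IPAM lock" the plugin acquires and
// releases around every assignment.
var hostLock sync.Mutex

// autoAssign claims the next free IPv4 address from the block, mirroring the
// steps from "Attempting to assign 1 addresses from block" through
// "Successfully claimed IPs" in the trace above.
func autoAssign(b *Block, handle string) (net.IP, error) {
	hostLock.Lock()         // "Acquired host-wide IPAM lock."
	defer hostLock.Unlock() // "Released host-wide IPAM lock."

	base := b.CIDR.IP.To4()
	for i := range b.Allocated {
		if b.Allocated[i] {
			continue
		}
		b.Allocated[i] = true // "Writing block in order to claim IPs"
		return net.IPv4(base[0], base[1], base[2], base[3]+byte(i)), nil
	}
	return nil, fmt.Errorf("block %s exhausted for handle %s", b.CIDR, handle)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.47.192/26")
	blk := &Block{CIDR: cidr}
	blk.Allocated[0] = true // .192: taken before this trace begins
	blk.Allocated[1] = true // .193: claimed above for coredns-7db6d8ff4d-86s7p
	ip, err := autoAssign(blk, "k8s-pod-network.b360ba2c…")
	if err != nil {
		panic(err)
	}
	fmt.Println("claimed", ip) // claimed 192.168.47.194, matching the entry above
}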
May 13 04:48:43.838015 containerd[1588]: 2025-05-13 04:48:43.736 [INFO][4236] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.194/26] IPv6=[] ContainerID="b360ba2cd68a5c892812e4386535b9e23b04e0c1603cadbca930addd9897dedf" HandleID="k8s-pod-network.b360ba2cd68a5c892812e4386535b9e23b04e0c1603cadbca930addd9897dedf" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--lfnfh-eth0" May 13 04:48:43.838919 containerd[1588]: 2025-05-13 04:48:43.746 [INFO][4212] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b360ba2cd68a5c892812e4386535b9e23b04e0c1603cadbca930addd9897dedf" Namespace="calico-apiserver" Pod="calico-apiserver-6546d6ff4b-lfnfh" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--lfnfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--lfnfh-eth0", GenerateName:"calico-apiserver-6546d6ff4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"de615ac3-0a0d-4ec1-8a3d-4e9726892ff6", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 48, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6546d6ff4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-d261562a0f.novalocal", ContainerID:"", Pod:"calico-apiserver-6546d6ff4b-lfnfh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali92d66e616f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:48:43.838919 containerd[1588]: 2025-05-13 04:48:43.746 [INFO][4212] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.47.194/32] ContainerID="b360ba2cd68a5c892812e4386535b9e23b04e0c1603cadbca930addd9897dedf" Namespace="calico-apiserver" Pod="calico-apiserver-6546d6ff4b-lfnfh" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--lfnfh-eth0" May 13 04:48:43.838919 containerd[1588]: 2025-05-13 04:48:43.746 [INFO][4212] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92d66e616f3 ContainerID="b360ba2cd68a5c892812e4386535b9e23b04e0c1603cadbca930addd9897dedf" Namespace="calico-apiserver" Pod="calico-apiserver-6546d6ff4b-lfnfh" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--lfnfh-eth0" May 13 04:48:43.838919 containerd[1588]: 2025-05-13 04:48:43.755 [INFO][4212] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b360ba2cd68a5c892812e4386535b9e23b04e0c1603cadbca930addd9897dedf" Namespace="calico-apiserver" Pod="calico-apiserver-6546d6ff4b-lfnfh" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--lfnfh-eth0" May 13 04:48:43.838919 
containerd[1588]: 2025-05-13 04:48:43.755 [INFO][4212] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b360ba2cd68a5c892812e4386535b9e23b04e0c1603cadbca930addd9897dedf" Namespace="calico-apiserver" Pod="calico-apiserver-6546d6ff4b-lfnfh" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--lfnfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--lfnfh-eth0", GenerateName:"calico-apiserver-6546d6ff4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"de615ac3-0a0d-4ec1-8a3d-4e9726892ff6", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 48, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6546d6ff4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-d261562a0f.novalocal", ContainerID:"b360ba2cd68a5c892812e4386535b9e23b04e0c1603cadbca930addd9897dedf", Pod:"calico-apiserver-6546d6ff4b-lfnfh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali92d66e616f3", MAC:"ea:80:97:f9:67:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:48:43.838919 containerd[1588]: 2025-05-13 04:48:43.826 [INFO][4212] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b360ba2cd68a5c892812e4386535b9e23b04e0c1603cadbca930addd9897dedf" Namespace="calico-apiserver" Pod="calico-apiserver-6546d6ff4b-lfnfh" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--lfnfh-eth0" May 13 04:48:43.869265 containerd[1588]: time="2025-05-13T04:48:43.868641622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 04:48:43.869265 containerd[1588]: time="2025-05-13T04:48:43.868744934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 04:48:43.869265 containerd[1588]: time="2025-05-13T04:48:43.868764350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:48:43.869265 containerd[1588]: time="2025-05-13T04:48:43.868894392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:48:43.921325 containerd[1588]: time="2025-05-13T04:48:43.921219495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 04:48:43.922079 containerd[1588]: time="2025-05-13T04:48:43.921667558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 04:48:43.922727 containerd[1588]: time="2025-05-13T04:48:43.922435659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:48:43.923223 containerd[1588]: time="2025-05-13T04:48:43.922944296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:48:43.936056 containerd[1588]: time="2025-05-13T04:48:43.935684082Z" level=info msg="StopPodSandbox for \"cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f\"" May 13 04:48:43.936288 containerd[1588]: time="2025-05-13T04:48:43.936268400Z" level=info msg="StopPodSandbox for \"cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817\"" May 13 04:48:43.937060 containerd[1588]: time="2025-05-13T04:48:43.937039185Z" level=info msg="StopPodSandbox for \"fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24\"" May 13 04:48:43.989499 containerd[1588]: time="2025-05-13T04:48:43.989372032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-86s7p,Uid:9d3536c9-92ad-4cae-9fd7-bb9fd598e9bb,Namespace:kube-system,Attempt:1,} returns sandbox id \"26c5b0507b914dc06d54b4cfaad3c405efea3ae6dfcab9d0ab759d17b18b239e\"" May 13 04:48:44.017576 containerd[1588]: time="2025-05-13T04:48:44.017522474Z" level=info msg="CreateContainer within sandbox \"26c5b0507b914dc06d54b4cfaad3c405efea3ae6dfcab9d0ab759d17b18b239e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 04:48:44.074398 containerd[1588]: time="2025-05-13T04:48:44.074329902Z" level=info msg="CreateContainer within sandbox \"26c5b0507b914dc06d54b4cfaad3c405efea3ae6dfcab9d0ab759d17b18b239e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"64e70860a8500c480802f27b759a6ce662fdaa82031d1c538c06135b9bbfa31d\"" May 13 04:48:44.084293 containerd[1588]: time="2025-05-13T04:48:44.084128227Z" level=info msg="StartContainer for \"64e70860a8500c480802f27b759a6ce662fdaa82031d1c538c06135b9bbfa31d\"" May 13 04:48:44.183651 containerd[1588]: time="2025-05-13T04:48:44.183466077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6546d6ff4b-lfnfh,Uid:de615ac3-0a0d-4ec1-8a3d-4e9726892ff6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b360ba2cd68a5c892812e4386535b9e23b04e0c1603cadbca930addd9897dedf\"" May 13 04:48:44.190747 containerd[1588]: time="2025-05-13T04:48:44.190690660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 04:48:44.241355 systemd-networkd[1217]: vxlan.calico: Gained IPv6LL May 13 04:48:44.300750 containerd[1588]: 2025-05-13 04:48:44.155 [INFO][4401] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" May 13 04:48:44.300750 containerd[1588]: 2025-05-13 04:48:44.157 [INFO][4401] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" iface="eth0" netns="/var/run/netns/cni-18518d58-b5cb-a3da-fa57-a6381aa012f7" May 13 04:48:44.300750 containerd[1588]: 2025-05-13 04:48:44.162 [INFO][4401] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" iface="eth0" netns="/var/run/netns/cni-18518d58-b5cb-a3da-fa57-a6381aa012f7" May 13 04:48:44.300750 containerd[1588]: 2025-05-13 04:48:44.165 [INFO][4401] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" iface="eth0" netns="/var/run/netns/cni-18518d58-b5cb-a3da-fa57-a6381aa012f7" May 13 04:48:44.300750 containerd[1588]: 2025-05-13 04:48:44.165 [INFO][4401] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" May 13 04:48:44.300750 containerd[1588]: 2025-05-13 04:48:44.165 [INFO][4401] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" May 13 04:48:44.300750 containerd[1588]: 2025-05-13 04:48:44.257 [INFO][4450] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" HandleID="k8s-pod-network.cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--lgz9g-eth0" May 13 04:48:44.300750 containerd[1588]: 2025-05-13 04:48:44.257 [INFO][4450] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:48:44.300750 containerd[1588]: 2025-05-13 04:48:44.257 [INFO][4450] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:48:44.300750 containerd[1588]: 2025-05-13 04:48:44.280 [WARNING][4450] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" HandleID="k8s-pod-network.cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--lgz9g-eth0" May 13 04:48:44.300750 containerd[1588]: 2025-05-13 04:48:44.280 [INFO][4450] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" HandleID="k8s-pod-network.cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--lgz9g-eth0" May 13 04:48:44.300750 containerd[1588]: 2025-05-13 04:48:44.289 [INFO][4450] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:48:44.300750 containerd[1588]: 2025-05-13 04:48:44.294 [INFO][4401] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" May 13 04:48:44.302324 containerd[1588]: time="2025-05-13T04:48:44.301373325Z" level=info msg="TearDown network for sandbox \"cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817\" successfully" May 13 04:48:44.302324 containerd[1588]: time="2025-05-13T04:48:44.301419380Z" level=info msg="StopPodSandbox for \"cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817\" returns successfully" May 13 04:48:44.316062 containerd[1588]: time="2025-05-13T04:48:44.315828200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lgz9g,Uid:bc357230-e098-4af5-9f42-e37066b7df6c,Namespace:kube-system,Attempt:1,}" May 13 04:48:44.338393 containerd[1588]: time="2025-05-13T04:48:44.338325601Z" level=info msg="StartContainer for \"64e70860a8500c480802f27b759a6ce662fdaa82031d1c538c06135b9bbfa31d\" returns successfully" May 13 04:48:44.361018 systemd[1]: run-netns-cni\x2d18518d58\x2db5cb\x2da3da\x2dfa57\x2da6381aa012f7.mount: Deactivated successfully. May 13 04:48:44.376311 containerd[1588]: 2025-05-13 04:48:44.141 [INFO][4405] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" May 13 04:48:44.376311 containerd[1588]: 2025-05-13 04:48:44.144 [INFO][4405] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" iface="eth0" netns="/var/run/netns/cni-1024a053-eaf1-9e1e-7773-56cbc543b817" May 13 04:48:44.376311 containerd[1588]: 2025-05-13 04:48:44.144 [INFO][4405] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" iface="eth0" netns="/var/run/netns/cni-1024a053-eaf1-9e1e-7773-56cbc543b817" May 13 04:48:44.376311 containerd[1588]: 2025-05-13 04:48:44.145 [INFO][4405] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" iface="eth0" netns="/var/run/netns/cni-1024a053-eaf1-9e1e-7773-56cbc543b817" May 13 04:48:44.376311 containerd[1588]: 2025-05-13 04:48:44.145 [INFO][4405] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" May 13 04:48:44.376311 containerd[1588]: 2025-05-13 04:48:44.145 [INFO][4405] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" May 13 04:48:44.376311 containerd[1588]: 2025-05-13 04:48:44.319 [INFO][4441] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" HandleID="k8s-pod-network.fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-csi--node--driver--glr49-eth0" May 13 04:48:44.376311 containerd[1588]: 2025-05-13 04:48:44.319 [INFO][4441] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:48:44.376311 containerd[1588]: 2025-05-13 04:48:44.319 [INFO][4441] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:48:44.376311 containerd[1588]: 2025-05-13 04:48:44.341 [WARNING][4441] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" HandleID="k8s-pod-network.fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-csi--node--driver--glr49-eth0" May 13 04:48:44.376311 containerd[1588]: 2025-05-13 04:48:44.341 [INFO][4441] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" HandleID="k8s-pod-network.fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-csi--node--driver--glr49-eth0" May 13 04:48:44.376311 containerd[1588]: 2025-05-13 04:48:44.344 [INFO][4441] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:48:44.376311 containerd[1588]: 2025-05-13 04:48:44.364 [INFO][4405] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" May 13 04:48:44.380315 containerd[1588]: time="2025-05-13T04:48:44.378114118Z" level=info msg="TearDown network for sandbox \"fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24\" successfully" May 13 04:48:44.380315 containerd[1588]: time="2025-05-13T04:48:44.378157660Z" level=info msg="StopPodSandbox for \"fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24\" returns successfully" May 13 04:48:44.384541 systemd[1]: run-netns-cni\x2d1024a053\x2deaf1\x2d9e1e\x2d7773\x2d56cbc543b817.mount: Deactivated successfully. May 13 04:48:44.397160 containerd[1588]: time="2025-05-13T04:48:44.389545704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-glr49,Uid:06093158-05c9-457b-b79c-f692f9759a45,Namespace:calico-system,Attempt:1,}" May 13 04:48:44.440486 containerd[1588]: 2025-05-13 04:48:44.163 [INFO][4393] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" May 13 04:48:44.440486 containerd[1588]: 2025-05-13 04:48:44.163 [INFO][4393] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" iface="eth0" netns="/var/run/netns/cni-37a0cce3-5efa-7d0c-0324-7dba407947da" May 13 04:48:44.440486 containerd[1588]: 2025-05-13 04:48:44.166 [INFO][4393] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" iface="eth0" netns="/var/run/netns/cni-37a0cce3-5efa-7d0c-0324-7dba407947da" May 13 04:48:44.440486 containerd[1588]: 2025-05-13 04:48:44.172 [INFO][4393] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" iface="eth0" netns="/var/run/netns/cni-37a0cce3-5efa-7d0c-0324-7dba407947da" May 13 04:48:44.440486 containerd[1588]: 2025-05-13 04:48:44.172 [INFO][4393] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" May 13 04:48:44.440486 containerd[1588]: 2025-05-13 04:48:44.172 [INFO][4393] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" May 13 04:48:44.440486 containerd[1588]: 2025-05-13 04:48:44.319 [INFO][4459] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" HandleID="k8s-pod-network.cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--rzq8d-eth0" May 13 04:48:44.440486 containerd[1588]: 2025-05-13 04:48:44.320 [INFO][4459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:48:44.440486 containerd[1588]: 2025-05-13 04:48:44.344 [INFO][4459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:48:44.440486 containerd[1588]: 2025-05-13 04:48:44.373 [WARNING][4459] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" HandleID="k8s-pod-network.cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--rzq8d-eth0" May 13 04:48:44.440486 containerd[1588]: 2025-05-13 04:48:44.374 [INFO][4459] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" HandleID="k8s-pod-network.cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--rzq8d-eth0" May 13 04:48:44.440486 containerd[1588]: 2025-05-13 04:48:44.416 [INFO][4459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:48:44.440486 containerd[1588]: 2025-05-13 04:48:44.425 [INFO][4393] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" May 13 04:48:44.444927 containerd[1588]: time="2025-05-13T04:48:44.443922185Z" level=info msg="TearDown network for sandbox \"cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f\" successfully" May 13 04:48:44.444927 containerd[1588]: time="2025-05-13T04:48:44.444030617Z" level=info msg="StopPodSandbox for \"cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f\" returns successfully" May 13 04:48:44.446507 containerd[1588]: time="2025-05-13T04:48:44.446131730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6546d6ff4b-rzq8d,Uid:39ecae4e-9a39-49d6-b199-431373bb0575,Namespace:calico-apiserver,Attempt:1,}" May 13 04:48:44.447161 systemd[1]: run-netns-cni\x2d37a0cce3\x2d5efa\x2d7d0c\x2d0324\x2d7dba407947da.mount: Deactivated successfully. 
May 13 04:48:44.702841 systemd-networkd[1217]: cali72de0bbbd6a: Link UP May 13 04:48:44.706055 systemd-networkd[1217]: cali72de0bbbd6a: Gained carrier May 13 04:48:44.733501 containerd[1588]: 2025-05-13 04:48:44.530 [INFO][4507] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--lgz9g-eth0 coredns-7db6d8ff4d- kube-system bc357230-e098-4af5-9f42-e37066b7df6c 768 0 2025-05-13 04:48:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-n-d261562a0f.novalocal coredns-7db6d8ff4d-lgz9g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali72de0bbbd6a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="32ad179100631432efc01fea170fe31545a819a039d33d25f1d22ee85cc3c5eb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lgz9g" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--lgz9g-" May 13 04:48:44.733501 containerd[1588]: 2025-05-13 04:48:44.530 [INFO][4507] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="32ad179100631432efc01fea170fe31545a819a039d33d25f1d22ee85cc3c5eb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lgz9g" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--lgz9g-eth0" May 13 04:48:44.733501 containerd[1588]: 2025-05-13 04:48:44.606 [INFO][4534] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32ad179100631432efc01fea170fe31545a819a039d33d25f1d22ee85cc3c5eb" HandleID="k8s-pod-network.32ad179100631432efc01fea170fe31545a819a039d33d25f1d22ee85cc3c5eb" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--lgz9g-eth0" May 13 04:48:44.733501 containerd[1588]: 2025-05-13 04:48:44.622 [INFO][4534] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="32ad179100631432efc01fea170fe31545a819a039d33d25f1d22ee85cc3c5eb" HandleID="k8s-pod-network.32ad179100631432efc01fea170fe31545a819a039d33d25f1d22ee85cc3c5eb" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--lgz9g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000514e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-n-d261562a0f.novalocal", "pod":"coredns-7db6d8ff4d-lgz9g", "timestamp":"2025-05-13 04:48:44.606799248 +0000 UTC"}, Hostname:"ci-4081-3-3-n-d261562a0f.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 04:48:44.733501 containerd[1588]: 2025-05-13 04:48:44.622 [INFO][4534] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:48:44.733501 containerd[1588]: 2025-05-13 04:48:44.622 [INFO][4534] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 04:48:44.733501 containerd[1588]: 2025-05-13 04:48:44.622 [INFO][4534] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-d261562a0f.novalocal' May 13 04:48:44.733501 containerd[1588]: 2025-05-13 04:48:44.626 [INFO][4534] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.32ad179100631432efc01fea170fe31545a819a039d33d25f1d22ee85cc3c5eb" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:44.733501 containerd[1588]: 2025-05-13 04:48:44.636 [INFO][4534] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:44.733501 containerd[1588]: 2025-05-13 04:48:44.647 [INFO][4534] ipam/ipam.go 489: Trying affinity for 192.168.47.192/26 host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:44.733501 containerd[1588]: 2025-05-13 04:48:44.651 [INFO][4534] ipam/ipam.go 155: Attempting to load block cidr=192.168.47.192/26 host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:44.733501 containerd[1588]: 2025-05-13 04:48:44.654 [INFO][4534] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.192/26 host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:44.733501 containerd[1588]: 2025-05-13 04:48:44.654 [INFO][4534] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.192/26 handle="k8s-pod-network.32ad179100631432efc01fea170fe31545a819a039d33d25f1d22ee85cc3c5eb" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:44.733501 containerd[1588]: 2025-05-13 04:48:44.658 [INFO][4534] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.32ad179100631432efc01fea170fe31545a819a039d33d25f1d22ee85cc3c5eb May 13 04:48:44.733501 containerd[1588]: 2025-05-13 04:48:44.668 [INFO][4534] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.47.192/26 handle="k8s-pod-network.32ad179100631432efc01fea170fe31545a819a039d33d25f1d22ee85cc3c5eb" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:44.733501 containerd[1588]: 2025-05-13 04:48:44.683 [INFO][4534] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.47.195/26] block=192.168.47.192/26 handle="k8s-pod-network.32ad179100631432efc01fea170fe31545a819a039d33d25f1d22ee85cc3c5eb" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:44.733501 containerd[1588]: 2025-05-13 04:48:44.683 [INFO][4534] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.195/26] handle="k8s-pod-network.32ad179100631432efc01fea170fe31545a819a039d33d25f1d22ee85cc3c5eb" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:44.733501 containerd[1588]: 2025-05-13 04:48:44.683 [INFO][4534] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
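A note on reading the WorkloadEndpoint dumps in this trace: they are printed with Go's %#v verb, which renders unsigned integer fields in hexadecimal, so Port:0x35 is DNS port 53 and Port:0x23c1 is the CoreDNS metrics port 9153. A one-line check:

package main

import "fmt"

func main() {
	fmt.Println(0x35, 0x23c1) // 53 9153 (DNS and CoreDNS metrics, in decimal)
}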
May 13 04:48:44.733501 containerd[1588]: 2025-05-13 04:48:44.683 [INFO][4534] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.195/26] IPv6=[] ContainerID="32ad179100631432efc01fea170fe31545a819a039d33d25f1d22ee85cc3c5eb" HandleID="k8s-pod-network.32ad179100631432efc01fea170fe31545a819a039d33d25f1d22ee85cc3c5eb" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--lgz9g-eth0" May 13 04:48:44.739726 containerd[1588]: 2025-05-13 04:48:44.689 [INFO][4507] cni-plugin/k8s.go 386: Populated endpoint ContainerID="32ad179100631432efc01fea170fe31545a819a039d33d25f1d22ee85cc3c5eb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lgz9g" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--lgz9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--lgz9g-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bc357230-e098-4af5-9f42-e37066b7df6c", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 48, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-d261562a0f.novalocal", ContainerID:"", Pod:"coredns-7db6d8ff4d-lgz9g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali72de0bbbd6a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:48:44.739726 containerd[1588]: 2025-05-13 04:48:44.689 [INFO][4507] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.47.195/32] ContainerID="32ad179100631432efc01fea170fe31545a819a039d33d25f1d22ee85cc3c5eb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lgz9g" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--lgz9g-eth0" May 13 04:48:44.739726 containerd[1588]: 2025-05-13 04:48:44.689 [INFO][4507] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali72de0bbbd6a ContainerID="32ad179100631432efc01fea170fe31545a819a039d33d25f1d22ee85cc3c5eb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lgz9g" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--lgz9g-eth0" May 13 04:48:44.739726 containerd[1588]: 2025-05-13 04:48:44.705 [INFO][4507] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="32ad179100631432efc01fea170fe31545a819a039d33d25f1d22ee85cc3c5eb" 
Namespace="kube-system" Pod="coredns-7db6d8ff4d-lgz9g" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--lgz9g-eth0" May 13 04:48:44.739726 containerd[1588]: 2025-05-13 04:48:44.707 [INFO][4507] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="32ad179100631432efc01fea170fe31545a819a039d33d25f1d22ee85cc3c5eb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lgz9g" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--lgz9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--lgz9g-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bc357230-e098-4af5-9f42-e37066b7df6c", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 48, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-d261562a0f.novalocal", ContainerID:"32ad179100631432efc01fea170fe31545a819a039d33d25f1d22ee85cc3c5eb", Pod:"coredns-7db6d8ff4d-lgz9g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali72de0bbbd6a", MAC:"56:67:44:61:cd:0c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:48:44.739726 containerd[1588]: 2025-05-13 04:48:44.726 [INFO][4507] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="32ad179100631432efc01fea170fe31545a819a039d33d25f1d22ee85cc3c5eb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lgz9g" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--lgz9g-eth0" May 13 04:48:44.790256 containerd[1588]: time="2025-05-13T04:48:44.790088188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 04:48:44.790256 containerd[1588]: time="2025-05-13T04:48:44.790160272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 04:48:44.790256 containerd[1588]: time="2025-05-13T04:48:44.790215104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:48:44.792687 containerd[1588]: time="2025-05-13T04:48:44.790370143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:48:44.869017 systemd-networkd[1217]: calidb6608103eb: Link UP May 13 04:48:44.881910 systemd-networkd[1217]: calidb6608103eb: Gained carrier May 13 04:48:44.887856 containerd[1588]: time="2025-05-13T04:48:44.886295008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lgz9g,Uid:bc357230-e098-4af5-9f42-e37066b7df6c,Namespace:kube-system,Attempt:1,} returns sandbox id \"32ad179100631432efc01fea170fe31545a819a039d33d25f1d22ee85cc3c5eb\"" May 13 04:48:44.931808 containerd[1588]: 2025-05-13 04:48:44.564 [INFO][4514] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--d261562a0f.novalocal-k8s-csi--node--driver--glr49-eth0 csi-node-driver- calico-system 06093158-05c9-457b-b79c-f692f9759a45 767 0 2025-05-13 04:48:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-3-n-d261562a0f.novalocal csi-node-driver-glr49 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidb6608103eb [] []}} ContainerID="dc7db295fc44e7c2a3a2906a96b6a02a31df799797efe9eb8f9d44e5afe08d55" Namespace="calico-system" Pod="csi-node-driver-glr49" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-csi--node--driver--glr49-" May 13 04:48:44.931808 containerd[1588]: 2025-05-13 04:48:44.565 [INFO][4514] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dc7db295fc44e7c2a3a2906a96b6a02a31df799797efe9eb8f9d44e5afe08d55" Namespace="calico-system" Pod="csi-node-driver-glr49" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-csi--node--driver--glr49-eth0" May 13 04:48:44.931808 containerd[1588]: 2025-05-13 04:48:44.655 [INFO][4544] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc7db295fc44e7c2a3a2906a96b6a02a31df799797efe9eb8f9d44e5afe08d55" HandleID="k8s-pod-network.dc7db295fc44e7c2a3a2906a96b6a02a31df799797efe9eb8f9d44e5afe08d55" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-csi--node--driver--glr49-eth0" May 13 04:48:44.931808 containerd[1588]: 2025-05-13 04:48:44.681 [INFO][4544] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dc7db295fc44e7c2a3a2906a96b6a02a31df799797efe9eb8f9d44e5afe08d55" HandleID="k8s-pod-network.dc7db295fc44e7c2a3a2906a96b6a02a31df799797efe9eb8f9d44e5afe08d55" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-csi--node--driver--glr49-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038bc70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-d261562a0f.novalocal", "pod":"csi-node-driver-glr49", "timestamp":"2025-05-13 04:48:44.655705944 +0000 UTC"}, Hostname:"ci-4081-3-3-n-d261562a0f.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 04:48:44.931808 containerd[1588]: 2025-05-13 04:48:44.685 [INFO][4544] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:48:44.931808 containerd[1588]: 2025-05-13 04:48:44.686 [INFO][4544] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 04:48:44.931808 containerd[1588]: 2025-05-13 04:48:44.686 [INFO][4544] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-d261562a0f.novalocal' May 13 04:48:44.931808 containerd[1588]: 2025-05-13 04:48:44.691 [INFO][4544] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dc7db295fc44e7c2a3a2906a96b6a02a31df799797efe9eb8f9d44e5afe08d55" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:44.931808 containerd[1588]: 2025-05-13 04:48:44.716 [INFO][4544] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:44.931808 containerd[1588]: 2025-05-13 04:48:44.732 [INFO][4544] ipam/ipam.go 489: Trying affinity for 192.168.47.192/26 host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:44.931808 containerd[1588]: 2025-05-13 04:48:44.740 [INFO][4544] ipam/ipam.go 155: Attempting to load block cidr=192.168.47.192/26 host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:44.931808 containerd[1588]: 2025-05-13 04:48:44.748 [INFO][4544] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.192/26 host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:44.931808 containerd[1588]: 2025-05-13 04:48:44.748 [INFO][4544] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.192/26 handle="k8s-pod-network.dc7db295fc44e7c2a3a2906a96b6a02a31df799797efe9eb8f9d44e5afe08d55" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:44.931808 containerd[1588]: 2025-05-13 04:48:44.753 [INFO][4544] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dc7db295fc44e7c2a3a2906a96b6a02a31df799797efe9eb8f9d44e5afe08d55 May 13 04:48:44.931808 containerd[1588]: 2025-05-13 04:48:44.761 [INFO][4544] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.47.192/26 handle="k8s-pod-network.dc7db295fc44e7c2a3a2906a96b6a02a31df799797efe9eb8f9d44e5afe08d55" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:44.931808 containerd[1588]: 2025-05-13 04:48:44.803 [INFO][4544] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.47.196/26] block=192.168.47.192/26 handle="k8s-pod-network.dc7db295fc44e7c2a3a2906a96b6a02a31df799797efe9eb8f9d44e5afe08d55" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:44.931808 containerd[1588]: 2025-05-13 04:48:44.813 [INFO][4544] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.196/26] handle="k8s-pod-network.dc7db295fc44e7c2a3a2906a96b6a02a31df799797efe9eb8f9d44e5afe08d55" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:44.931808 containerd[1588]: 2025-05-13 04:48:44.818 [INFO][4544] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
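For scale: a /26 affinity block holds 2^(32-26) = 64 addresses, so 192.168.47.192/26 covers .192 through .255, and the claims in this trace simply walk it in order (.193, .194, .195, .196, then .197 below). A quick check of the block size:

package main

import "fmt"

func main() {
	// 192.168.47.192/26: host bits = 32 - 26 = 6, so 64 addresses per block.
	fmt.Println(1 << (32 - 26)) // 64
}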
May 13 04:48:44.931808 containerd[1588]: 2025-05-13 04:48:44.823 [INFO][4544] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.196/26] IPv6=[] ContainerID="dc7db295fc44e7c2a3a2906a96b6a02a31df799797efe9eb8f9d44e5afe08d55" HandleID="k8s-pod-network.dc7db295fc44e7c2a3a2906a96b6a02a31df799797efe9eb8f9d44e5afe08d55" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-csi--node--driver--glr49-eth0" May 13 04:48:44.932727 containerd[1588]: 2025-05-13 04:48:44.836 [INFO][4514] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dc7db295fc44e7c2a3a2906a96b6a02a31df799797efe9eb8f9d44e5afe08d55" Namespace="calico-system" Pod="csi-node-driver-glr49" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-csi--node--driver--glr49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--d261562a0f.novalocal-k8s-csi--node--driver--glr49-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06093158-05c9-457b-b79c-f692f9759a45", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 48, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-d261562a0f.novalocal", ContainerID:"", Pod:"csi-node-driver-glr49", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidb6608103eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:48:44.932727 containerd[1588]: 2025-05-13 04:48:44.837 [INFO][4514] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.47.196/32] ContainerID="dc7db295fc44e7c2a3a2906a96b6a02a31df799797efe9eb8f9d44e5afe08d55" Namespace="calico-system" Pod="csi-node-driver-glr49" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-csi--node--driver--glr49-eth0" May 13 04:48:44.932727 containerd[1588]: 2025-05-13 04:48:44.837 [INFO][4514] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb6608103eb ContainerID="dc7db295fc44e7c2a3a2906a96b6a02a31df799797efe9eb8f9d44e5afe08d55" Namespace="calico-system" Pod="csi-node-driver-glr49" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-csi--node--driver--glr49-eth0" May 13 04:48:44.932727 containerd[1588]: 2025-05-13 04:48:44.885 [INFO][4514] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dc7db295fc44e7c2a3a2906a96b6a02a31df799797efe9eb8f9d44e5afe08d55" Namespace="calico-system" Pod="csi-node-driver-glr49" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-csi--node--driver--glr49-eth0" May 13 04:48:44.932727 containerd[1588]: 2025-05-13 04:48:44.891 [INFO][4514] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="dc7db295fc44e7c2a3a2906a96b6a02a31df799797efe9eb8f9d44e5afe08d55" Namespace="calico-system" Pod="csi-node-driver-glr49" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-csi--node--driver--glr49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--d261562a0f.novalocal-k8s-csi--node--driver--glr49-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06093158-05c9-457b-b79c-f692f9759a45", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 48, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-d261562a0f.novalocal", ContainerID:"dc7db295fc44e7c2a3a2906a96b6a02a31df799797efe9eb8f9d44e5afe08d55", Pod:"csi-node-driver-glr49", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidb6608103eb", MAC:"1a:98:a1:0f:c1:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:48:44.932727 containerd[1588]: 2025-05-13 04:48:44.922 [INFO][4514] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dc7db295fc44e7c2a3a2906a96b6a02a31df799797efe9eb8f9d44e5afe08d55" Namespace="calico-system" Pod="csi-node-driver-glr49" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-csi--node--driver--glr49-eth0" May 13 04:48:44.934263 containerd[1588]: time="2025-05-13T04:48:44.932696950Z" level=info msg="CreateContainer within sandbox \"32ad179100631432efc01fea170fe31545a819a039d33d25f1d22ee85cc3c5eb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 04:48:44.949470 systemd-journald[1119]: Under memory pressure, flushing caches. May 13 04:48:44.948098 systemd-resolved[1473]: Under memory pressure, flushing caches. May 13 04:48:44.948168 systemd-resolved[1473]: Flushed all caches. May 13 04:48:44.998953 systemd-networkd[1217]: cali8f250c2ad2a: Link UP May 13 04:48:45.001459 systemd-networkd[1217]: cali8f250c2ad2a: Gained carrier May 13 04:48:45.019392 containerd[1588]: time="2025-05-13T04:48:45.018170110Z" level=info msg="CreateContainer within sandbox \"32ad179100631432efc01fea170fe31545a819a039d33d25f1d22ee85cc3c5eb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"01680f9c9a8b0c6aa58db9f793777e313dcba599ba621284e6d6c85b178440ee\"" May 13 04:48:45.022603 containerd[1588]: time="2025-05-13T04:48:45.022207680Z" level=info msg="StartContainer for \"01680f9c9a8b0c6aa58db9f793777e313dcba599ba621284e6d6c85b178440ee\"" May 13 04:48:45.027128 containerd[1588]: time="2025-05-13T04:48:45.026278451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 04:48:45.027128 containerd[1588]: time="2025-05-13T04:48:45.026414624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 04:48:45.027128 containerd[1588]: time="2025-05-13T04:48:45.026434561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:48:45.027128 containerd[1588]: time="2025-05-13T04:48:45.026557190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:48:45.040561 containerd[1588]: 2025-05-13 04:48:44.592 [INFO][4523] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--rzq8d-eth0 calico-apiserver-6546d6ff4b- calico-apiserver 39ecae4e-9a39-49d6-b199-431373bb0575 769 0 2025-05-13 04:48:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6546d6ff4b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-n-d261562a0f.novalocal calico-apiserver-6546d6ff4b-rzq8d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8f250c2ad2a [] []}} ContainerID="1141f860f22c04866414b3566480fa762b884fc87a66606cae1050e997e5a34d" Namespace="calico-apiserver" Pod="calico-apiserver-6546d6ff4b-rzq8d" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--rzq8d-" May 13 04:48:45.040561 containerd[1588]: 2025-05-13 04:48:44.592 [INFO][4523] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1141f860f22c04866414b3566480fa762b884fc87a66606cae1050e997e5a34d" Namespace="calico-apiserver" Pod="calico-apiserver-6546d6ff4b-rzq8d" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--rzq8d-eth0" May 13 04:48:45.040561 containerd[1588]: 2025-05-13 04:48:44.669 [INFO][4551] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1141f860f22c04866414b3566480fa762b884fc87a66606cae1050e997e5a34d" HandleID="k8s-pod-network.1141f860f22c04866414b3566480fa762b884fc87a66606cae1050e997e5a34d" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--rzq8d-eth0" May 13 04:48:45.040561 containerd[1588]: 2025-05-13 04:48:44.691 [INFO][4551] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1141f860f22c04866414b3566480fa762b884fc87a66606cae1050e997e5a34d" HandleID="k8s-pod-network.1141f860f22c04866414b3566480fa762b884fc87a66606cae1050e997e5a34d" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--rzq8d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000365b00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-n-d261562a0f.novalocal", "pod":"calico-apiserver-6546d6ff4b-rzq8d", "timestamp":"2025-05-13 04:48:44.669437262 +0000 UTC"}, Hostname:"ci-4081-3-3-n-d261562a0f.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 04:48:45.040561 containerd[1588]: 2025-05-13 04:48:44.691 
[INFO][4551] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:48:45.040561 containerd[1588]: 2025-05-13 04:48:44.818 [INFO][4551] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:48:45.040561 containerd[1588]: 2025-05-13 04:48:44.818 [INFO][4551] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-d261562a0f.novalocal' May 13 04:48:45.040561 containerd[1588]: 2025-05-13 04:48:44.833 [INFO][4551] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1141f860f22c04866414b3566480fa762b884fc87a66606cae1050e997e5a34d" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:45.040561 containerd[1588]: 2025-05-13 04:48:44.853 [INFO][4551] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:45.040561 containerd[1588]: 2025-05-13 04:48:44.889 [INFO][4551] ipam/ipam.go 489: Trying affinity for 192.168.47.192/26 host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:45.040561 containerd[1588]: 2025-05-13 04:48:44.897 [INFO][4551] ipam/ipam.go 155: Attempting to load block cidr=192.168.47.192/26 host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:45.040561 containerd[1588]: 2025-05-13 04:48:44.905 [INFO][4551] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.192/26 host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:45.040561 containerd[1588]: 2025-05-13 04:48:44.905 [INFO][4551] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.192/26 handle="k8s-pod-network.1141f860f22c04866414b3566480fa762b884fc87a66606cae1050e997e5a34d" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:45.040561 containerd[1588]: 2025-05-13 04:48:44.922 [INFO][4551] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1141f860f22c04866414b3566480fa762b884fc87a66606cae1050e997e5a34d May 13 04:48:45.040561 containerd[1588]: 2025-05-13 04:48:44.941 [INFO][4551] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.47.192/26 handle="k8s-pod-network.1141f860f22c04866414b3566480fa762b884fc87a66606cae1050e997e5a34d" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:45.040561 containerd[1588]: 2025-05-13 04:48:44.965 [INFO][4551] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.47.197/26] block=192.168.47.192/26 handle="k8s-pod-network.1141f860f22c04866414b3566480fa762b884fc87a66606cae1050e997e5a34d" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:45.040561 containerd[1588]: 2025-05-13 04:48:44.965 [INFO][4551] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.197/26] handle="k8s-pod-network.1141f860f22c04866414b3566480fa762b884fc87a66606cae1050e997e5a34d" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:45.040561 containerd[1588]: 2025-05-13 04:48:44.965 [INFO][4551] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 04:48:45.040561 containerd[1588]: 2025-05-13 04:48:44.965 [INFO][4551] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.197/26] IPv6=[] ContainerID="1141f860f22c04866414b3566480fa762b884fc87a66606cae1050e997e5a34d" HandleID="k8s-pod-network.1141f860f22c04866414b3566480fa762b884fc87a66606cae1050e997e5a34d" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--rzq8d-eth0" May 13 04:48:45.042353 containerd[1588]: 2025-05-13 04:48:44.980 [INFO][4523] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1141f860f22c04866414b3566480fa762b884fc87a66606cae1050e997e5a34d" Namespace="calico-apiserver" Pod="calico-apiserver-6546d6ff4b-rzq8d" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--rzq8d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--rzq8d-eth0", GenerateName:"calico-apiserver-6546d6ff4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"39ecae4e-9a39-49d6-b199-431373bb0575", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 48, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6546d6ff4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-d261562a0f.novalocal", ContainerID:"", Pod:"calico-apiserver-6546d6ff4b-rzq8d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8f250c2ad2a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:48:45.042353 containerd[1588]: 2025-05-13 04:48:44.981 [INFO][4523] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.47.197/32] ContainerID="1141f860f22c04866414b3566480fa762b884fc87a66606cae1050e997e5a34d" Namespace="calico-apiserver" Pod="calico-apiserver-6546d6ff4b-rzq8d" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--rzq8d-eth0" May 13 04:48:45.042353 containerd[1588]: 2025-05-13 04:48:44.981 [INFO][4523] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f250c2ad2a ContainerID="1141f860f22c04866414b3566480fa762b884fc87a66606cae1050e997e5a34d" Namespace="calico-apiserver" Pod="calico-apiserver-6546d6ff4b-rzq8d" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--rzq8d-eth0" May 13 04:48:45.042353 containerd[1588]: 2025-05-13 04:48:45.001 [INFO][4523] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1141f860f22c04866414b3566480fa762b884fc87a66606cae1050e997e5a34d" Namespace="calico-apiserver" Pod="calico-apiserver-6546d6ff4b-rzq8d" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--rzq8d-eth0" May 13 04:48:45.042353 
containerd[1588]: 2025-05-13 04:48:45.002 [INFO][4523] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1141f860f22c04866414b3566480fa762b884fc87a66606cae1050e997e5a34d" Namespace="calico-apiserver" Pod="calico-apiserver-6546d6ff4b-rzq8d" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--rzq8d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--rzq8d-eth0", GenerateName:"calico-apiserver-6546d6ff4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"39ecae4e-9a39-49d6-b199-431373bb0575", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 48, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6546d6ff4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-d261562a0f.novalocal", ContainerID:"1141f860f22c04866414b3566480fa762b884fc87a66606cae1050e997e5a34d", Pod:"calico-apiserver-6546d6ff4b-rzq8d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8f250c2ad2a", MAC:"8e:79:fc:07:fd:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:48:45.042353 containerd[1588]: 2025-05-13 04:48:45.035 [INFO][4523] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1141f860f22c04866414b3566480fa762b884fc87a66606cae1050e997e5a34d" Namespace="calico-apiserver" Pod="calico-apiserver-6546d6ff4b-rzq8d" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--rzq8d-eth0" May 13 04:48:45.138221 containerd[1588]: time="2025-05-13T04:48:45.136843525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-glr49,Uid:06093158-05c9-457b-b79c-f692f9759a45,Namespace:calico-system,Attempt:1,} returns sandbox id \"dc7db295fc44e7c2a3a2906a96b6a02a31df799797efe9eb8f9d44e5afe08d55\"" May 13 04:48:45.158404 containerd[1588]: time="2025-05-13T04:48:45.158018411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 04:48:45.158404 containerd[1588]: time="2025-05-13T04:48:45.158304684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 04:48:45.158404 containerd[1588]: time="2025-05-13T04:48:45.158380285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:48:45.159441 containerd[1588]: time="2025-05-13T04:48:45.159352986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:48:45.187651 containerd[1588]: time="2025-05-13T04:48:45.187576680Z" level=info msg="StartContainer for \"01680f9c9a8b0c6aa58db9f793777e313dcba599ba621284e6d6c85b178440ee\" returns successfully" May 13 04:48:45.318800 containerd[1588]: time="2025-05-13T04:48:45.318378724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6546d6ff4b-rzq8d,Uid:39ecae4e-9a39-49d6-b199-431373bb0575,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1141f860f22c04866414b3566480fa762b884fc87a66606cae1050e997e5a34d\"" May 13 04:48:45.394575 systemd-networkd[1217]: cali92d66e616f3: Gained IPv6LL May 13 04:48:45.457499 systemd-networkd[1217]: cali47c7f516de9: Gained IPv6LL May 13 04:48:45.483235 kubelet[2854]: I0513 04:48:45.481340 2854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-lgz9g" podStartSLOduration=38.481315184 podStartE2EDuration="38.481315184s" podCreationTimestamp="2025-05-13 04:48:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 04:48:45.479684437 +0000 UTC m=+51.646944322" watchObservedRunningTime="2025-05-13 04:48:45.481315184 +0000 UTC m=+51.648575069" May 13 04:48:45.498388 kubelet[2854]: I0513 04:48:45.498121 2854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-86s7p" podStartSLOduration=38.49809814 podStartE2EDuration="38.49809814s" podCreationTimestamp="2025-05-13 04:48:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 04:48:45.497863623 +0000 UTC m=+51.665123518" watchObservedRunningTime="2025-05-13 04:48:45.49809814 +0000 UTC m=+51.665358035" May 13 04:48:45.931213 containerd[1588]: time="2025-05-13T04:48:45.931091760Z" level=info msg="StopPodSandbox for \"ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26\"" May 13 04:48:45.971579 systemd-networkd[1217]: cali72de0bbbd6a: Gained IPv6LL May 13 04:48:46.105525 containerd[1588]: 2025-05-13 04:48:46.043 [INFO][4788] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" May 13 04:48:46.105525 containerd[1588]: 2025-05-13 04:48:46.045 [INFO][4788] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" iface="eth0" netns="/var/run/netns/cni-857cd298-ee07-b558-3068-6f3aecfa682f" May 13 04:48:46.105525 containerd[1588]: 2025-05-13 04:48:46.045 [INFO][4788] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" iface="eth0" netns="/var/run/netns/cni-857cd298-ee07-b558-3068-6f3aecfa682f" May 13 04:48:46.105525 containerd[1588]: 2025-05-13 04:48:46.046 [INFO][4788] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" iface="eth0" netns="/var/run/netns/cni-857cd298-ee07-b558-3068-6f3aecfa682f" May 13 04:48:46.105525 containerd[1588]: 2025-05-13 04:48:46.046 [INFO][4788] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" May 13 04:48:46.105525 containerd[1588]: 2025-05-13 04:48:46.046 [INFO][4788] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" May 13 04:48:46.105525 containerd[1588]: 2025-05-13 04:48:46.090 [INFO][4795] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" HandleID="k8s-pod-network.ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--kube--controllers--6d774d8cdb--sghzl-eth0" May 13 04:48:46.105525 containerd[1588]: 2025-05-13 04:48:46.091 [INFO][4795] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:48:46.105525 containerd[1588]: 2025-05-13 04:48:46.091 [INFO][4795] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:48:46.105525 containerd[1588]: 2025-05-13 04:48:46.100 [WARNING][4795] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" HandleID="k8s-pod-network.ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--kube--controllers--6d774d8cdb--sghzl-eth0" May 13 04:48:46.105525 containerd[1588]: 2025-05-13 04:48:46.100 [INFO][4795] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" HandleID="k8s-pod-network.ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--kube--controllers--6d774d8cdb--sghzl-eth0" May 13 04:48:46.105525 containerd[1588]: 2025-05-13 04:48:46.102 [INFO][4795] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:48:46.105525 containerd[1588]: 2025-05-13 04:48:46.104 [INFO][4788] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" May 13 04:48:46.107233 containerd[1588]: time="2025-05-13T04:48:46.106146946Z" level=info msg="TearDown network for sandbox \"ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26\" successfully" May 13 04:48:46.107233 containerd[1588]: time="2025-05-13T04:48:46.106188804Z" level=info msg="StopPodSandbox for \"ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26\" returns successfully" May 13 04:48:46.107233 containerd[1588]: time="2025-05-13T04:48:46.106938190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d774d8cdb-sghzl,Uid:d349f609-625b-4e67-ac8a-4cb7771ba298,Namespace:calico-system,Attempt:1,}" May 13 04:48:46.112312 systemd[1]: run-netns-cni\x2d857cd298\x2dee07\x2db558\x2d3068\x2d6f3aecfa682f.mount: Deactivated successfully. 
May 13 04:48:46.358677 systemd-networkd[1217]: cali3fd3af8e870: Link UP May 13 04:48:46.358914 systemd-networkd[1217]: cali3fd3af8e870: Gained carrier May 13 04:48:46.395167 containerd[1588]: 2025-05-13 04:48:46.216 [INFO][4804] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--kube--controllers--6d774d8cdb--sghzl-eth0 calico-kube-controllers-6d774d8cdb- calico-system d349f609-625b-4e67-ac8a-4cb7771ba298 799 0 2025-05-13 04:48:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d774d8cdb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-3-n-d261562a0f.novalocal calico-kube-controllers-6d774d8cdb-sghzl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3fd3af8e870 [] []}} ContainerID="ba94921a7fe5764dae6da2527d39de003637e86c8d590af153c13890d2cccbf3" Namespace="calico-system" Pod="calico-kube-controllers-6d774d8cdb-sghzl" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--kube--controllers--6d774d8cdb--sghzl-" May 13 04:48:46.395167 containerd[1588]: 2025-05-13 04:48:46.217 [INFO][4804] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ba94921a7fe5764dae6da2527d39de003637e86c8d590af153c13890d2cccbf3" Namespace="calico-system" Pod="calico-kube-controllers-6d774d8cdb-sghzl" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--kube--controllers--6d774d8cdb--sghzl-eth0" May 13 04:48:46.395167 containerd[1588]: 2025-05-13 04:48:46.261 [INFO][4815] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba94921a7fe5764dae6da2527d39de003637e86c8d590af153c13890d2cccbf3" HandleID="k8s-pod-network.ba94921a7fe5764dae6da2527d39de003637e86c8d590af153c13890d2cccbf3" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--kube--controllers--6d774d8cdb--sghzl-eth0" May 13 04:48:46.395167 containerd[1588]: 2025-05-13 04:48:46.287 [INFO][4815] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ba94921a7fe5764dae6da2527d39de003637e86c8d590af153c13890d2cccbf3" HandleID="k8s-pod-network.ba94921a7fe5764dae6da2527d39de003637e86c8d590af153c13890d2cccbf3" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--kube--controllers--6d774d8cdb--sghzl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000420000), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-d261562a0f.novalocal", "pod":"calico-kube-controllers-6d774d8cdb-sghzl", "timestamp":"2025-05-13 04:48:46.261272392 +0000 UTC"}, Hostname:"ci-4081-3-3-n-d261562a0f.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 04:48:46.395167 containerd[1588]: 2025-05-13 04:48:46.288 [INFO][4815] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:48:46.395167 containerd[1588]: 2025-05-13 04:48:46.288 [INFO][4815] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 04:48:46.395167 containerd[1588]: 2025-05-13 04:48:46.288 [INFO][4815] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-d261562a0f.novalocal' May 13 04:48:46.395167 containerd[1588]: 2025-05-13 04:48:46.291 [INFO][4815] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ba94921a7fe5764dae6da2527d39de003637e86c8d590af153c13890d2cccbf3" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:46.395167 containerd[1588]: 2025-05-13 04:48:46.297 [INFO][4815] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:46.395167 containerd[1588]: 2025-05-13 04:48:46.304 [INFO][4815] ipam/ipam.go 489: Trying affinity for 192.168.47.192/26 host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:46.395167 containerd[1588]: 2025-05-13 04:48:46.309 [INFO][4815] ipam/ipam.go 155: Attempting to load block cidr=192.168.47.192/26 host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:46.395167 containerd[1588]: 2025-05-13 04:48:46.312 [INFO][4815] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.192/26 host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:46.395167 containerd[1588]: 2025-05-13 04:48:46.312 [INFO][4815] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.192/26 handle="k8s-pod-network.ba94921a7fe5764dae6da2527d39de003637e86c8d590af153c13890d2cccbf3" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:46.395167 containerd[1588]: 2025-05-13 04:48:46.315 [INFO][4815] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ba94921a7fe5764dae6da2527d39de003637e86c8d590af153c13890d2cccbf3 May 13 04:48:46.395167 containerd[1588]: 2025-05-13 04:48:46.325 [INFO][4815] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.47.192/26 handle="k8s-pod-network.ba94921a7fe5764dae6da2527d39de003637e86c8d590af153c13890d2cccbf3" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:46.395167 containerd[1588]: 2025-05-13 04:48:46.341 [INFO][4815] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.47.198/26] block=192.168.47.192/26 handle="k8s-pod-network.ba94921a7fe5764dae6da2527d39de003637e86c8d590af153c13890d2cccbf3" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:46.395167 containerd[1588]: 2025-05-13 04:48:46.342 [INFO][4815] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.198/26] handle="k8s-pod-network.ba94921a7fe5764dae6da2527d39de003637e86c8d590af153c13890d2cccbf3" host="ci-4081-3-3-n-d261562a0f.novalocal" May 13 04:48:46.395167 containerd[1588]: 2025-05-13 04:48:46.342 [INFO][4815] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 04:48:46.395167 containerd[1588]: 2025-05-13 04:48:46.343 [INFO][4815] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.198/26] IPv6=[] ContainerID="ba94921a7fe5764dae6da2527d39de003637e86c8d590af153c13890d2cccbf3" HandleID="k8s-pod-network.ba94921a7fe5764dae6da2527d39de003637e86c8d590af153c13890d2cccbf3" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--kube--controllers--6d774d8cdb--sghzl-eth0" May 13 04:48:46.395943 containerd[1588]: 2025-05-13 04:48:46.351 [INFO][4804] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ba94921a7fe5764dae6da2527d39de003637e86c8d590af153c13890d2cccbf3" Namespace="calico-system" Pod="calico-kube-controllers-6d774d8cdb-sghzl" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--kube--controllers--6d774d8cdb--sghzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--kube--controllers--6d774d8cdb--sghzl-eth0", GenerateName:"calico-kube-controllers-6d774d8cdb-", Namespace:"calico-system", SelfLink:"", UID:"d349f609-625b-4e67-ac8a-4cb7771ba298", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 48, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d774d8cdb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-d261562a0f.novalocal", ContainerID:"", Pod:"calico-kube-controllers-6d774d8cdb-sghzl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3fd3af8e870", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:48:46.395943 containerd[1588]: 2025-05-13 04:48:46.351 [INFO][4804] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.47.198/32] ContainerID="ba94921a7fe5764dae6da2527d39de003637e86c8d590af153c13890d2cccbf3" Namespace="calico-system" Pod="calico-kube-controllers-6d774d8cdb-sghzl" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--kube--controllers--6d774d8cdb--sghzl-eth0" May 13 04:48:46.395943 containerd[1588]: 2025-05-13 04:48:46.351 [INFO][4804] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3fd3af8e870 ContainerID="ba94921a7fe5764dae6da2527d39de003637e86c8d590af153c13890d2cccbf3" Namespace="calico-system" Pod="calico-kube-controllers-6d774d8cdb-sghzl" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--kube--controllers--6d774d8cdb--sghzl-eth0" May 13 04:48:46.395943 containerd[1588]: 2025-05-13 04:48:46.359 [INFO][4804] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ba94921a7fe5764dae6da2527d39de003637e86c8d590af153c13890d2cccbf3" Namespace="calico-system" Pod="calico-kube-controllers-6d774d8cdb-sghzl" 
WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--kube--controllers--6d774d8cdb--sghzl-eth0" May 13 04:48:46.395943 containerd[1588]: 2025-05-13 04:48:46.363 [INFO][4804] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ba94921a7fe5764dae6da2527d39de003637e86c8d590af153c13890d2cccbf3" Namespace="calico-system" Pod="calico-kube-controllers-6d774d8cdb-sghzl" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--kube--controllers--6d774d8cdb--sghzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--kube--controllers--6d774d8cdb--sghzl-eth0", GenerateName:"calico-kube-controllers-6d774d8cdb-", Namespace:"calico-system", SelfLink:"", UID:"d349f609-625b-4e67-ac8a-4cb7771ba298", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 48, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d774d8cdb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-d261562a0f.novalocal", ContainerID:"ba94921a7fe5764dae6da2527d39de003637e86c8d590af153c13890d2cccbf3", Pod:"calico-kube-controllers-6d774d8cdb-sghzl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3fd3af8e870", MAC:"92:32:ca:51:50:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:48:46.395943 containerd[1588]: 2025-05-13 04:48:46.386 [INFO][4804] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ba94921a7fe5764dae6da2527d39de003637e86c8d590af153c13890d2cccbf3" Namespace="calico-system" Pod="calico-kube-controllers-6d774d8cdb-sghzl" WorkloadEndpoint="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--kube--controllers--6d774d8cdb--sghzl-eth0" May 13 04:48:46.449496 containerd[1588]: time="2025-05-13T04:48:46.449081278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 04:48:46.449496 containerd[1588]: time="2025-05-13T04:48:46.449177708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 04:48:46.449496 containerd[1588]: time="2025-05-13T04:48:46.449219705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:48:46.449496 containerd[1588]: time="2025-05-13T04:48:46.449386176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:48:46.628534 containerd[1588]: time="2025-05-13T04:48:46.628393346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d774d8cdb-sghzl,Uid:d349f609-625b-4e67-ac8a-4cb7771ba298,Namespace:calico-system,Attempt:1,} returns sandbox id \"ba94921a7fe5764dae6da2527d39de003637e86c8d590af153c13890d2cccbf3\"" May 13 04:48:46.866122 systemd-networkd[1217]: calidb6608103eb: Gained IPv6LL May 13 04:48:46.866513 systemd-networkd[1217]: cali8f250c2ad2a: Gained IPv6LL May 13 04:48:47.633749 systemd-networkd[1217]: cali3fd3af8e870: Gained IPv6LL May 13 04:48:48.417333 containerd[1588]: time="2025-05-13T04:48:48.417182973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:48:48.437896 containerd[1588]: time="2025-05-13T04:48:48.433604453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 13 04:48:48.443328 containerd[1588]: time="2025-05-13T04:48:48.443153346Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:48:48.446195 containerd[1588]: time="2025-05-13T04:48:48.446127188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:48:48.447203 containerd[1588]: time="2025-05-13T04:48:48.447142579Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 4.256051473s" May 13 04:48:48.447203 containerd[1588]: time="2025-05-13T04:48:48.447200457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 13 04:48:48.451031 containerd[1588]: time="2025-05-13T04:48:48.450955674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 13 04:48:48.453604 containerd[1588]: time="2025-05-13T04:48:48.453556170Z" level=info msg="CreateContainer within sandbox \"b360ba2cd68a5c892812e4386535b9e23b04e0c1603cadbca930addd9897dedf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 04:48:48.479005 containerd[1588]: time="2025-05-13T04:48:48.478557077Z" level=info msg="CreateContainer within sandbox \"b360ba2cd68a5c892812e4386535b9e23b04e0c1603cadbca930addd9897dedf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"60b8327cf03df88f449281ebf58d1cdff6c904837555a759e99e7d9c3501cb55\"" May 13 04:48:48.480821 containerd[1588]: time="2025-05-13T04:48:48.479437346Z" level=info msg="StartContainer for \"60b8327cf03df88f449281ebf58d1cdff6c904837555a759e99e7d9c3501cb55\"" May 13 04:48:48.595112 containerd[1588]: time="2025-05-13T04:48:48.595049374Z" level=info msg="StartContainer for \"60b8327cf03df88f449281ebf58d1cdff6c904837555a759e99e7d9c3501cb55\" returns successfully" May 13 04:48:50.105562 kubelet[2854]: I0513 04:48:50.104705 2854 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="calico-apiserver/calico-apiserver-6546d6ff4b-lfnfh" podStartSLOduration=31.843320304 podStartE2EDuration="36.104213527s" podCreationTimestamp="2025-05-13 04:48:14 +0000 UTC" firstStartedPulling="2025-05-13 04:48:44.188431202 +0000 UTC m=+50.355691087" lastFinishedPulling="2025-05-13 04:48:48.449324405 +0000 UTC m=+54.616584310" observedRunningTime="2025-05-13 04:48:49.512829717 +0000 UTC m=+55.680089612" watchObservedRunningTime="2025-05-13 04:48:50.104213527 +0000 UTC m=+56.271473462" May 13 04:48:50.800887 containerd[1588]: time="2025-05-13T04:48:50.800798686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:48:50.802701 containerd[1588]: time="2025-05-13T04:48:50.802031594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 13 04:48:50.803399 containerd[1588]: time="2025-05-13T04:48:50.803363196Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:48:50.807572 containerd[1588]: time="2025-05-13T04:48:50.807484768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:48:50.808850 containerd[1588]: time="2025-05-13T04:48:50.808335734Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 2.357315911s" May 13 04:48:50.808850 containerd[1588]: time="2025-05-13T04:48:50.808402038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 13 04:48:50.809950 containerd[1588]: time="2025-05-13T04:48:50.809918996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 04:48:50.813237 containerd[1588]: time="2025-05-13T04:48:50.813186356Z" level=info msg="CreateContainer within sandbox \"dc7db295fc44e7c2a3a2906a96b6a02a31df799797efe9eb8f9d44e5afe08d55\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 13 04:48:50.841546 containerd[1588]: time="2025-05-13T04:48:50.841478532Z" level=info msg="CreateContainer within sandbox \"dc7db295fc44e7c2a3a2906a96b6a02a31df799797efe9eb8f9d44e5afe08d55\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"96e74269e527409ace73566df82adc0e032c3488dadd833648a95a106c3fb620\"" May 13 04:48:50.842339 containerd[1588]: time="2025-05-13T04:48:50.842168989Z" level=info msg="StartContainer for \"96e74269e527409ace73566df82adc0e032c3488dadd833648a95a106c3fb620\"" May 13 04:48:50.935655 containerd[1588]: time="2025-05-13T04:48:50.935605853Z" level=info msg="StartContainer for \"96e74269e527409ace73566df82adc0e032c3488dadd833648a95a106c3fb620\" returns successfully" May 13 04:48:51.338602 containerd[1588]: time="2025-05-13T04:48:51.338328507Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:48:51.340141 containerd[1588]: 
time="2025-05-13T04:48:51.339522753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 13 04:48:51.348958 containerd[1588]: time="2025-05-13T04:48:51.348849701Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 538.858521ms" May 13 04:48:51.348958 containerd[1588]: time="2025-05-13T04:48:51.348935370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 13 04:48:51.355339 containerd[1588]: time="2025-05-13T04:48:51.353774482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 13 04:48:51.367056 containerd[1588]: time="2025-05-13T04:48:51.366912886Z" level=info msg="CreateContainer within sandbox \"1141f860f22c04866414b3566480fa762b884fc87a66606cae1050e997e5a34d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 04:48:51.401551 containerd[1588]: time="2025-05-13T04:48:51.399762947Z" level=info msg="CreateContainer within sandbox \"1141f860f22c04866414b3566480fa762b884fc87a66606cae1050e997e5a34d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7b9a3fe838a13d000cbbc24a7369d727e8c175c91c1796fd26b5fabe27f07e3c\"" May 13 04:48:51.405249 containerd[1588]: time="2025-05-13T04:48:51.404903601Z" level=info msg="StartContainer for \"7b9a3fe838a13d000cbbc24a7369d727e8c175c91c1796fd26b5fabe27f07e3c\"" May 13 04:48:51.527661 containerd[1588]: time="2025-05-13T04:48:51.526848458Z" level=info msg="StartContainer for \"7b9a3fe838a13d000cbbc24a7369d727e8c175c91c1796fd26b5fabe27f07e3c\" returns successfully" May 13 04:48:52.577114 kubelet[2854]: I0513 04:48:52.575914 2854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6546d6ff4b-rzq8d" podStartSLOduration=32.54466792 podStartE2EDuration="38.574336036s" podCreationTimestamp="2025-05-13 04:48:14 +0000 UTC" firstStartedPulling="2025-05-13 04:48:45.321264017 +0000 UTC m=+51.488523902" lastFinishedPulling="2025-05-13 04:48:51.350932083 +0000 UTC m=+57.518192018" observedRunningTime="2025-05-13 04:48:52.557500455 +0000 UTC m=+58.724760440" watchObservedRunningTime="2025-05-13 04:48:52.574336036 +0000 UTC m=+58.741595971" May 13 04:48:53.537178 kubelet[2854]: I0513 04:48:53.535910 2854 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 04:48:53.942468 containerd[1588]: time="2025-05-13T04:48:53.942352414Z" level=info msg="StopPodSandbox for \"2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1\"" May 13 04:48:54.128012 containerd[1588]: 2025-05-13 04:48:54.044 [WARNING][5028] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--lfnfh-eth0", GenerateName:"calico-apiserver-6546d6ff4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"de615ac3-0a0d-4ec1-8a3d-4e9726892ff6", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 48, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6546d6ff4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-d261562a0f.novalocal", ContainerID:"b360ba2cd68a5c892812e4386535b9e23b04e0c1603cadbca930addd9897dedf", Pod:"calico-apiserver-6546d6ff4b-lfnfh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali92d66e616f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:48:54.128012 containerd[1588]: 2025-05-13 04:48:54.045 [INFO][5028] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" May 13 04:48:54.128012 containerd[1588]: 2025-05-13 04:48:54.045 [INFO][5028] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" iface="eth0" netns="" May 13 04:48:54.128012 containerd[1588]: 2025-05-13 04:48:54.045 [INFO][5028] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" May 13 04:48:54.128012 containerd[1588]: 2025-05-13 04:48:54.045 [INFO][5028] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" May 13 04:48:54.128012 containerd[1588]: 2025-05-13 04:48:54.105 [INFO][5035] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" HandleID="k8s-pod-network.2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--lfnfh-eth0" May 13 04:48:54.128012 containerd[1588]: 2025-05-13 04:48:54.106 [INFO][5035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:48:54.128012 containerd[1588]: 2025-05-13 04:48:54.106 [INFO][5035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:48:54.128012 containerd[1588]: 2025-05-13 04:48:54.118 [WARNING][5035] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" HandleID="k8s-pod-network.2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--lfnfh-eth0" May 13 04:48:54.128012 containerd[1588]: 2025-05-13 04:48:54.118 [INFO][5035] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" HandleID="k8s-pod-network.2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--lfnfh-eth0" May 13 04:48:54.128012 containerd[1588]: 2025-05-13 04:48:54.121 [INFO][5035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:48:54.128012 containerd[1588]: 2025-05-13 04:48:54.123 [INFO][5028] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" May 13 04:48:54.128012 containerd[1588]: time="2025-05-13T04:48:54.127680710Z" level=info msg="TearDown network for sandbox \"2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1\" successfully" May 13 04:48:54.128012 containerd[1588]: time="2025-05-13T04:48:54.127718902Z" level=info msg="StopPodSandbox for \"2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1\" returns successfully" May 13 04:48:54.131212 containerd[1588]: time="2025-05-13T04:48:54.130782525Z" level=info msg="RemovePodSandbox for \"2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1\"" May 13 04:48:54.131212 containerd[1588]: time="2025-05-13T04:48:54.130831226Z" level=info msg="Forcibly stopping sandbox \"2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1\"" May 13 04:48:54.233950 containerd[1588]: 2025-05-13 04:48:54.197 [WARNING][5053] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--lfnfh-eth0", GenerateName:"calico-apiserver-6546d6ff4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"de615ac3-0a0d-4ec1-8a3d-4e9726892ff6", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 48, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6546d6ff4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-d261562a0f.novalocal", ContainerID:"b360ba2cd68a5c892812e4386535b9e23b04e0c1603cadbca930addd9897dedf", Pod:"calico-apiserver-6546d6ff4b-lfnfh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali92d66e616f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:48:54.233950 containerd[1588]: 2025-05-13 04:48:54.197 [INFO][5053] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" May 13 04:48:54.233950 containerd[1588]: 2025-05-13 04:48:54.197 [INFO][5053] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" iface="eth0" netns="" May 13 04:48:54.233950 containerd[1588]: 2025-05-13 04:48:54.197 [INFO][5053] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" May 13 04:48:54.233950 containerd[1588]: 2025-05-13 04:48:54.197 [INFO][5053] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" May 13 04:48:54.233950 containerd[1588]: 2025-05-13 04:48:54.222 [INFO][5061] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" HandleID="k8s-pod-network.2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--lfnfh-eth0" May 13 04:48:54.233950 containerd[1588]: 2025-05-13 04:48:54.222 [INFO][5061] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:48:54.233950 containerd[1588]: 2025-05-13 04:48:54.222 [INFO][5061] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:48:54.233950 containerd[1588]: 2025-05-13 04:48:54.229 [WARNING][5061] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" HandleID="k8s-pod-network.2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--lfnfh-eth0" May 13 04:48:54.233950 containerd[1588]: 2025-05-13 04:48:54.229 [INFO][5061] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" HandleID="k8s-pod-network.2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--lfnfh-eth0" May 13 04:48:54.233950 containerd[1588]: 2025-05-13 04:48:54.231 [INFO][5061] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:48:54.233950 containerd[1588]: 2025-05-13 04:48:54.232 [INFO][5053] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1" May 13 04:48:54.234682 containerd[1588]: time="2025-05-13T04:48:54.234009973Z" level=info msg="TearDown network for sandbox \"2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1\" successfully" May 13 04:48:54.335639 containerd[1588]: time="2025-05-13T04:48:54.335348408Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 04:48:54.335639 containerd[1588]: time="2025-05-13T04:48:54.335468381Z" level=info msg="RemovePodSandbox \"2c6fca743831b9e46b51f51db88bed765df1edfa6aaf76afcea3563c46e2eca1\" returns successfully" May 13 04:48:54.339164 containerd[1588]: time="2025-05-13T04:48:54.339087582Z" level=info msg="StopPodSandbox for \"ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26\"" May 13 04:48:54.494286 containerd[1588]: 2025-05-13 04:48:54.421 [WARNING][5083] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--kube--controllers--6d774d8cdb--sghzl-eth0", GenerateName:"calico-kube-controllers-6d774d8cdb-", Namespace:"calico-system", SelfLink:"", UID:"d349f609-625b-4e67-ac8a-4cb7771ba298", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 48, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d774d8cdb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-d261562a0f.novalocal", ContainerID:"ba94921a7fe5764dae6da2527d39de003637e86c8d590af153c13890d2cccbf3", Pod:"calico-kube-controllers-6d774d8cdb-sghzl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3fd3af8e870", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:48:54.494286 containerd[1588]: 2025-05-13 04:48:54.421 [INFO][5083] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" May 13 04:48:54.494286 containerd[1588]: 2025-05-13 04:48:54.421 [INFO][5083] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" iface="eth0" netns="" May 13 04:48:54.494286 containerd[1588]: 2025-05-13 04:48:54.421 [INFO][5083] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" May 13 04:48:54.494286 containerd[1588]: 2025-05-13 04:48:54.421 [INFO][5083] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" May 13 04:48:54.494286 containerd[1588]: 2025-05-13 04:48:54.474 [INFO][5090] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" HandleID="k8s-pod-network.ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--kube--controllers--6d774d8cdb--sghzl-eth0" May 13 04:48:54.494286 containerd[1588]: 2025-05-13 04:48:54.474 [INFO][5090] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:48:54.494286 containerd[1588]: 2025-05-13 04:48:54.476 [INFO][5090] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:48:54.494286 containerd[1588]: 2025-05-13 04:48:54.486 [WARNING][5090] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" HandleID="k8s-pod-network.ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--kube--controllers--6d774d8cdb--sghzl-eth0" May 13 04:48:54.494286 containerd[1588]: 2025-05-13 04:48:54.486 [INFO][5090] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" HandleID="k8s-pod-network.ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--kube--controllers--6d774d8cdb--sghzl-eth0" May 13 04:48:54.494286 containerd[1588]: 2025-05-13 04:48:54.489 [INFO][5090] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:48:54.494286 containerd[1588]: 2025-05-13 04:48:54.491 [INFO][5083] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" May 13 04:48:54.496193 containerd[1588]: time="2025-05-13T04:48:54.494389303Z" level=info msg="TearDown network for sandbox \"ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26\" successfully" May 13 04:48:54.496193 containerd[1588]: time="2025-05-13T04:48:54.494431521Z" level=info msg="StopPodSandbox for \"ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26\" returns successfully" May 13 04:48:54.496547 containerd[1588]: time="2025-05-13T04:48:54.496369495Z" level=info msg="RemovePodSandbox for \"ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26\"" May 13 04:48:54.496547 containerd[1588]: time="2025-05-13T04:48:54.496399983Z" level=info msg="Forcibly stopping sandbox \"ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26\"" May 13 04:48:54.663270 containerd[1588]: 2025-05-13 04:48:54.612 [WARNING][5108] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--kube--controllers--6d774d8cdb--sghzl-eth0", GenerateName:"calico-kube-controllers-6d774d8cdb-", Namespace:"calico-system", SelfLink:"", UID:"d349f609-625b-4e67-ac8a-4cb7771ba298", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 48, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d774d8cdb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-d261562a0f.novalocal", ContainerID:"ba94921a7fe5764dae6da2527d39de003637e86c8d590af153c13890d2cccbf3", Pod:"calico-kube-controllers-6d774d8cdb-sghzl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3fd3af8e870", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:48:54.663270 containerd[1588]: 2025-05-13 04:48:54.614 [INFO][5108] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" May 13 04:48:54.663270 containerd[1588]: 2025-05-13 04:48:54.614 [INFO][5108] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" iface="eth0" netns="" May 13 04:48:54.663270 containerd[1588]: 2025-05-13 04:48:54.614 [INFO][5108] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" May 13 04:48:54.663270 containerd[1588]: 2025-05-13 04:48:54.614 [INFO][5108] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" May 13 04:48:54.663270 containerd[1588]: 2025-05-13 04:48:54.647 [INFO][5115] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" HandleID="k8s-pod-network.ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--kube--controllers--6d774d8cdb--sghzl-eth0" May 13 04:48:54.663270 containerd[1588]: 2025-05-13 04:48:54.647 [INFO][5115] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:48:54.663270 containerd[1588]: 2025-05-13 04:48:54.647 [INFO][5115] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:48:54.663270 containerd[1588]: 2025-05-13 04:48:54.655 [WARNING][5115] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" HandleID="k8s-pod-network.ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--kube--controllers--6d774d8cdb--sghzl-eth0" May 13 04:48:54.663270 containerd[1588]: 2025-05-13 04:48:54.655 [INFO][5115] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" HandleID="k8s-pod-network.ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--kube--controllers--6d774d8cdb--sghzl-eth0" May 13 04:48:54.663270 containerd[1588]: 2025-05-13 04:48:54.657 [INFO][5115] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:48:54.663270 containerd[1588]: 2025-05-13 04:48:54.660 [INFO][5108] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26" May 13 04:48:54.663270 containerd[1588]: time="2025-05-13T04:48:54.663222841Z" level=info msg="TearDown network for sandbox \"ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26\" successfully" May 13 04:48:54.669426 containerd[1588]: time="2025-05-13T04:48:54.668509801Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 04:48:54.669426 containerd[1588]: time="2025-05-13T04:48:54.668572739Z" level=info msg="RemovePodSandbox \"ff19fd9ef7b40cefc1dcf0d5962223fcbc6b2c0011aaf7e95b31a75da9ad3a26\" returns successfully" May 13 04:48:54.670576 containerd[1588]: time="2025-05-13T04:48:54.670232774Z" level=info msg="StopPodSandbox for \"fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24\"" May 13 04:48:54.778468 containerd[1588]: 2025-05-13 04:48:54.732 [WARNING][5133] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--d261562a0f.novalocal-k8s-csi--node--driver--glr49-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06093158-05c9-457b-b79c-f692f9759a45", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 48, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-d261562a0f.novalocal", ContainerID:"dc7db295fc44e7c2a3a2906a96b6a02a31df799797efe9eb8f9d44e5afe08d55", Pod:"csi-node-driver-glr49", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidb6608103eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:48:54.778468 containerd[1588]: 2025-05-13 04:48:54.733 [INFO][5133] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" May 13 04:48:54.778468 containerd[1588]: 2025-05-13 04:48:54.733 [INFO][5133] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" iface="eth0" netns="" May 13 04:48:54.778468 containerd[1588]: 2025-05-13 04:48:54.733 [INFO][5133] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" May 13 04:48:54.778468 containerd[1588]: 2025-05-13 04:48:54.733 [INFO][5133] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" May 13 04:48:54.778468 containerd[1588]: 2025-05-13 04:48:54.760 [INFO][5141] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" HandleID="k8s-pod-network.fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-csi--node--driver--glr49-eth0" May 13 04:48:54.778468 containerd[1588]: 2025-05-13 04:48:54.760 [INFO][5141] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:48:54.778468 containerd[1588]: 2025-05-13 04:48:54.760 [INFO][5141] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:48:54.778468 containerd[1588]: 2025-05-13 04:48:54.770 [WARNING][5141] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" HandleID="k8s-pod-network.fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-csi--node--driver--glr49-eth0" May 13 04:48:54.778468 containerd[1588]: 2025-05-13 04:48:54.770 [INFO][5141] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" HandleID="k8s-pod-network.fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-csi--node--driver--glr49-eth0" May 13 04:48:54.778468 containerd[1588]: 2025-05-13 04:48:54.773 [INFO][5141] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:48:54.778468 containerd[1588]: 2025-05-13 04:48:54.775 [INFO][5133] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" May 13 04:48:54.779366 containerd[1588]: time="2025-05-13T04:48:54.779102225Z" level=info msg="TearDown network for sandbox \"fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24\" successfully" May 13 04:48:54.779366 containerd[1588]: time="2025-05-13T04:48:54.779135878Z" level=info msg="StopPodSandbox for \"fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24\" returns successfully" May 13 04:48:54.780748 containerd[1588]: time="2025-05-13T04:48:54.780281876Z" level=info msg="RemovePodSandbox for \"fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24\"" May 13 04:48:54.780748 containerd[1588]: time="2025-05-13T04:48:54.780317392Z" level=info msg="Forcibly stopping sandbox \"fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24\"" May 13 04:48:54.905733 containerd[1588]: 2025-05-13 04:48:54.851 [WARNING][5159] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--d261562a0f.novalocal-k8s-csi--node--driver--glr49-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06093158-05c9-457b-b79c-f692f9759a45", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 48, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-d261562a0f.novalocal", ContainerID:"dc7db295fc44e7c2a3a2906a96b6a02a31df799797efe9eb8f9d44e5afe08d55", Pod:"csi-node-driver-glr49", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidb6608103eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:48:54.905733 containerd[1588]: 2025-05-13 04:48:54.851 [INFO][5159] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" May 13 04:48:54.905733 containerd[1588]: 2025-05-13 04:48:54.852 [INFO][5159] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" iface="eth0" netns="" May 13 04:48:54.905733 containerd[1588]: 2025-05-13 04:48:54.852 [INFO][5159] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" May 13 04:48:54.905733 containerd[1588]: 2025-05-13 04:48:54.852 [INFO][5159] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" May 13 04:48:54.905733 containerd[1588]: 2025-05-13 04:48:54.886 [INFO][5167] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" HandleID="k8s-pod-network.fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-csi--node--driver--glr49-eth0" May 13 04:48:54.905733 containerd[1588]: 2025-05-13 04:48:54.886 [INFO][5167] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:48:54.905733 containerd[1588]: 2025-05-13 04:48:54.887 [INFO][5167] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:48:54.905733 containerd[1588]: 2025-05-13 04:48:54.896 [WARNING][5167] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" HandleID="k8s-pod-network.fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-csi--node--driver--glr49-eth0" May 13 04:48:54.905733 containerd[1588]: 2025-05-13 04:48:54.898 [INFO][5167] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" HandleID="k8s-pod-network.fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-csi--node--driver--glr49-eth0" May 13 04:48:54.905733 containerd[1588]: 2025-05-13 04:48:54.901 [INFO][5167] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:48:54.905733 containerd[1588]: 2025-05-13 04:48:54.904 [INFO][5159] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24" May 13 04:48:54.906473 containerd[1588]: time="2025-05-13T04:48:54.905787915Z" level=info msg="TearDown network for sandbox \"fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24\" successfully" May 13 04:48:55.380644 containerd[1588]: time="2025-05-13T04:48:55.380102407Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 04:48:55.380644 containerd[1588]: time="2025-05-13T04:48:55.380263918Z" level=info msg="RemovePodSandbox \"fb89a391926839db2b9474f625108c41ff5184fbdbc1b79493ea187b4f97dd24\" returns successfully" May 13 04:48:55.384471 containerd[1588]: time="2025-05-13T04:48:55.381966655Z" level=info msg="StopPodSandbox for \"cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f\"" May 13 04:48:55.399218 containerd[1588]: time="2025-05-13T04:48:55.398968850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:48:55.400473 containerd[1588]: time="2025-05-13T04:48:55.400393537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 13 04:48:55.402886 containerd[1588]: time="2025-05-13T04:48:55.402767455Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:48:55.412723 containerd[1588]: time="2025-05-13T04:48:55.412294196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:48:55.414431 containerd[1588]: time="2025-05-13T04:48:55.414275582Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 4.06043651s" May 13 04:48:55.414431 containerd[1588]: time="2025-05-13T04:48:55.414314293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns 
image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 13 04:48:55.421354 containerd[1588]: time="2025-05-13T04:48:55.421276069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 13 04:48:55.449638 containerd[1588]: time="2025-05-13T04:48:55.449015483Z" level=info msg="CreateContainer within sandbox \"ba94921a7fe5764dae6da2527d39de003637e86c8d590af153c13890d2cccbf3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 13 04:48:55.487208 containerd[1588]: time="2025-05-13T04:48:55.487147953Z" level=info msg="CreateContainer within sandbox \"ba94921a7fe5764dae6da2527d39de003637e86c8d590af153c13890d2cccbf3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"872310e87174566962965361e88b982905b1bad34c06e0e53cbb82af3e7521b6\"" May 13 04:48:55.491031 containerd[1588]: time="2025-05-13T04:48:55.490991543Z" level=info msg="StartContainer for \"872310e87174566962965361e88b982905b1bad34c06e0e53cbb82af3e7521b6\"" May 13 04:48:55.591774 containerd[1588]: 2025-05-13 04:48:55.529 [WARNING][5185] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--rzq8d-eth0", GenerateName:"calico-apiserver-6546d6ff4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"39ecae4e-9a39-49d6-b199-431373bb0575", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 48, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6546d6ff4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-d261562a0f.novalocal", ContainerID:"1141f860f22c04866414b3566480fa762b884fc87a66606cae1050e997e5a34d", Pod:"calico-apiserver-6546d6ff4b-rzq8d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8f250c2ad2a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:48:55.591774 containerd[1588]: 2025-05-13 04:48:55.529 [INFO][5185] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" May 13 04:48:55.591774 containerd[1588]: 2025-05-13 04:48:55.529 [INFO][5185] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" iface="eth0" netns="" May 13 04:48:55.591774 containerd[1588]: 2025-05-13 04:48:55.529 [INFO][5185] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" May 13 04:48:55.591774 containerd[1588]: 2025-05-13 04:48:55.529 [INFO][5185] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" May 13 04:48:55.591774 containerd[1588]: 2025-05-13 04:48:55.567 [INFO][5226] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" HandleID="k8s-pod-network.cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--rzq8d-eth0" May 13 04:48:55.591774 containerd[1588]: 2025-05-13 04:48:55.568 [INFO][5226] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:48:55.591774 containerd[1588]: 2025-05-13 04:48:55.568 [INFO][5226] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:48:55.591774 containerd[1588]: 2025-05-13 04:48:55.576 [WARNING][5226] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" HandleID="k8s-pod-network.cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--rzq8d-eth0" May 13 04:48:55.591774 containerd[1588]: 2025-05-13 04:48:55.577 [INFO][5226] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" HandleID="k8s-pod-network.cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--rzq8d-eth0" May 13 04:48:55.591774 containerd[1588]: 2025-05-13 04:48:55.582 [INFO][5226] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:48:55.591774 containerd[1588]: 2025-05-13 04:48:55.590 [INFO][5185] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" May 13 04:48:55.592908 containerd[1588]: time="2025-05-13T04:48:55.591745673Z" level=info msg="TearDown network for sandbox \"cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f\" successfully" May 13 04:48:55.592908 containerd[1588]: time="2025-05-13T04:48:55.592310336Z" level=info msg="StopPodSandbox for \"cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f\" returns successfully" May 13 04:48:55.593847 containerd[1588]: time="2025-05-13T04:48:55.593566760Z" level=info msg="RemovePodSandbox for \"cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f\"" May 13 04:48:55.593847 containerd[1588]: time="2025-05-13T04:48:55.593597387Z" level=info msg="Forcibly stopping sandbox \"cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f\"" May 13 04:48:55.686921 containerd[1588]: time="2025-05-13T04:48:55.685759271Z" level=info msg="StartContainer for \"872310e87174566962965361e88b982905b1bad34c06e0e53cbb82af3e7521b6\" returns successfully" May 13 04:48:55.755050 containerd[1588]: 2025-05-13 04:48:55.671 [WARNING][5267] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--rzq8d-eth0", GenerateName:"calico-apiserver-6546d6ff4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"39ecae4e-9a39-49d6-b199-431373bb0575", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 48, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6546d6ff4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-d261562a0f.novalocal", ContainerID:"1141f860f22c04866414b3566480fa762b884fc87a66606cae1050e997e5a34d", Pod:"calico-apiserver-6546d6ff4b-rzq8d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8f250c2ad2a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:48:55.755050 containerd[1588]: 2025-05-13 04:48:55.672 [INFO][5267] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" May 13 04:48:55.755050 containerd[1588]: 2025-05-13 04:48:55.673 [INFO][5267] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" iface="eth0" netns="" May 13 04:48:55.755050 containerd[1588]: 2025-05-13 04:48:55.673 [INFO][5267] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" May 13 04:48:55.755050 containerd[1588]: 2025-05-13 04:48:55.673 [INFO][5267] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" May 13 04:48:55.755050 containerd[1588]: 2025-05-13 04:48:55.738 [INFO][5292] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" HandleID="k8s-pod-network.cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--rzq8d-eth0" May 13 04:48:55.755050 containerd[1588]: 2025-05-13 04:48:55.738 [INFO][5292] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:48:55.755050 containerd[1588]: 2025-05-13 04:48:55.738 [INFO][5292] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:48:55.755050 containerd[1588]: 2025-05-13 04:48:55.748 [WARNING][5292] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" HandleID="k8s-pod-network.cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--rzq8d-eth0" May 13 04:48:55.755050 containerd[1588]: 2025-05-13 04:48:55.748 [INFO][5292] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" HandleID="k8s-pod-network.cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-calico--apiserver--6546d6ff4b--rzq8d-eth0" May 13 04:48:55.755050 containerd[1588]: 2025-05-13 04:48:55.750 [INFO][5292] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:48:55.755050 containerd[1588]: 2025-05-13 04:48:55.751 [INFO][5267] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f" May 13 04:48:55.755604 containerd[1588]: time="2025-05-13T04:48:55.755090727Z" level=info msg="TearDown network for sandbox \"cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f\" successfully" May 13 04:48:55.759679 containerd[1588]: time="2025-05-13T04:48:55.759619725Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 04:48:55.759853 containerd[1588]: time="2025-05-13T04:48:55.759757973Z" level=info msg="RemovePodSandbox \"cf003c3403e59ff6143bb8290062c8005be6fe616c661ca6404498aa6b0b430f\" returns successfully" May 13 04:48:55.760752 containerd[1588]: time="2025-05-13T04:48:55.760727130Z" level=info msg="StopPodSandbox for \"947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec\"" May 13 04:48:55.861286 containerd[1588]: 2025-05-13 04:48:55.820 [WARNING][5318] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--86s7p-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9d3536c9-92ad-4cae-9fd7-bb9fd598e9bb", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 48, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-d261562a0f.novalocal", ContainerID:"26c5b0507b914dc06d54b4cfaad3c405efea3ae6dfcab9d0ab759d17b18b239e", Pod:"coredns-7db6d8ff4d-86s7p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali47c7f516de9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:48:55.861286 containerd[1588]: 2025-05-13 04:48:55.821 [INFO][5318] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" May 13 04:48:55.861286 containerd[1588]: 2025-05-13 04:48:55.821 [INFO][5318] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" iface="eth0" netns="" May 13 04:48:55.861286 containerd[1588]: 2025-05-13 04:48:55.821 [INFO][5318] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" May 13 04:48:55.861286 containerd[1588]: 2025-05-13 04:48:55.821 [INFO][5318] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" May 13 04:48:55.861286 containerd[1588]: 2025-05-13 04:48:55.848 [INFO][5325] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" HandleID="k8s-pod-network.947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--86s7p-eth0" May 13 04:48:55.861286 containerd[1588]: 2025-05-13 04:48:55.848 [INFO][5325] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:48:55.861286 containerd[1588]: 2025-05-13 04:48:55.848 [INFO][5325] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 04:48:55.861286 containerd[1588]: 2025-05-13 04:48:55.856 [WARNING][5325] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" HandleID="k8s-pod-network.947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--86s7p-eth0" May 13 04:48:55.861286 containerd[1588]: 2025-05-13 04:48:55.856 [INFO][5325] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" HandleID="k8s-pod-network.947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--86s7p-eth0" May 13 04:48:55.861286 containerd[1588]: 2025-05-13 04:48:55.858 [INFO][5325] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:48:55.861286 containerd[1588]: 2025-05-13 04:48:55.860 [INFO][5318] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" May 13 04:48:55.862135 containerd[1588]: time="2025-05-13T04:48:55.861930760Z" level=info msg="TearDown network for sandbox \"947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec\" successfully" May 13 04:48:55.862135 containerd[1588]: time="2025-05-13T04:48:55.861964924Z" level=info msg="StopPodSandbox for \"947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec\" returns successfully" May 13 04:48:55.862600 containerd[1588]: time="2025-05-13T04:48:55.862536239Z" level=info msg="RemovePodSandbox for \"947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec\"" May 13 04:48:55.862600 containerd[1588]: time="2025-05-13T04:48:55.862582765Z" level=info msg="Forcibly stopping sandbox \"947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec\"" May 13 04:48:55.945899 containerd[1588]: 2025-05-13 04:48:55.901 [WARNING][5343] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--86s7p-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9d3536c9-92ad-4cae-9fd7-bb9fd598e9bb", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 48, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-d261562a0f.novalocal", ContainerID:"26c5b0507b914dc06d54b4cfaad3c405efea3ae6dfcab9d0ab759d17b18b239e", Pod:"coredns-7db6d8ff4d-86s7p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali47c7f516de9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:48:55.945899 containerd[1588]: 2025-05-13 04:48:55.902 [INFO][5343] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" May 13 04:48:55.945899 containerd[1588]: 2025-05-13 04:48:55.902 [INFO][5343] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" iface="eth0" netns="" May 13 04:48:55.945899 containerd[1588]: 2025-05-13 04:48:55.902 [INFO][5343] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" May 13 04:48:55.945899 containerd[1588]: 2025-05-13 04:48:55.902 [INFO][5343] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" May 13 04:48:55.945899 containerd[1588]: 2025-05-13 04:48:55.926 [INFO][5350] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" HandleID="k8s-pod-network.947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--86s7p-eth0" May 13 04:48:55.945899 containerd[1588]: 2025-05-13 04:48:55.926 [INFO][5350] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:48:55.945899 containerd[1588]: 2025-05-13 04:48:55.926 [INFO][5350] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 04:48:55.945899 containerd[1588]: 2025-05-13 04:48:55.938 [WARNING][5350] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" HandleID="k8s-pod-network.947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--86s7p-eth0" May 13 04:48:55.945899 containerd[1588]: 2025-05-13 04:48:55.939 [INFO][5350] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" HandleID="k8s-pod-network.947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--86s7p-eth0" May 13 04:48:55.945899 containerd[1588]: 2025-05-13 04:48:55.942 [INFO][5350] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:48:55.945899 containerd[1588]: 2025-05-13 04:48:55.944 [INFO][5343] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec" May 13 04:48:55.946722 containerd[1588]: time="2025-05-13T04:48:55.945889401Z" level=info msg="TearDown network for sandbox \"947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec\" successfully" May 13 04:48:55.952695 containerd[1588]: time="2025-05-13T04:48:55.952639613Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 04:48:55.952960 containerd[1588]: time="2025-05-13T04:48:55.952718690Z" level=info msg="RemovePodSandbox \"947c260556aeed944101e5d4b243bf8553d14782905f183263cddec39d773bec\" returns successfully" May 13 04:48:55.953935 containerd[1588]: time="2025-05-13T04:48:55.953564648Z" level=info msg="StopPodSandbox for \"cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817\"" May 13 04:48:56.031316 containerd[1588]: 2025-05-13 04:48:55.995 [WARNING][5368] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--lgz9g-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bc357230-e098-4af5-9f42-e37066b7df6c", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 48, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-d261562a0f.novalocal", ContainerID:"32ad179100631432efc01fea170fe31545a819a039d33d25f1d22ee85cc3c5eb", Pod:"coredns-7db6d8ff4d-lgz9g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali72de0bbbd6a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:48:56.031316 containerd[1588]: 2025-05-13 04:48:55.995 [INFO][5368] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" May 13 04:48:56.031316 containerd[1588]: 2025-05-13 04:48:55.995 [INFO][5368] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" iface="eth0" netns="" May 13 04:48:56.031316 containerd[1588]: 2025-05-13 04:48:55.995 [INFO][5368] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" May 13 04:48:56.031316 containerd[1588]: 2025-05-13 04:48:55.995 [INFO][5368] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" May 13 04:48:56.031316 containerd[1588]: 2025-05-13 04:48:56.018 [INFO][5375] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" HandleID="k8s-pod-network.cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--lgz9g-eth0" May 13 04:48:56.031316 containerd[1588]: 2025-05-13 04:48:56.018 [INFO][5375] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:48:56.031316 containerd[1588]: 2025-05-13 04:48:56.018 [INFO][5375] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 04:48:56.031316 containerd[1588]: 2025-05-13 04:48:56.026 [WARNING][5375] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" HandleID="k8s-pod-network.cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--lgz9g-eth0" May 13 04:48:56.031316 containerd[1588]: 2025-05-13 04:48:56.026 [INFO][5375] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" HandleID="k8s-pod-network.cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--lgz9g-eth0" May 13 04:48:56.031316 containerd[1588]: 2025-05-13 04:48:56.029 [INFO][5375] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:48:56.031316 containerd[1588]: 2025-05-13 04:48:56.030 [INFO][5368] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" May 13 04:48:56.032635 containerd[1588]: time="2025-05-13T04:48:56.031734208Z" level=info msg="TearDown network for sandbox \"cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817\" successfully" May 13 04:48:56.032635 containerd[1588]: time="2025-05-13T04:48:56.031891621Z" level=info msg="StopPodSandbox for \"cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817\" returns successfully" May 13 04:48:56.033589 containerd[1588]: time="2025-05-13T04:48:56.033156191Z" level=info msg="RemovePodSandbox for \"cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817\"" May 13 04:48:56.033589 containerd[1588]: time="2025-05-13T04:48:56.033220741Z" level=info msg="Forcibly stopping sandbox \"cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817\"" May 13 04:48:56.119350 containerd[1588]: 2025-05-13 04:48:56.076 [WARNING][5393] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--lgz9g-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bc357230-e098-4af5-9f42-e37066b7df6c", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 48, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-d261562a0f.novalocal", ContainerID:"32ad179100631432efc01fea170fe31545a819a039d33d25f1d22ee85cc3c5eb", Pod:"coredns-7db6d8ff4d-lgz9g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali72de0bbbd6a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:48:56.119350 containerd[1588]: 2025-05-13 04:48:56.076 [INFO][5393] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" May 13 04:48:56.119350 containerd[1588]: 2025-05-13 04:48:56.076 [INFO][5393] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" iface="eth0" netns="" May 13 04:48:56.119350 containerd[1588]: 2025-05-13 04:48:56.076 [INFO][5393] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" May 13 04:48:56.119350 containerd[1588]: 2025-05-13 04:48:56.076 [INFO][5393] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" May 13 04:48:56.119350 containerd[1588]: 2025-05-13 04:48:56.104 [INFO][5400] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" HandleID="k8s-pod-network.cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--lgz9g-eth0" May 13 04:48:56.119350 containerd[1588]: 2025-05-13 04:48:56.105 [INFO][5400] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:48:56.119350 containerd[1588]: 2025-05-13 04:48:56.105 [INFO][5400] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 04:48:56.119350 containerd[1588]: 2025-05-13 04:48:56.113 [WARNING][5400] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" HandleID="k8s-pod-network.cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--lgz9g-eth0" May 13 04:48:56.119350 containerd[1588]: 2025-05-13 04:48:56.113 [INFO][5400] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" HandleID="k8s-pod-network.cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" Workload="ci--4081--3--3--n--d261562a0f.novalocal-k8s-coredns--7db6d8ff4d--lgz9g-eth0" May 13 04:48:56.119350 containerd[1588]: 2025-05-13 04:48:56.116 [INFO][5400] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:48:56.119350 containerd[1588]: 2025-05-13 04:48:56.117 [INFO][5393] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817" May 13 04:48:56.121063 containerd[1588]: time="2025-05-13T04:48:56.120017859Z" level=info msg="TearDown network for sandbox \"cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817\" successfully" May 13 04:48:56.124266 containerd[1588]: time="2025-05-13T04:48:56.124142604Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 04:48:56.124266 containerd[1588]: time="2025-05-13T04:48:56.124220519Z" level=info msg="RemovePodSandbox \"cb98ee0414e014bd7334cd8e8b2012ddfbacb0980f726dae141430776315c817\" returns successfully" May 13 04:48:56.638846 kubelet[2854]: I0513 04:48:56.636462 2854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6d774d8cdb-sghzl" podStartSLOduration=33.850040661 podStartE2EDuration="42.63639791s" podCreationTimestamp="2025-05-13 04:48:14 +0000 UTC" firstStartedPulling="2025-05-13 04:48:46.631131506 +0000 UTC m=+52.798391391" lastFinishedPulling="2025-05-13 04:48:55.417488755 +0000 UTC m=+61.584748640" observedRunningTime="2025-05-13 04:48:56.633933562 +0000 UTC m=+62.801193507" watchObservedRunningTime="2025-05-13 04:48:56.63639791 +0000 UTC m=+62.803657845" May 13 04:48:57.995019 containerd[1588]: time="2025-05-13T04:48:57.993713635Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:48:57.995899 containerd[1588]: time="2025-05-13T04:48:57.995810929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 13 04:48:57.996171 containerd[1588]: time="2025-05-13T04:48:57.996145733Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:48:57.999538 containerd[1588]: time="2025-05-13T04:48:57.999464335Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:48:58.000493 
containerd[1588]: time="2025-05-13T04:48:58.000456216Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.579099906s" May 13 04:48:58.000566 containerd[1588]: time="2025-05-13T04:48:58.000521367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 13 04:48:58.006380 containerd[1588]: time="2025-05-13T04:48:58.006347679Z" level=info msg="CreateContainer within sandbox \"dc7db295fc44e7c2a3a2906a96b6a02a31df799797efe9eb8f9d44e5afe08d55\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 13 04:48:58.036602 containerd[1588]: time="2025-05-13T04:48:58.036550710Z" level=info msg="CreateContainer within sandbox \"dc7db295fc44e7c2a3a2906a96b6a02a31df799797efe9eb8f9d44e5afe08d55\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3322a88fd54d55b5d285a199181c103b8bdc6c9a945f5aa3fbfc4a3de0dcb47d\"" May 13 04:48:58.039048 containerd[1588]: time="2025-05-13T04:48:58.039025438Z" level=info msg="StartContainer for \"3322a88fd54d55b5d285a199181c103b8bdc6c9a945f5aa3fbfc4a3de0dcb47d\"" May 13 04:48:58.133283 containerd[1588]: time="2025-05-13T04:48:58.133209169Z" level=info msg="StartContainer for \"3322a88fd54d55b5d285a199181c103b8bdc6c9a945f5aa3fbfc4a3de0dcb47d\" returns successfully" May 13 04:48:58.704065 kubelet[2854]: I0513 04:48:58.703583 2854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-glr49" podStartSLOduration=31.842672442 podStartE2EDuration="44.702958754s" podCreationTimestamp="2025-05-13 04:48:14 +0000 UTC" firstStartedPulling="2025-05-13 04:48:45.1421401 +0000 UTC m=+51.309399995" lastFinishedPulling="2025-05-13 04:48:58.002426412 +0000 UTC m=+64.169686307" observedRunningTime="2025-05-13 04:48:58.698367797 +0000 UTC m=+64.865627732" watchObservedRunningTime="2025-05-13 04:48:58.702958754 +0000 UTC m=+64.870218699" May 13 04:48:59.087047 kubelet[2854]: I0513 04:48:59.086660 2854 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 13 04:48:59.087047 kubelet[2854]: I0513 04:48:59.086798 2854 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 13 04:49:07.281685 kubelet[2854]: I0513 04:49:07.279096 2854 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
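Editor's note: the kubelet pod_startup_latency_tracker records tie the pull timings back to startup latency: podStartSLOduration is the end-to-end startup time minus the image-pull window, which is why csi-node-driver-glr49 reports 31.842672442 against a 44.702958754s E2E. Using the monotonic offsets (m=+…) from that record, the arithmetic checks out exactly:

```go
package main

import "fmt"

func main() {
	// Monotonic clock offsets (m=+...) from the csi-node-driver-glr49 record.
	const firstStartedPulling = 51.309399995 // m=+ at firstStartedPulling
	const lastFinishedPulling = 64.169686307 // m=+ at lastFinishedPulling
	const podStartE2E = 44.702958754         // "podStartE2EDuration"

	pullWindow := lastFinishedPulling - firstStartedPulling
	slo := podStartE2E - pullWindow // SLO duration excludes time spent pulling images

	fmt.Printf("pull window: %.9fs\n", pullWindow) // 12.860286312s
	fmt.Printf("podStartSLOduration: %.9f\n", slo) // 31.842672442, matching the log
}
```

The same identity holds for the calico-kube-controllers-6d774d8cdb-sghzl record above: 42.63639791 − (61.584748640 − 52.798391391) = 33.850040661, exactly the logged podStartSLOduration.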