Apr 30 03:35:10.010886 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025 Apr 30 03:35:10.010908 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:35:10.010916 kernel: BIOS-provided physical RAM map: Apr 30 03:35:10.010922 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Apr 30 03:35:10.010927 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Apr 30 03:35:10.010933 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Apr 30 03:35:10.010939 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable Apr 30 03:35:10.010945 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved Apr 30 03:35:10.010952 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 30 03:35:10.010958 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Apr 30 03:35:10.010964 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Apr 30 03:35:10.010969 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Apr 30 03:35:10.010975 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Apr 30 03:35:10.010981 kernel: NX (Execute Disable) protection: active Apr 30 03:35:10.010989 kernel: APIC: Static calls initialized Apr 30 03:35:10.010995 kernel: SMBIOS 3.0.0 present. 
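[Editor's note: the BIOS-e820 map above is how the firmware tells the kernel which physical address ranges are RAM ("usable") and which are reserved. A minimal sketch of totalling the usable regions from a captured log, using two sample lines taken verbatim from above (illustrative only, not a tool Flatcar ships):

    import re

    E820 = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

    sample = """
    BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
    BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
    """

    usable = sum(
        int(end, 16) - int(start, 16) + 1
        for start, end, kind in E820.findall(sample)
        if kind == "usable"
    )
    print(f"usable RAM: {usable / 2**20:.1f} MiB")  # ~1999 MiB, just under the VM's nominal 2 GiB

This roughly matches the "Memory: 1922048K/2047464K available" accounting that appears later in the log.]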
Apr 30 03:35:10.011002 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 Apr 30 03:35:10.011008 kernel: Hypervisor detected: KVM Apr 30 03:35:10.011014 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 30 03:35:10.011020 kernel: kvm-clock: using sched offset of 3187520121 cycles Apr 30 03:35:10.011027 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 30 03:35:10.011033 kernel: tsc: Detected 2495.312 MHz processor Apr 30 03:35:10.011040 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 30 03:35:10.011048 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 30 03:35:10.011054 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000 Apr 30 03:35:10.011061 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Apr 30 03:35:10.011067 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 30 03:35:10.011074 kernel: Using GB pages for direct mapping Apr 30 03:35:10.011080 kernel: ACPI: Early table checksum verification disabled Apr 30 03:35:10.011086 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS ) Apr 30 03:35:10.011092 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 03:35:10.011099 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 03:35:10.011106 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 03:35:10.011113 kernel: ACPI: FACS 0x000000007CFE0000 000040 Apr 30 03:35:10.011119 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 03:35:10.011125 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 03:35:10.011132 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 03:35:10.011138 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 03:35:10.011144 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540] Apr 30 03:35:10.011159 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c] Apr 30 03:35:10.011169 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f] Apr 30 03:35:10.011175 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0] Apr 30 03:35:10.011182 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8] Apr 30 03:35:10.011189 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634] Apr 30 03:35:10.011195 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c] Apr 30 03:35:10.011202 kernel: No NUMA configuration found Apr 30 03:35:10.011210 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff] Apr 30 03:35:10.011216 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff] Apr 30 03:35:10.011223 kernel: Zone ranges: Apr 30 03:35:10.011229 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 30 03:35:10.011236 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff] Apr 30 03:35:10.011243 kernel: Normal empty Apr 30 03:35:10.011250 kernel: Movable zone start for each node Apr 30 03:35:10.011256 kernel: Early memory node ranges Apr 30 03:35:10.011263 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Apr 30 03:35:10.011269 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff] Apr 30 03:35:10.011277 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000007cfdbfff] Apr 30 03:35:10.011284 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 30 03:35:10.011291 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Apr 30 03:35:10.011297 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Apr 30 03:35:10.011304 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 30 03:35:10.011310 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 30 03:35:10.011317 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 30 03:35:10.011324 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 30 03:35:10.011330 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 30 03:35:10.011338 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 30 03:35:10.011345 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 30 03:35:10.011351 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 30 03:35:10.011358 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 30 03:35:10.011365 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 30 03:35:10.011371 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 30 03:35:10.011378 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 30 03:35:10.011385 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Apr 30 03:35:10.011391 kernel: Booting paravirtualized kernel on KVM Apr 30 03:35:10.011399 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 30 03:35:10.011406 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 30 03:35:10.011413 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 Apr 30 03:35:10.011419 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Apr 30 03:35:10.011426 kernel: pcpu-alloc: [0] 0 1 Apr 30 03:35:10.011432 kernel: kvm-guest: PV spinlocks disabled, no host support Apr 30 03:35:10.011440 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:35:10.011448 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 30 03:35:10.011455 kernel: random: crng init done Apr 30 03:35:10.011462 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 03:35:10.011469 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Apr 30 03:35:10.011476 kernel: Fallback order for Node 0: 0 Apr 30 03:35:10.011482 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 503708 Apr 30 03:35:10.011489 kernel: Policy zone: DMA32 Apr 30 03:35:10.011495 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 30 03:35:10.011502 kernel: Memory: 1922048K/2047464K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 125156K reserved, 0K cma-reserved) Apr 30 03:35:10.011509 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 30 03:35:10.011517 kernel: ftrace: allocating 37944 entries in 149 pages Apr 30 03:35:10.011524 kernel: ftrace: allocated 149 pages with 4 groups Apr 30 03:35:10.011530 kernel: Dynamic Preempt: voluntary Apr 30 03:35:10.011537 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 30 03:35:10.011544 kernel: rcu: RCU event tracing is enabled. Apr 30 03:35:10.011551 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 30 03:35:10.011557 kernel: Trampoline variant of Tasks RCU enabled. Apr 30 03:35:10.011564 kernel: Rude variant of Tasks RCU enabled. Apr 30 03:35:10.011571 kernel: Tracing variant of Tasks RCU enabled. Apr 30 03:35:10.011577 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 30 03:35:10.011585 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 30 03:35:10.011592 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Apr 30 03:35:10.011599 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 30 03:35:10.011605 kernel: Console: colour VGA+ 80x25 Apr 30 03:35:10.012659 kernel: printk: console [tty0] enabled Apr 30 03:35:10.012666 kernel: printk: console [ttyS0] enabled Apr 30 03:35:10.012673 kernel: ACPI: Core revision 20230628 Apr 30 03:35:10.012680 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 30 03:35:10.012687 kernel: APIC: Switch to symmetric I/O mode setup Apr 30 03:35:10.012696 kernel: x2apic enabled Apr 30 03:35:10.012703 kernel: APIC: Switched APIC routing to: physical x2apic Apr 30 03:35:10.012710 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 30 03:35:10.012717 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Apr 30 03:35:10.012724 kernel: Calibrating delay loop (skipped) preset value.. 
4990.62 BogoMIPS (lpj=2495312) Apr 30 03:35:10.012730 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 30 03:35:10.012737 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Apr 30 03:35:10.012744 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Apr 30 03:35:10.012758 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 30 03:35:10.012765 kernel: Spectre V2 : Mitigation: Retpolines Apr 30 03:35:10.012772 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Apr 30 03:35:10.012779 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Apr 30 03:35:10.012787 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Apr 30 03:35:10.012794 kernel: RETBleed: Mitigation: untrained return thunk Apr 30 03:35:10.012801 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Apr 30 03:35:10.012808 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Apr 30 03:35:10.012815 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 30 03:35:10.012824 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 30 03:35:10.012831 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 30 03:35:10.012838 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 30 03:35:10.012845 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Apr 30 03:35:10.012852 kernel: Freeing SMP alternatives memory: 32K Apr 30 03:35:10.012859 kernel: pid_max: default: 32768 minimum: 301 Apr 30 03:35:10.012866 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 30 03:35:10.012872 kernel: landlock: Up and running. Apr 30 03:35:10.012881 kernel: SELinux: Initializing. Apr 30 03:35:10.012888 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Apr 30 03:35:10.012895 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Apr 30 03:35:10.012902 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0) Apr 30 03:35:10.012909 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 03:35:10.012916 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 03:35:10.012924 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 03:35:10.012931 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Apr 30 03:35:10.012938 kernel: ... version: 0 Apr 30 03:35:10.012946 kernel: ... bit width: 48 Apr 30 03:35:10.012953 kernel: ... generic registers: 6 Apr 30 03:35:10.012961 kernel: ... value mask: 0000ffffffffffff Apr 30 03:35:10.012968 kernel: ... max period: 00007fffffffffff Apr 30 03:35:10.012974 kernel: ... fixed-purpose events: 0 Apr 30 03:35:10.012981 kernel: ... event mask: 000000000000003f Apr 30 03:35:10.012988 kernel: signal: max sigframe size: 1776 Apr 30 03:35:10.012995 kernel: rcu: Hierarchical SRCU implementation. Apr 30 03:35:10.013003 kernel: rcu: Max phase no-delay instances is 400. Apr 30 03:35:10.013011 kernel: smp: Bringing up secondary CPUs ... Apr 30 03:35:10.013018 kernel: smpboot: x86: Booting SMP configuration: Apr 30 03:35:10.013025 kernel: .... 
node #0, CPUs: #1 Apr 30 03:35:10.013032 kernel: smp: Brought up 1 node, 2 CPUs Apr 30 03:35:10.013039 kernel: smpboot: Max logical packages: 1 Apr 30 03:35:10.013046 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS) Apr 30 03:35:10.013053 kernel: devtmpfs: initialized Apr 30 03:35:10.013060 kernel: x86/mm: Memory block size: 128MB Apr 30 03:35:10.013067 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 30 03:35:10.013074 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 30 03:35:10.013083 kernel: pinctrl core: initialized pinctrl subsystem Apr 30 03:35:10.013090 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 30 03:35:10.013097 kernel: audit: initializing netlink subsys (disabled) Apr 30 03:35:10.013104 kernel: audit: type=2000 audit(1745984108.407:1): state=initialized audit_enabled=0 res=1 Apr 30 03:35:10.013111 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 30 03:35:10.013117 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 30 03:35:10.013124 kernel: cpuidle: using governor menu Apr 30 03:35:10.013131 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 30 03:35:10.013138 kernel: dca service started, version 1.12.1 Apr 30 03:35:10.013147 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 30 03:35:10.013163 kernel: PCI: Using configuration type 1 for base access Apr 30 03:35:10.013170 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Apr 30 03:35:10.013177 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 30 03:35:10.013184 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 30 03:35:10.013191 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 30 03:35:10.013198 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 30 03:35:10.013205 kernel: ACPI: Added _OSI(Module Device) Apr 30 03:35:10.013212 kernel: ACPI: Added _OSI(Processor Device) Apr 30 03:35:10.013220 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 30 03:35:10.013227 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 30 03:35:10.013234 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 30 03:35:10.013241 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 30 03:35:10.013248 kernel: ACPI: Interpreter enabled Apr 30 03:35:10.013255 kernel: ACPI: PM: (supports S0 S5) Apr 30 03:35:10.013262 kernel: ACPI: Using IOAPIC for interrupt routing Apr 30 03:35:10.013269 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 30 03:35:10.013276 kernel: PCI: Using E820 reservations for host bridge windows Apr 30 03:35:10.013284 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 30 03:35:10.013292 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 30 03:35:10.013415 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 30 03:35:10.013491 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 30 03:35:10.013560 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 30 03:35:10.013570 kernel: PCI host bridge to bus 0000:00 Apr 30 03:35:10.013676 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 30 03:35:10.013746 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 30 
03:35:10.013809 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 30 03:35:10.013872 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window] Apr 30 03:35:10.013934 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 30 03:35:10.013995 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Apr 30 03:35:10.014058 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 30 03:35:10.014142 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 30 03:35:10.014237 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 Apr 30 03:35:10.014311 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref] Apr 30 03:35:10.014384 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref] Apr 30 03:35:10.014458 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff] Apr 30 03:35:10.014531 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref] Apr 30 03:35:10.014604 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 30 03:35:10.015105 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Apr 30 03:35:10.015191 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff] Apr 30 03:35:10.015271 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Apr 30 03:35:10.015343 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff] Apr 30 03:35:10.015420 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Apr 30 03:35:10.015493 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff] Apr 30 03:35:10.015576 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Apr 30 03:35:10.015665 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff] Apr 30 03:35:10.015748 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Apr 30 03:35:10.015823 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff] Apr 30 03:35:10.015903 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Apr 30 03:35:10.015977 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff] Apr 30 03:35:10.016061 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Apr 30 03:35:10.016136 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff] Apr 30 03:35:10.016228 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Apr 30 03:35:10.016302 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff] Apr 30 03:35:10.016385 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Apr 30 03:35:10.016459 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff] Apr 30 03:35:10.016541 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 30 03:35:10.016961 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 30 03:35:10.017050 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 30 03:35:10.017122 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f] Apr 30 03:35:10.017204 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff] Apr 30 03:35:10.017280 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 30 03:35:10.017350 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Apr 30 03:35:10.017435 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Apr 30 03:35:10.017509 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff] Apr 30 03:35:10.017583 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Apr 30 
03:35:10.017678 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref] Apr 30 03:35:10.017750 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Apr 30 03:35:10.017820 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Apr 30 03:35:10.017890 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Apr 30 03:35:10.017974 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Apr 30 03:35:10.018047 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit] Apr 30 03:35:10.018118 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Apr 30 03:35:10.018199 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Apr 30 03:35:10.018271 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Apr 30 03:35:10.018353 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Apr 30 03:35:10.018432 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff] Apr 30 03:35:10.018508 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref] Apr 30 03:35:10.018581 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Apr 30 03:35:10.018699 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Apr 30 03:35:10.018783 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Apr 30 03:35:10.018888 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Apr 30 03:35:10.018963 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Apr 30 03:35:10.019036 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Apr 30 03:35:10.019106 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Apr 30 03:35:10.019190 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Apr 30 03:35:10.019274 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Apr 30 03:35:10.019348 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff] Apr 30 03:35:10.019422 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref] Apr 30 03:35:10.019492 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Apr 30 03:35:10.019561 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Apr 30 03:35:10.019651 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Apr 30 03:35:10.019760 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Apr 30 03:35:10.019861 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff] Apr 30 03:35:10.019938 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref] Apr 30 03:35:10.020009 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Apr 30 03:35:10.020079 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Apr 30 03:35:10.020150 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Apr 30 03:35:10.020174 kernel: acpiphp: Slot [0] registered Apr 30 03:35:10.020268 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Apr 30 03:35:10.020380 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff] Apr 30 03:35:10.020458 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref] Apr 30 03:35:10.020532 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref] Apr 30 03:35:10.020604 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Apr 30 03:35:10.020697 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Apr 30 03:35:10.020769 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Apr 30 03:35:10.020782 
kernel: acpiphp: Slot [0-2] registered Apr 30 03:35:10.020853 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Apr 30 03:35:10.020926 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Apr 30 03:35:10.020998 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Apr 30 03:35:10.021010 kernel: acpiphp: Slot [0-3] registered Apr 30 03:35:10.021103 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Apr 30 03:35:10.021215 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Apr 30 03:35:10.021315 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Apr 30 03:35:10.021331 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 30 03:35:10.021338 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 30 03:35:10.021345 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 30 03:35:10.021352 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 30 03:35:10.021359 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 30 03:35:10.021367 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 30 03:35:10.021374 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 30 03:35:10.021381 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 30 03:35:10.021388 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 30 03:35:10.021396 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 30 03:35:10.021404 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 30 03:35:10.021411 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Apr 30 03:35:10.021418 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 30 03:35:10.021425 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 30 03:35:10.021432 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 30 03:35:10.021439 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 30 03:35:10.021446 kernel: iommu: Default domain type: Translated Apr 30 03:35:10.021453 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 30 03:35:10.021462 kernel: PCI: Using ACPI for IRQ routing Apr 30 03:35:10.021469 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 30 03:35:10.021475 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Apr 30 03:35:10.021483 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff] Apr 30 03:35:10.021557 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 30 03:35:10.024661 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 30 03:35:10.024748 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 30 03:35:10.024758 kernel: vgaarb: loaded Apr 30 03:35:10.024766 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 30 03:35:10.024777 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 30 03:35:10.024784 kernel: clocksource: Switched to clocksource kvm-clock Apr 30 03:35:10.024791 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 03:35:10.024799 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 03:35:10.024806 kernel: pnp: PnP ACPI init Apr 30 03:35:10.024906 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 30 03:35:10.024919 kernel: pnp: PnP ACPI: found 5 devices Apr 30 03:35:10.024926 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 30 03:35:10.024937 kernel: NET: Registered PF_INET protocol family Apr 30 
03:35:10.024944 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 30 03:35:10.024951 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Apr 30 03:35:10.024959 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 03:35:10.024966 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 30 03:35:10.024974 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Apr 30 03:35:10.024981 kernel: TCP: Hash tables configured (established 16384 bind 16384) Apr 30 03:35:10.024989 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Apr 30 03:35:10.024996 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Apr 30 03:35:10.025006 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 03:35:10.025013 kernel: NET: Registered PF_XDP protocol family Apr 30 03:35:10.025088 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Apr 30 03:35:10.025176 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Apr 30 03:35:10.025252 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Apr 30 03:35:10.025323 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff] Apr 30 03:35:10.025394 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff] Apr 30 03:35:10.025468 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff] Apr 30 03:35:10.025538 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Apr 30 03:35:10.026227 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Apr 30 03:35:10.026325 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Apr 30 03:35:10.026398 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Apr 30 03:35:10.026470 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Apr 30 03:35:10.026541 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Apr 30 03:35:10.027644 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Apr 30 03:35:10.027738 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Apr 30 03:35:10.027812 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Apr 30 03:35:10.027887 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Apr 30 03:35:10.027984 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Apr 30 03:35:10.028069 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Apr 30 03:35:10.028142 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Apr 30 03:35:10.028229 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Apr 30 03:35:10.028312 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Apr 30 03:35:10.028387 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Apr 30 03:35:10.028458 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Apr 30 03:35:10.028529 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Apr 30 03:35:10.028599 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Apr 30 03:35:10.029729 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Apr 30 03:35:10.029802 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Apr 30 03:35:10.029875 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Apr 30 03:35:10.029948 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Apr 30 03:35:10.030021 
kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Apr 30 03:35:10.030097 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Apr 30 03:35:10.030176 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Apr 30 03:35:10.030247 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Apr 30 03:35:10.030320 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Apr 30 03:35:10.030391 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Apr 30 03:35:10.030463 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Apr 30 03:35:10.030533 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 30 03:35:10.030597 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 30 03:35:10.031568 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 30 03:35:10.031648 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window] Apr 30 03:35:10.031715 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 30 03:35:10.031778 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Apr 30 03:35:10.031853 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff] Apr 30 03:35:10.031920 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref] Apr 30 03:35:10.031993 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff] Apr 30 03:35:10.032736 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Apr 30 03:35:10.032821 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff] Apr 30 03:35:10.032892 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Apr 30 03:35:10.032964 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff] Apr 30 03:35:10.033033 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Apr 30 03:35:10.033130 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff] Apr 30 03:35:10.033212 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Apr 30 03:35:10.033285 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff] Apr 30 03:35:10.033357 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Apr 30 03:35:10.033429 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Apr 30 03:35:10.033496 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff] Apr 30 03:35:10.033561 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Apr 30 03:35:10.033651 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Apr 30 03:35:10.033720 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff] Apr 30 03:35:10.033791 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Apr 30 03:35:10.033864 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Apr 30 03:35:10.033931 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff] Apr 30 03:35:10.033997 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Apr 30 03:35:10.034009 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 30 03:35:10.034017 kernel: PCI: CLS 0 bytes, default 64 Apr 30 03:35:10.034025 kernel: Initialise system trusted keyrings Apr 30 03:35:10.034033 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Apr 30 03:35:10.034044 kernel: Key type asymmetric registered Apr 30 03:35:10.034052 kernel: Asymmetric key parser 'x509' registered Apr 30 03:35:10.034059 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded 
(major 251) Apr 30 03:35:10.034067 kernel: io scheduler mq-deadline registered Apr 30 03:35:10.034075 kernel: io scheduler kyber registered Apr 30 03:35:10.034083 kernel: io scheduler bfq registered Apr 30 03:35:10.034170 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Apr 30 03:35:10.034247 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Apr 30 03:35:10.034322 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Apr 30 03:35:10.034398 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Apr 30 03:35:10.034472 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Apr 30 03:35:10.034545 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Apr 30 03:35:10.037655 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Apr 30 03:35:10.037742 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Apr 30 03:35:10.037816 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Apr 30 03:35:10.037887 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Apr 30 03:35:10.037959 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Apr 30 03:35:10.038033 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Apr 30 03:35:10.038105 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Apr 30 03:35:10.038187 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Apr 30 03:35:10.038260 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Apr 30 03:35:10.038331 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Apr 30 03:35:10.038342 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 30 03:35:10.038479 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Apr 30 03:35:10.038552 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Apr 30 03:35:10.038562 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 30 03:35:10.038573 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Apr 30 03:35:10.038582 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 03:35:10.038589 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 30 03:35:10.038597 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 30 03:35:10.038605 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 30 03:35:10.038626 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 30 03:35:10.038703 kernel: rtc_cmos 00:03: RTC can wake from S4 Apr 30 03:35:10.038715 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 30 03:35:10.038782 kernel: rtc_cmos 00:03: registered as rtc0 Apr 30 03:35:10.038846 kernel: rtc_cmos 00:03: setting system clock to 2025-04-30T03:35:09 UTC (1745984109) Apr 30 03:35:10.038911 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 30 03:35:10.038921 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Apr 30 03:35:10.038929 kernel: NET: Registered PF_INET6 protocol family Apr 30 03:35:10.038937 kernel: Segment Routing with IPv6 Apr 30 03:35:10.038945 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 03:35:10.038952 kernel: NET: Registered PF_PACKET protocol family Apr 30 03:35:10.038963 kernel: Key type dns_resolver registered Apr 30 03:35:10.038970 kernel: IPI shorthand broadcast: enabled Apr 30 03:35:10.038978 kernel: sched_clock: Marking stable (1192011056, 142978579)->(1391929916, -56940281) Apr 30 03:35:10.038985 kernel: registered taskstats version 1 Apr 30 03:35:10.038993 kernel: Loading compiled-in X.509 certificates Apr 30 03:35:10.039001 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module 
signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b' Apr 30 03:35:10.039008 kernel: Key type .fscrypt registered Apr 30 03:35:10.039016 kernel: Key type fscrypt-provisioning registered Apr 30 03:35:10.039024 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 30 03:35:10.039033 kernel: ima: Allocated hash algorithm: sha1 Apr 30 03:35:10.039041 kernel: ima: No architecture policies found Apr 30 03:35:10.039049 kernel: clk: Disabling unused clocks Apr 30 03:35:10.039059 kernel: Freeing unused kernel image (initmem) memory: 42864K Apr 30 03:35:10.039070 kernel: Write protecting the kernel read-only data: 36864k Apr 30 03:35:10.039080 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K Apr 30 03:35:10.039091 kernel: Run /init as init process Apr 30 03:35:10.039100 kernel: with arguments: Apr 30 03:35:10.039110 kernel: /init Apr 30 03:35:10.039122 kernel: with environment: Apr 30 03:35:10.039131 kernel: HOME=/ Apr 30 03:35:10.039141 kernel: TERM=linux Apr 30 03:35:10.039160 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 03:35:10.039174 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 03:35:10.039188 systemd[1]: Detected virtualization kvm. Apr 30 03:35:10.039197 systemd[1]: Detected architecture x86-64. Apr 30 03:35:10.039207 systemd[1]: Running in initrd. Apr 30 03:35:10.039216 systemd[1]: No hostname configured, using default hostname. Apr 30 03:35:10.039223 systemd[1]: Hostname set to . Apr 30 03:35:10.039232 systemd[1]: Initializing machine ID from VM UUID. Apr 30 03:35:10.039240 systemd[1]: Queued start job for default target initrd.target. Apr 30 03:35:10.039248 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:35:10.039256 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:35:10.039264 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 30 03:35:10.039273 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 03:35:10.039282 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 03:35:10.039290 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 03:35:10.039299 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 03:35:10.039308 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 03:35:10.039316 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:35:10.039324 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:35:10.039333 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:35:10.039343 systemd[1]: Reached target slices.target - Slice Units. Apr 30 03:35:10.039351 systemd[1]: Reached target swap.target - Swaps. Apr 30 03:35:10.039359 systemd[1]: Reached target timers.target - Timer Units. Apr 30 03:35:10.039367 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
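[Editor's note: the device units above, such as dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, use systemd's unit-name escaping: "/" in a path becomes "-", and characters that would then be ambiguous (like a literal "-") become \xXX hex escapes. A simplified re-implementation to show the idea; the authoritative rules live in systemd's unit-name.c and `systemd-escape --path`, and this sketch skips corner cases such as a leading dot:

    def systemd_escape_path(path: str) -> str:
        trimmed = path.strip("/")
        if not trimmed:
            return "-"                           # "/" itself becomes a single dash
        out = []
        for ch in trimmed:
            if ch == "/":
                out.append("-")                  # path separators turn into dashes
            elif ch.isalnum() or ch in "_.":
                out.append(ch)                   # safe characters pass through
            else:
                out.append("\\x%02x" % ord(ch))  # everything else, incl. "-", is hex-escaped
        return "".join(out)

    print(systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
    # -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, as in the log above]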
Apr 30 03:35:10.039375 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 03:35:10.039383 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 03:35:10.039391 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 30 03:35:10.039399 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:35:10.039408 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 03:35:10.039417 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:35:10.039425 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:35:10.039433 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 03:35:10.039441 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 03:35:10.039449 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 03:35:10.039457 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 03:35:10.039465 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 03:35:10.039473 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 03:35:10.039502 systemd-journald[187]: Collecting audit messages is disabled. Apr 30 03:35:10.039524 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:35:10.039532 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 03:35:10.039540 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:35:10.039550 systemd-journald[187]: Journal started Apr 30 03:35:10.039569 systemd-journald[187]: Runtime Journal (/run/log/journal/8bd140bfff2e4cbcb2b4566fba9429a9) is 4.8M, max 38.4M, 33.6M free. Apr 30 03:35:10.040640 systemd-modules-load[188]: Inserted module 'overlay' Apr 30 03:35:10.077109 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 03:35:10.077135 kernel: Bridge firewalling registered Apr 30 03:35:10.077144 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 03:35:10.068412 systemd-modules-load[188]: Inserted module 'br_netfilter' Apr 30 03:35:10.078405 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 03:35:10.078959 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 03:35:10.079878 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:35:10.085725 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:35:10.087664 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:35:10.089714 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 03:35:10.091749 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 03:35:10.103839 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 03:35:10.106473 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:35:10.107882 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:35:10.114755 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
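[Editor's note: systemd-modules-load inserts 'overlay' and 'br_netfilter' above, and the kernel's bridge warning explains why initrds load br_netfilter explicitly rather than relying on it appearing on demand. A small Linux-only sketch for checking whether a module is live at runtime; note that built-in (non-modular) code never appears in /proc/modules:

    def module_loaded(name: str) -> bool:
        # /proc/modules lists loaded loadable modules, one per line, name first
        with open("/proc/modules") as f:
            return any(line.split()[0] == name for line in f)

    for mod in ("overlay", "br_netfilter"):
        print(mod, module_loaded(mod))]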
Apr 30 03:35:10.117304 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 03:35:10.121021 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:35:10.126983 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 03:35:10.129579 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:35:10.137108 dracut-cmdline[223]: dracut-dracut-053 Apr 30 03:35:10.139588 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:35:10.143816 systemd-resolved[218]: Positive Trust Anchors: Apr 30 03:35:10.143830 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:35:10.143860 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:35:10.146687 systemd-resolved[218]: Defaulting to hostname 'linux'. Apr 30 03:35:10.148215 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:35:10.154034 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:35:10.200668 kernel: SCSI subsystem initialized Apr 30 03:35:10.209656 kernel: Loading iSCSI transport class v2.0-870. Apr 30 03:35:10.219680 kernel: iscsi: registered transport (tcp) Apr 30 03:35:10.237849 kernel: iscsi: registered transport (qla4xxx) Apr 30 03:35:10.237898 kernel: QLogic iSCSI HBA Driver Apr 30 03:35:10.284258 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 03:35:10.290825 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 03:35:10.313954 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 30 03:35:10.314017 kernel: device-mapper: uevent: version 1.0.3 Apr 30 03:35:10.314039 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 03:35:10.357665 kernel: raid6: avx2x4 gen() 30985 MB/s Apr 30 03:35:10.374658 kernel: raid6: avx2x2 gen() 31986 MB/s Apr 30 03:35:10.391799 kernel: raid6: avx2x1 gen() 26412 MB/s Apr 30 03:35:10.391843 kernel: raid6: using algorithm avx2x2 gen() 31986 MB/s Apr 30 03:35:10.409868 kernel: raid6: .... xor() 20518 MB/s, rmw enabled Apr 30 03:35:10.409915 kernel: raid6: using avx2x2 recovery algorithm Apr 30 03:35:10.429672 kernel: xor: automatically using best checksumming function avx Apr 30 03:35:10.574644 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 03:35:10.587734 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
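[Editor's note: the raid6 lines above show the kernel benchmarking its parity generators at boot (avx2x2 wins here at 31986 MB/s) and picking the fastest, plus a recovery routine. RAID6 keeps two syndromes, P (plain XOR) and Q (a Galois-field sum); only the XOR half is easy to demonstrate, so this toy assumes three equal-sized blocks and shows P-based rebuild only:

    import os

    data = [os.urandom(16) for _ in range(3)]       # three equal-sized data blocks
    p = bytes(a ^ b ^ c for a, b, c in zip(*data))  # P parity: bytewise XOR of all blocks

    # "lose" the middle block and rebuild it from the survivors plus P
    rebuilt = bytes(a ^ c ^ x for a, c, x in zip(data[0], data[2], p))
    assert rebuilt == data[1]

The Q syndrome needs GF(2^8) multiplies, which is exactly the work the AVX2 kernels being benchmarked accelerate.]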
Apr 30 03:35:10.595884 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:35:10.606067 systemd-udevd[406]: Using default interface naming scheme 'v255'. Apr 30 03:35:10.609845 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:35:10.618880 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 03:35:10.635220 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Apr 30 03:35:10.668965 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 03:35:10.674771 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 03:35:10.739184 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:35:10.746873 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 03:35:10.758797 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 03:35:10.760059 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 03:35:10.761128 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:35:10.761556 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 03:35:10.768759 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 03:35:10.785634 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 03:35:10.832650 kernel: scsi host0: Virtio SCSI HBA Apr 30 03:35:10.841702 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 30 03:35:10.847653 kernel: cryptd: max_cpu_qlen set to 1000 Apr 30 03:35:10.861242 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 03:35:10.861366 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:35:10.883065 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:35:10.885674 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:35:10.885830 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:35:10.888720 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:35:10.892349 kernel: libata version 3.00 loaded. Apr 30 03:35:10.899119 kernel: ACPI: bus type USB registered Apr 30 03:35:10.899176 kernel: usbcore: registered new interface driver usbfs Apr 30 03:35:10.899187 kernel: usbcore: registered new interface driver hub Apr 30 03:35:10.899196 kernel: usbcore: registered new device driver usb Apr 30 03:35:10.898860 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:35:10.922652 kernel: ahci 0000:00:1f.2: version 3.0 Apr 30 03:35:10.964204 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 30 03:35:10.964221 kernel: AVX2 version of gcm_enc/dec engaged. 
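[Editor's note: "AVX2 version of gcm_enc/dec engaged" above, and "AES CTR mode by8 optimization enabled" just below, mean the kernel selected AES-NI/AVX2 code paths for its crypto. Whether a KVM guest sees those instructions depends on what the hypervisor exposes; a Linux-only userspace check via /proc/cpuinfo:

    def cpu_flags() -> set:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    # the first CPU's flag list suffices on a homogeneous system
                    return set(line.split(":", 1)[1].split())
        return set()

    flags = cpu_flags()
    print({name: name in flags for name in ("aes", "avx2", "pclmulqdq")})

pclmulqdq is the carry-less multiply used for GCM's GHASH step.]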
Apr 30 03:35:10.964231 kernel: AES CTR mode by8 optimization enabled Apr 30 03:35:10.964240 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 30 03:35:10.964352 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Apr 30 03:35:10.964452 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 30 03:35:10.964543 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 30 03:35:10.964681 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 30 03:35:10.964769 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 30 03:35:10.964856 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Apr 30 03:35:10.964942 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Apr 30 03:35:10.965030 kernel: hub 1-0:1.0: USB hub found Apr 30 03:35:10.965147 kernel: hub 1-0:1.0: 4 ports detected Apr 30 03:35:10.965254 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Apr 30 03:35:10.965390 kernel: hub 2-0:1.0: USB hub found Apr 30 03:35:10.965495 kernel: hub 2-0:1.0: 4 ports detected Apr 30 03:35:10.965587 kernel: scsi host1: ahci Apr 30 03:35:10.965711 kernel: scsi host2: ahci Apr 30 03:35:10.965799 kernel: scsi host3: ahci Apr 30 03:35:10.965890 kernel: scsi host4: ahci Apr 30 03:35:10.965972 kernel: scsi host5: ahci Apr 30 03:35:10.966055 kernel: scsi host6: ahci Apr 30 03:35:10.966136 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48 Apr 30 03:35:10.966146 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48 Apr 30 03:35:10.966155 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48 Apr 30 03:35:10.966176 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48 Apr 30 03:35:10.966185 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48 Apr 30 03:35:10.966194 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48 Apr 30 03:35:10.969628 kernel: sd 0:0:0:0: Power-on or device reset occurred Apr 30 03:35:10.974481 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Apr 30 03:35:10.974583 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 30 03:35:10.974684 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Apr 30 03:35:10.974772 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 30 03:35:10.974864 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 30 03:35:10.974874 kernel: GPT:17805311 != 80003071 Apr 30 03:35:10.974883 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 30 03:35:10.974892 kernel: GPT:17805311 != 80003071 Apr 30 03:35:10.974900 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 30 03:35:10.974909 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:35:10.974919 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 30 03:35:11.024771 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:35:11.036860 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:35:11.050530 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
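[Editor's note: the GPT complaints above ("Alternate GPT header not at the end of the disk", 17805311 != 80003071) are the classic signature of a disk image built for a smaller disk and then attached to a larger block device; Flatcar's first-boot machinery rewrites the headers shortly after (the disk-uuid.service lines further down). The arithmetic, as a quick check:

    SECTOR = 512
    disk_sectors = 80003072   # "sd 0:0:0:0: [sda] 80003072 512-byte logical blocks"
    backup_lba = 17805311     # where the image's backup GPT header actually sits

    print(round(disk_sectors * SECTOR / 1e9, 2))        # 40.96 -> the "41.0 GB" above
    print(round(disk_sectors * SECTOR / 2**30, 2))      # 38.15 -> the "38.1 GiB"
    print(round((backup_lba + 1) * SECTOR / 2**30, 2))  # 8.49  -> size the image was built for

The expected backup-header location is the last LBA, disk_sectors - 1 = 80003071, hence the mismatch.]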
Apr 30 03:35:11.192820 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 30 03:35:11.282099 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 30 03:35:11.282228 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 30 03:35:11.282271 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 30 03:35:11.283681 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 30 03:35:11.288655 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 30 03:35:11.288698 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 30 03:35:11.292386 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 30 03:35:11.294833 kernel: ata1.00: applying bridge limits Apr 30 03:35:11.297038 kernel: ata1.00: configured for UDMA/100 Apr 30 03:35:11.311660 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 30 03:35:11.361644 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 30 03:35:11.375652 kernel: usbcore: registered new interface driver usbhid Apr 30 03:35:11.375724 kernel: usbhid: USB HID core driver Apr 30 03:35:11.386283 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 30 03:35:11.417121 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 30 03:35:11.417144 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Apr 30 03:35:11.417160 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (476) Apr 30 03:35:11.417198 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (458) Apr 30 03:35:11.417214 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 30 03:35:11.417408 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Apr 30 03:35:11.412264 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 30 03:35:11.424747 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 30 03:35:11.437858 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 30 03:35:11.439708 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 30 03:35:11.448732 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 30 03:35:11.459734 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 03:35:11.467107 disk-uuid[578]: Primary Header is updated. Apr 30 03:35:11.467107 disk-uuid[578]: Secondary Entries is updated. Apr 30 03:35:11.467107 disk-uuid[578]: Secondary Header is updated. Apr 30 03:35:11.476647 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:35:11.487849 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:35:12.498708 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:35:12.498786 disk-uuid[580]: The operation has completed successfully. Apr 30 03:35:12.564467 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 03:35:12.564656 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 03:35:12.604787 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
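[Editor's note: verity-setup.service is wiring /dev/mapper/usr to the verity.usrhash=c687c1f8... root hash from the kernel command line: dm-verity hashes the read-only /usr partition into a Merkle tree and refuses to return any block that fails verification. Real dm-verity uses a salted, multi-level tree with an on-disk superblock; this is a toy single-level version showing only the shape of the idea:

    import hashlib

    def toy_verity_root(data: bytes, block_size: int = 4096) -> str:
        # hash every data block, then hash the concatenation of those digests
        leaves = [
            hashlib.sha256(data[i:i + block_size]).digest()
            for i in range(0, len(data), block_size)
        ]
        return hashlib.sha256(b"".join(leaves)).hexdigest()

    print(toy_verity_root(b"\0" * 16384))  # flip any input bit and this value changes

The "device-mapper: verity: sha256 using implementation 'sha256-ni'" line just below confirms the verification runs on the CPU's SHA extensions.]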
Apr 30 03:35:12.609271 sh[600]: Success
Apr 30 03:35:12.630787 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 30 03:35:12.701154 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 03:35:12.721833 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 03:35:12.727762 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 03:35:12.744358 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26
Apr 30 03:35:12.744443 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:35:12.747873 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 03:35:12.752949 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 03:35:12.752997 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 03:35:12.764702 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 30 03:35:12.767016 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 03:35:12.768685 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 03:35:12.773911 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 03:35:12.778837 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 03:35:12.802124 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:35:12.802188 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:35:12.802214 kernel: BTRFS info (device sda6): using free space tree
Apr 30 03:35:12.809587 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 30 03:35:12.809637 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 03:35:12.824638 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:35:12.824731 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 03:35:12.832326 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 03:35:12.839837 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 03:35:12.910382 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:35:12.919370 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 03:35:12.939381 ignition[698]: Ignition 2.19.0
Apr 30 03:35:12.939390 ignition[698]: Stage: fetch-offline
Apr 30 03:35:12.939421 ignition[698]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:35:12.939428 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 03:35:12.939653 ignition[698]: parsed url from cmdline: ""
Apr 30 03:35:12.939656 ignition[698]: no config URL provided
Apr 30 03:35:12.939661 ignition[698]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:35:12.939667 ignition[698]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:35:12.939672 ignition[698]: failed to fetch config: resource requires networking
Apr 30 03:35:12.939820 ignition[698]: Ignition finished successfully
Apr 30 03:35:12.943823 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:35:12.947518 systemd-networkd[781]: lo: Link UP
Apr 30 03:35:12.947527 systemd-networkd[781]: lo: Gained carrier
Apr 30 03:35:12.949317 systemd-networkd[781]: Enumeration completed
Apr 30 03:35:12.949509 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 03:35:12.950299 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:35:12.950302 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:35:12.951147 systemd[1]: Reached target network.target - Network.
Apr 30 03:35:12.952595 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:35:12.952598 systemd-networkd[781]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:35:12.953117 systemd-networkd[781]: eth0: Link UP
Apr 30 03:35:12.953119 systemd-networkd[781]: eth0: Gained carrier
Apr 30 03:35:12.953124 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:35:12.955920 systemd-networkd[781]: eth1: Link UP
Apr 30 03:35:12.955923 systemd-networkd[781]: eth1: Gained carrier
Apr 30 03:35:12.955928 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:35:12.955935 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 03:35:12.969782 ignition[788]: Ignition 2.19.0
Apr 30 03:35:12.969794 ignition[788]: Stage: fetch
Apr 30 03:35:12.970012 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:35:12.970021 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 03:35:12.970109 ignition[788]: parsed url from cmdline: ""
Apr 30 03:35:12.970111 ignition[788]: no config URL provided
Apr 30 03:35:12.970116 ignition[788]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:35:12.970123 ignition[788]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:35:12.970141 ignition[788]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 30 03:35:12.970300 ignition[788]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 30 03:35:12.990684 systemd-networkd[781]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 30 03:35:13.021721 systemd-networkd[781]: eth0: DHCPv4 address 135.181.100.111/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 30 03:35:13.170839 ignition[788]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 30 03:35:13.176877 ignition[788]: GET result: OK
Apr 30 03:35:13.177041 ignition[788]: parsing config with SHA512: f234cde8f36ec45232dec6c69f242a966c4e01cb1917a11f0c2cbd4699ca0b86e59e49a9bd2a7109289526842cbcbbbbd3532a8363944e047e659c6d7bf661b6
Apr 30 03:35:13.183517 unknown[788]: fetched base config from "system"
Apr 30 03:35:13.183535 unknown[788]: fetched base config from "system"
Apr 30 03:35:13.184200 ignition[788]: fetch: fetch complete
Apr 30 03:35:13.183545 unknown[788]: fetched user config from "hetzner"
Apr 30 03:35:13.184211 ignition[788]: fetch: fetch passed
Apr 30 03:35:13.187875 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 03:35:13.184275 ignition[788]: Ignition finished successfully
Apr 30 03:35:13.194903 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 03:35:13.225248 ignition[796]: Ignition 2.19.0
Apr 30 03:35:13.226727 ignition[796]: Stage: kargs
Apr 30 03:35:13.227095 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:35:13.227113 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 03:35:13.229034 ignition[796]: kargs: kargs passed
Apr 30 03:35:13.229108 ignition[796]: Ignition finished successfully
Apr 30 03:35:13.231077 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 03:35:13.238831 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 03:35:13.272468 ignition[802]: Ignition 2.19.0
Apr 30 03:35:13.272487 ignition[802]: Stage: disks
Apr 30 03:35:13.272803 ignition[802]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:35:13.272829 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 03:35:13.276085 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 03:35:13.274434 ignition[802]: disks: disks passed
Apr 30 03:35:13.278579 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 03:35:13.274498 ignition[802]: Ignition finished successfully
Apr 30 03:35:13.281499 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 03:35:13.283447 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:35:13.285790 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 03:35:13.287774 systemd[1]: Reached target basic.target - Basic System.
Apr 30 03:35:13.303845 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 03:35:13.327800 systemd-fsck[810]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 30 03:35:13.331247 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 03:35:13.338767 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 03:35:13.465642 kernel: EXT4-fs (sda9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none.
Apr 30 03:35:13.467170 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 03:35:13.468783 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 03:35:13.475748 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:35:13.478234 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 03:35:13.480752 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 30 03:35:13.483073 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 03:35:13.483958 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:35:13.490648 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (818)
Apr 30 03:35:13.494966 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 03:35:13.498897 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:35:13.498920 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:35:13.498929 kernel: BTRFS info (device sda6): using free space tree
Apr 30 03:35:13.508744 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 03:35:13.512399 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 30 03:35:13.512425 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 03:35:13.516330 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:35:13.560404 initrd-setup-root[845]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 03:35:13.565223 initrd-setup-root[852]: cut: /sysroot/etc/group: No such file or directory
Apr 30 03:35:13.569489 initrd-setup-root[859]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 03:35:13.570252 coreos-metadata[820]: Apr 30 03:35:13.569 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 30 03:35:13.571865 coreos-metadata[820]: Apr 30 03:35:13.570 INFO Fetch successful
Apr 30 03:35:13.571865 coreos-metadata[820]: Apr 30 03:35:13.570 INFO wrote hostname ci-4081-3-3-9-5ae3ade3a2 to /sysroot/etc/hostname
Apr 30 03:35:13.573698 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 03:35:13.576896 initrd-setup-root[867]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 03:35:13.670724 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 03:35:13.675774 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 03:35:13.678807 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 03:35:13.687654 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:35:13.713327 ignition[939]: INFO : Ignition 2.19.0
Apr 30 03:35:13.715436 ignition[939]: INFO : Stage: mount
Apr 30 03:35:13.715436 ignition[939]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:35:13.715436 ignition[939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 03:35:13.719750 ignition[939]: INFO : mount: mount passed
Apr 30 03:35:13.719750 ignition[939]: INFO : Ignition finished successfully
Apr 30 03:35:13.719249 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 03:35:13.726828 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 03:35:13.728457 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 03:35:13.739735 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 03:35:13.744891 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:35:13.759630 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (950)
Apr 30 03:35:13.762017 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:35:13.762074 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:35:13.764169 kernel: BTRFS info (device sda6): using free space tree
Apr 30 03:35:13.778936 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 30 03:35:13.778982 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 03:35:13.783090 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:35:13.814058 ignition[967]: INFO : Ignition 2.19.0
Apr 30 03:35:13.814058 ignition[967]: INFO : Stage: files
Apr 30 03:35:13.815827 ignition[967]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:35:13.815827 ignition[967]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 03:35:13.818008 ignition[967]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 03:35:13.818008 ignition[967]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 03:35:13.818008 ignition[967]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 03:35:13.821695 ignition[967]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 03:35:13.823604 ignition[967]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 03:35:13.823604 ignition[967]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 03:35:13.822271 unknown[967]: wrote ssh authorized keys file for user: core
Apr 30 03:35:13.826874 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 03:35:13.826874 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Apr 30 03:35:14.052115 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 30 03:35:14.531958 systemd-networkd[781]: eth0: Gained IPv6LL
Apr 30 03:35:14.915821 systemd-networkd[781]: eth1: Gained IPv6LL
Apr 30 03:35:15.416171 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 03:35:15.416171 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 03:35:15.420535 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 03:35:15.420535 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:35:15.420535 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:35:15.420535 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:35:15.420535 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:35:15.420535 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:35:15.420535 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:35:15.420535 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:35:15.420535 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:35:15.420535 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:35:15.420535 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:35:15.420535 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:35:15.420535 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Apr 30 03:35:15.912733 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 30 03:35:16.167422 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:35:16.167422 ignition[967]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 30 03:35:16.171359 ignition[967]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 03:35:16.171359 ignition[967]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 03:35:16.171359 ignition[967]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 30 03:35:16.171359 ignition[967]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 30 03:35:16.171359 ignition[967]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 30 03:35:16.171359 ignition[967]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 30 03:35:16.171359 ignition[967]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 30 03:35:16.171359 ignition[967]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 03:35:16.171359 ignition[967]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 03:35:16.171359 ignition[967]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:35:16.171359 ignition[967]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:35:16.171359 ignition[967]: INFO : files: files passed
Apr 30 03:35:16.171359 ignition[967]: INFO : Ignition finished successfully
Apr 30 03:35:16.169948 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 03:35:16.182919 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 03:35:16.185732 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 03:35:16.191497 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 03:35:16.200368 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:35:16.200368 initrd-setup-root-after-ignition[995]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:35:16.191571 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 03:35:16.202412 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:35:16.203230 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:35:16.205656 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 03:35:16.211802 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 03:35:16.236520 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 03:35:16.237193 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 03:35:16.237844 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 03:35:16.238285 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 03:35:16.238813 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 03:35:16.245901 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 03:35:16.260377 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:35:16.265833 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 03:35:16.275669 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:35:16.276960 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:35:16.277567 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 03:35:16.278753 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 03:35:16.278855 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:35:16.280162 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 03:35:16.280941 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 03:35:16.282061 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 03:35:16.283101 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:35:16.284116 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 03:35:16.285304 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 03:35:16.286541 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:35:16.287749 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 03:35:16.288921 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 03:35:16.290128 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 03:35:16.291229 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 03:35:16.291349 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:35:16.292565 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:35:16.293335 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:35:16.294361 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 03:35:16.294709 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:35:16.295549 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 03:35:16.295663 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:35:16.297273 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 03:35:16.297379 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:35:16.298082 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 03:35:16.298201 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 03:35:16.299048 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 30 03:35:16.299133 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 03:35:16.305869 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 03:35:16.308797 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 03:35:16.309307 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 03:35:16.309444 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:35:16.311089 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 03:35:16.311221 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:35:16.318502 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 03:35:16.318578 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 03:35:16.324582 ignition[1020]: INFO : Ignition 2.19.0
Apr 30 03:35:16.324582 ignition[1020]: INFO : Stage: umount
Apr 30 03:35:16.327551 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:35:16.327551 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 03:35:16.327551 ignition[1020]: INFO : umount: umount passed
Apr 30 03:35:16.327551 ignition[1020]: INFO : Ignition finished successfully
Apr 30 03:35:16.326998 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 03:35:16.327115 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 03:35:16.328694 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 03:35:16.328775 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 03:35:16.329585 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 03:35:16.329652 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 03:35:16.335555 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 30 03:35:16.335597 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 30 03:35:16.345277 systemd[1]: Stopped target network.target - Network.
Apr 30 03:35:16.346127 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 03:35:16.346199 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:35:16.347370 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 03:35:16.349140 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 03:35:16.349440 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:35:16.350268 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 03:35:16.351328 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 03:35:16.352443 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 03:35:16.352482 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:35:16.353565 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 03:35:16.353594 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:35:16.354938 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 03:35:16.354981 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 03:35:16.365569 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 03:35:16.365632 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 03:35:16.366757 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 03:35:16.368046 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 03:35:16.370249 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 03:35:16.371679 systemd-networkd[781]: eth0: DHCPv6 lease lost
Apr 30 03:35:16.374658 systemd-networkd[781]: eth1: DHCPv6 lease lost
Apr 30 03:35:16.377974 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 03:35:16.378071 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 03:35:16.381559 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 03:35:16.381702 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 03:35:16.383693 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 03:35:16.383807 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 03:35:16.386442 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 03:35:16.386491 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:35:16.387913 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 03:35:16.387964 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 03:35:16.394720 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 03:35:16.395186 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 03:35:16.395242 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:35:16.397661 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 03:35:16.397738 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:35:16.399104 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 03:35:16.399155 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:35:16.400603 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 03:35:16.400680 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:35:16.403027 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:35:16.413411 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 03:35:16.413519 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 03:35:16.415944 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 03:35:16.416058 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:35:16.417749 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 03:35:16.417796 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:35:16.418946 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 03:35:16.418973 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:35:16.420405 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 03:35:16.420440 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:35:16.422628 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 03:35:16.422662 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:35:16.424145 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:35:16.424180 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:35:16.431716 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 03:35:16.433111 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 03:35:16.433202 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:35:16.434077 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 30 03:35:16.434128 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:35:16.435863 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 03:35:16.435902 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:35:16.436654 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:35:16.436686 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:35:16.438489 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 03:35:16.438553 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 03:35:16.440125 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 03:35:16.449887 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 03:35:16.454603 systemd[1]: Switching root.
Apr 30 03:35:16.501227 systemd-journald[187]: Journal stopped
Apr 30 03:35:17.569512 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Apr 30 03:35:17.569562 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 03:35:17.569574 kernel: SELinux: policy capability open_perms=1
Apr 30 03:35:17.569582 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 03:35:17.569591 kernel: SELinux: policy capability always_check_network=0
Apr 30 03:35:17.569601 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 03:35:17.569622 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 03:35:17.569631 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 03:35:17.569640 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 03:35:17.569649 kernel: audit: type=1403 audit(1745984116.671:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 03:35:17.569664 systemd[1]: Successfully loaded SELinux policy in 58.200ms.
Apr 30 03:35:17.569682 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.063ms.
Apr 30 03:35:17.569693 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:35:17.569703 systemd[1]: Detected virtualization kvm.
Apr 30 03:35:17.569715 systemd[1]: Detected architecture x86-64.
Apr 30 03:35:17.569724 systemd[1]: Detected first boot.
Apr 30 03:35:17.569734 systemd[1]: Hostname set to .
Apr 30 03:35:17.569743 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 03:35:17.569753 zram_generator::config[1063]: No configuration found.
Apr 30 03:35:17.569767 systemd[1]: Populated /etc with preset unit settings.
Apr 30 03:35:17.569776 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 30 03:35:17.569785 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 30 03:35:17.569796 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 30 03:35:17.569807 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 03:35:17.569816 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 03:35:17.569826 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 03:35:17.569835 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 03:35:17.569845 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 03:35:17.569855 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 03:35:17.569864 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 03:35:17.569875 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 03:35:17.569890 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:35:17.569904 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:35:17.569917 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 03:35:17.569931 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 03:35:17.569945 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 03:35:17.569959 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:35:17.569970 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 30 03:35:17.569979 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:35:17.569989 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 30 03:35:17.570001 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 30 03:35:17.570010 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 30 03:35:17.570020 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 03:35:17.570030 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:35:17.570042 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:35:17.570052 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:35:17.570064 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:35:17.570073 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 03:35:17.570086 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 03:35:17.570095 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:35:17.570105 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:35:17.570115 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:35:17.570132 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 03:35:17.570146 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 03:35:17.570160 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 03:35:17.570171 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 03:35:17.570183 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:35:17.570196 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 03:35:17.570207 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 03:35:17.570216 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 03:35:17.570236 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 03:35:17.570247 systemd[1]: Reached target machines.target - Containers.
Apr 30 03:35:17.570257 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 03:35:17.570267 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:35:17.570276 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:35:17.570286 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 03:35:17.570296 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:35:17.570306 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 03:35:17.570317 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:35:17.570328 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 03:35:17.570338 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:35:17.570348 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 03:35:17.570357 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 30 03:35:17.570369 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 30 03:35:17.570378 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 30 03:35:17.570388 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 30 03:35:17.570399 kernel: loop: module loaded
Apr 30 03:35:17.570408 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:35:17.570420 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:35:17.570429 kernel: ACPI: bus type drm_connector registered
Apr 30 03:35:17.570438 kernel: fuse: init (API version 7.39)
Apr 30 03:35:17.570448 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 03:35:17.570457 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 03:35:17.570467 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:35:17.570477 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 30 03:35:17.570487 systemd[1]: Stopped verity-setup.service.
Apr 30 03:35:17.570497 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:35:17.570522 systemd-journald[1150]: Collecting audit messages is disabled.
Apr 30 03:35:17.570543 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 03:35:17.570553 systemd-journald[1150]: Journal started
Apr 30 03:35:17.570573 systemd-journald[1150]: Runtime Journal (/run/log/journal/8bd140bfff2e4cbcb2b4566fba9429a9) is 4.8M, max 38.4M, 33.6M free.
Apr 30 03:35:17.249877 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 03:35:17.265954 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 30 03:35:17.266604 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 30 03:35:17.573632 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:35:17.573517 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 03:35:17.574173 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 03:35:17.574796 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 03:35:17.576499 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 03:35:17.577102 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 03:35:17.577793 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 03:35:17.578517 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:35:17.579247 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 03:35:17.579394 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 03:35:17.580258 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:35:17.580401 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:35:17.581083 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 03:35:17.581238 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 03:35:17.581967 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:35:17.582119 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:35:17.582850 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 03:35:17.582997 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 03:35:17.583804 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 03:35:17.583950 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 03:35:17.584640 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:35:17.585379 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 03:35:17.586206 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 03:35:17.593963 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 03:35:17.600666 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 03:35:17.605667 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 03:35:17.606674 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 03:35:17.606752 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:35:17.608191 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 30 03:35:17.613021 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 30 03:35:17.616708 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 30 03:35:17.617442 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:35:17.619738 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 03:35:17.621818 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 03:35:17.622690 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 03:35:17.628737 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 03:35:17.630142 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 03:35:17.633010 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:35:17.641846 systemd-journald[1150]: Time spent on flushing to /var/log/journal/8bd140bfff2e4cbcb2b4566fba9429a9 is 57.720ms for 1129 entries.
Apr 30 03:35:17.641846 systemd-journald[1150]: System Journal (/var/log/journal/8bd140bfff2e4cbcb2b4566fba9429a9) is 8.0M, max 584.8M, 576.8M free.
Apr 30 03:35:17.716920 systemd-journald[1150]: Received client request to flush runtime journal.
Apr 30 03:35:17.716952 kernel: loop0: detected capacity change from 0 to 142488
Apr 30 03:35:17.639751 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 03:35:17.643356 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 03:35:17.645863 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 03:35:17.646778 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 03:35:17.647418 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 03:35:17.651187 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 03:35:17.653088 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 03:35:17.659566 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 30 03:35:17.684057 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:35:17.697806 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 03:35:17.702667 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:35:17.719577 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 03:35:17.727054 udevadm[1193]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 30 03:35:17.728956 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Apr 30 03:35:17.728972 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Apr 30 03:35:17.736680 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 30 03:35:17.739909 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:35:17.745805 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 03:35:17.749897 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 03:35:17.750542 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 30 03:35:17.757634 kernel: loop1: detected capacity change from 0 to 140768
Apr 30 03:35:17.784463 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 03:35:17.792774 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:35:17.799069 kernel: loop2: detected capacity change from 0 to 210664
Apr 30 03:35:17.806893 systemd-tmpfiles[1206]: ACLs are not supported, ignoring.
Apr 30 03:35:17.807193 systemd-tmpfiles[1206]: ACLs are not supported, ignoring.
Apr 30 03:35:17.811881 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:35:17.844644 kernel: loop3: detected capacity change from 0 to 8
Apr 30 03:35:17.863646 kernel: loop4: detected capacity change from 0 to 142488
Apr 30 03:35:17.884649 kernel: loop5: detected capacity change from 0 to 140768
Apr 30 03:35:17.910638 kernel: loop6: detected capacity change from 0 to 210664
Apr 30 03:35:17.940795 kernel: loop7: detected capacity change from 0 to 8
Apr 30 03:35:17.942249 (sd-merge)[1211]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Apr 30 03:35:17.942664 (sd-merge)[1211]: Merged extensions into '/usr'.
Apr 30 03:35:17.946454 systemd[1]: Reloading requested from client PID 1183 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 30 03:35:17.946533 systemd[1]: Reloading...
Apr 30 03:35:18.012640 zram_generator::config[1233]: No configuration found.
Apr 30 03:35:18.112659 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:35:18.164250 systemd[1]: Reloading finished in 217 ms.
Apr 30 03:35:18.187638 ldconfig[1178]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 30 03:35:18.188746 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 30 03:35:18.197755 systemd[1]: Starting ensure-sysext.service...
Apr 30 03:35:18.199411 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 03:35:18.200400 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 30 03:35:18.221360 systemd[1]: Reloading requested from client PID 1279 ('systemctl') (unit ensure-sysext.service)...
Apr 30 03:35:18.221372 systemd[1]: Reloading...
Apr 30 03:35:18.239326 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 30 03:35:18.239914 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 30 03:35:18.240683 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 30 03:35:18.240985 systemd-tmpfiles[1280]: ACLs are not supported, ignoring.
Apr 30 03:35:18.241081 systemd-tmpfiles[1280]: ACLs are not supported, ignoring.
Apr 30 03:35:18.243599 systemd-tmpfiles[1280]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 03:35:18.243685 systemd-tmpfiles[1280]: Skipping /boot
Apr 30 03:35:18.249726 systemd-tmpfiles[1280]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 03:35:18.249792 systemd-tmpfiles[1280]: Skipping /boot
Apr 30 03:35:18.311385 zram_generator::config[1316]: No configuration found.
Apr 30 03:35:18.404535 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:35:18.454881 systemd[1]: Reloading finished in 233 ms.
Apr 30 03:35:18.470872 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 03:35:18.471865 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:35:18.483749 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 03:35:18.493767 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 30 03:35:18.497735 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 30 03:35:18.503743 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 03:35:18.513840 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:35:18.520821 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 30 03:35:18.527669 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 03:35:18.531127 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:35:18.531400 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:35:18.538210 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:35:18.541788 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:35:18.544322 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:35:18.545907 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:35:18.546007 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:35:18.546679 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:35:18.546839 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:35:18.554380 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:35:18.554553 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:35:18.561288 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:35:18.562751 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:35:18.562894 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:35:18.564776 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 30 03:35:18.575474 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:35:18.576682 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:35:18.581877 systemd-udevd[1358]: Using default interface naming scheme 'v255'.
Apr 30 03:35:18.582180 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 03:35:18.583122 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:35:18.583255 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:35:18.583793 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:35:18.584334 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:35:18.586659 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:35:18.587784 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:35:18.592352 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 03:35:18.592503 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 03:35:18.595245 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 03:35:18.595411 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 03:35:18.598933 systemd[1]: Finished ensure-sysext.service.
Apr 30 03:35:18.599591 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 30 03:35:18.604344 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 30 03:35:18.606090 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 03:35:18.607457 augenrules[1390]: No rules
Apr 30 03:35:18.606217 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 03:35:18.607932 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 30 03:35:18.621776 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 30 03:35:18.623798 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 30 03:35:18.630501 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 30 03:35:18.631081 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 03:35:18.643148 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:35:18.651730 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:35:18.652406 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 03:35:18.702574 systemd-resolved[1357]: Positive Trust Anchors: Apr 30 03:35:18.702593 systemd-resolved[1357]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:35:18.703183 systemd-resolved[1357]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:35:18.711761 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 30 03:35:18.714669 systemd-resolved[1357]: Using system hostname 'ci-4081-3-3-9-5ae3ade3a2'. Apr 30 03:35:18.715829 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:35:18.716400 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:35:18.750995 systemd-networkd[1402]: lo: Link UP Apr 30 03:35:18.751007 systemd-networkd[1402]: lo: Gained carrier Apr 30 03:35:18.752078 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 03:35:18.752702 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 03:35:18.754861 systemd-networkd[1402]: Enumeration completed Apr 30 03:35:18.754919 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:35:18.755758 systemd[1]: Reached target network.target - Network. Apr 30 03:35:18.757591 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:35:18.757599 systemd-networkd[1402]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:35:18.758034 systemd-timesyncd[1398]: No network connectivity, watching for changes. Apr 30 03:35:18.758917 systemd-networkd[1402]: eth0: Link UP Apr 30 03:35:18.758924 systemd-networkd[1402]: eth0: Gained carrier Apr 30 03:35:18.758935 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:35:18.763664 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 03:35:18.789100 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:35:18.796739 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 30 03:35:18.801672 kernel: ACPI: button: Power Button [PWRF] Apr 30 03:35:18.804264 systemd-networkd[1402]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:35:18.804275 systemd-networkd[1402]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
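[Editor's note: the "Positive Trust Anchors" entry is systemd-resolved loading the built-in root-zone DNSSEC key (the 2017 root KSK, key tag 20326); the long list of negative anchors exempts private and special-use zones from validation. DNSSEC behaviour is controlled in resolved.conf; a sketch with illustrative defaults, not values read from this host:]

    # /etc/systemd/resolved.conf (illustrative)
    [Resolve]
    # validate where possible, fall back if the upstream strips DNSSEC records
    DNSSEC=allow-downgrade
    # DNS= and FallbackDNS= left unset: resolved then uses per-link servers
    # learned from DHCP, as this machine does
    #DNS=
    #FallbackDNS=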
Apr 30 03:35:18.805521 systemd-networkd[1402]: eth1: Link UP Apr 30 03:35:18.805530 systemd-networkd[1402]: eth1: Gained carrier Apr 30 03:35:18.805541 systemd-networkd[1402]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:35:18.810677 systemd-networkd[1402]: eth0: DHCPv4 address 135.181.100.111/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 30 03:35:18.812084 systemd-timesyncd[1398]: Network configuration changed, trying to establish connection. Apr 30 03:35:18.818669 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1404) Apr 30 03:35:18.828669 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 03:35:18.839728 systemd-networkd[1402]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 03:35:18.840442 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Apr 30 03:35:18.842888 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:35:18.843742 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:35:18.850810 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:35:18.853464 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:35:18.855963 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:35:18.857016 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:35:18.857050 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 03:35:18.857060 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:35:18.857333 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:35:18.857453 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:35:18.869946 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:35:18.870111 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:35:18.870775 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:35:18.874628 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 30 03:35:18.879929 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 30 03:35:18.888567 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 30 03:35:18.888724 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 30 03:35:18.888819 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Apr 30 03:35:18.881816 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 03:35:18.886253 systemd[1]: modprobe@loop.service: Deactivated successfully. 
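[Editor's note: the "found matching network ... based on potentially unpredictable interface name" lines mean eth0 and eth1 matched Flatcar's catch-all /usr/lib/systemd/network/zz-default.network. In outline that unit is a match-everything DHCP policy, roughly like the sketch below (simplified; the shipped file carries more options):]

    [Match]
    # matches any interface not claimed by an earlier .network file
    Name=*

    [Network]
    DHCP=yes

[The /32 leases above (135.181.100.111/32 and 10.0.0.3/32, each with a gateway outside the prefix) are Hetzner's point-to-point style; networkd derives the on-link host route to the gateway from the lease.]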
Apr 30 03:35:18.886390 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:35:18.887260 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:35:18.911545 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 03:35:18.926512 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Apr 30 03:35:18.937175 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Apr 30 03:35:18.939647 kernel: EDAC MC: Ver: 3.0.0 Apr 30 03:35:18.940900 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:35:18.947051 kernel: Console: switching to colour dummy device 80x25 Apr 30 03:35:18.951820 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Apr 30 03:35:18.951868 kernel: [drm] features: -context_init Apr 30 03:35:18.952915 kernel: [drm] number of scanouts: 1 Apr 30 03:35:18.953668 kernel: [drm] number of cap sets: 0 Apr 30 03:35:18.955633 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Apr 30 03:35:18.962641 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Apr 30 03:35:18.962688 kernel: Console: switching to colour frame buffer device 160x50 Apr 30 03:35:18.967654 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Apr 30 03:35:18.971903 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:35:18.972068 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:35:18.975797 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:35:18.977828 systemd-timesyncd[1398]: Contacted time server 168.119.211.223:123 (1.flatcar.pool.ntp.org). Apr 30 03:35:18.977921 systemd-timesyncd[1398]: Initial clock synchronization to Wed 2025-04-30 03:35:19.041561 UTC. Apr 30 03:35:18.981099 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:35:18.981366 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:35:18.983632 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:35:19.060074 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:35:19.097148 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 03:35:19.107863 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 03:35:19.121763 lvm[1465]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:35:19.151875 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 03:35:19.152270 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:35:19.152404 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:35:19.153027 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 03:35:19.153900 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 03:35:19.154417 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 03:35:19.154895 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 03:35:19.155011 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
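[Editor's note: systemd-timesyncd above reaches the Flatcar NTP pool (1.flatcar.pool.ntp.org) and steps the clock, which is why the "Initial clock synchronization" target time sits slightly ahead of the journal timestamps around it. The server list lives in timesyncd.conf; an illustrative sketch:]

    # /etc/systemd/timesyncd.conf (illustrative)
    [Time]
    NTP=0.flatcar.pool.ntp.org 1.flatcar.pool.ntp.org
    FallbackNTP=pool.ntp.org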
Apr 30 03:35:19.155321 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 03:35:19.155364 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:35:19.155493 systemd[1]: Reached target timers.target - Timer Units. Apr 30 03:35:19.158849 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 03:35:19.161959 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 03:35:19.171813 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 03:35:19.173163 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 03:35:19.173553 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 03:35:19.173690 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:35:19.174262 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:35:19.176280 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:35:19.176312 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:35:19.177726 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 03:35:19.184758 lvm[1469]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:35:19.185284 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 03:35:19.190824 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 03:35:19.203725 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 03:35:19.206806 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 03:35:19.207236 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 03:35:19.211785 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 03:35:19.244833 coreos-metadata[1471]: Apr 30 03:35:19.235 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Apr 30 03:35:19.244833 coreos-metadata[1471]: Apr 30 03:35:19.239 INFO Fetch successful Apr 30 03:35:19.244833 coreos-metadata[1471]: Apr 30 03:35:19.239 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Apr 30 03:35:19.244833 coreos-metadata[1471]: Apr 30 03:35:19.239 INFO Fetch successful Apr 30 03:35:19.217711 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 03:35:19.253093 dbus-daemon[1472]: [system] SELinux support is enabled Apr 30 03:35:19.253575 jq[1473]: false Apr 30 03:35:19.220864 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Apr 30 03:35:19.225841 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 03:35:19.230872 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 03:35:19.239759 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 03:35:19.242851 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 03:35:19.243303 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
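[Editor's note: the two coreos-metadata fetches above hit Hetzner's link-local metadata service. The same endpoints, taken verbatim from the log, can be queried by hand when debugging provisioning:]

    $ curl -s http://169.254.169.254/hetzner/v1/metadata
    $ curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks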
Apr 30 03:35:19.250857 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 03:35:19.254372 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 03:35:19.255916 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 03:35:19.262008 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 03:35:19.266133 jq[1486]: true Apr 30 03:35:19.275993 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 03:35:19.276836 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 03:35:19.277064 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 03:35:19.277180 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 03:35:19.295942 update_engine[1484]: I20250430 03:35:19.295877 1484 main.cc:92] Flatcar Update Engine starting Apr 30 03:35:19.299821 update_engine[1484]: I20250430 03:35:19.298779 1484 update_check_scheduler.cc:74] Next update check in 7m28s Apr 30 03:35:19.303350 systemd[1]: Started update-engine.service - Update Engine. Apr 30 03:35:19.307730 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 03:35:19.307767 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 03:35:19.308186 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 03:35:19.308206 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 03:35:19.316785 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 03:35:19.329538 (ntainerd)[1507]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 03:35:19.336706 jq[1496]: true Apr 30 03:35:19.339644 extend-filesystems[1476]: Found loop4 Apr 30 03:35:19.339644 extend-filesystems[1476]: Found loop5 Apr 30 03:35:19.339644 extend-filesystems[1476]: Found loop6 Apr 30 03:35:19.339644 extend-filesystems[1476]: Found loop7 Apr 30 03:35:19.339644 extend-filesystems[1476]: Found sda Apr 30 03:35:19.339644 extend-filesystems[1476]: Found sda1 Apr 30 03:35:19.339644 extend-filesystems[1476]: Found sda2 Apr 30 03:35:19.339644 extend-filesystems[1476]: Found sda3 Apr 30 03:35:19.339644 extend-filesystems[1476]: Found usr Apr 30 03:35:19.339644 extend-filesystems[1476]: Found sda4 Apr 30 03:35:19.339644 extend-filesystems[1476]: Found sda6 Apr 30 03:35:19.339644 extend-filesystems[1476]: Found sda7 Apr 30 03:35:19.339644 extend-filesystems[1476]: Found sda9 Apr 30 03:35:19.339644 extend-filesystems[1476]: Checking size of /dev/sda9 Apr 30 03:35:19.386452 extend-filesystems[1476]: Resized partition /dev/sda9 Apr 30 03:35:19.395290 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Apr 30 03:35:19.356023 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 03:35:19.395494 extend-filesystems[1521]: resize2fs 1.47.1 (20-May-2024) Apr 30 03:35:19.401350 tar[1499]: linux-amd64/helm Apr 30 03:35:19.356754 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
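[Editor's note: extend-filesystems is about to grow the root ext4 filesystem online, from 1617920 to 9393147 4k blocks (roughly 6 GiB to 36 GiB), so the image fills the provisioned disk. Done by hand the equivalent is roughly the sketch below; device names are from this log, and growpart is an assumption for the partition step, which Flatcar actually performs earlier in boot before the service runs resize2fs:]

    # grow the partition table entry first (if not already done earlier in boot)
    $ growpart /dev/sda 9
    # then grow the mounted ext4 filesystem online
    $ resize2fs /dev/sda9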
Apr 30 03:35:19.377140 systemd-logind[1483]: New seat seat0. Apr 30 03:35:19.405467 systemd-logind[1483]: Watching system buttons on /dev/input/event2 (Power Button) Apr 30 03:35:19.405482 systemd-logind[1483]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 30 03:35:19.405656 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 03:35:19.427974 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 03:35:19.433945 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 03:35:19.499598 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1405) Apr 30 03:35:19.552142 bash[1541]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:35:19.552917 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 03:35:19.564858 systemd[1]: Starting sshkeys.service... Apr 30 03:35:19.610995 sshd_keygen[1489]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 03:35:19.606435 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 30 03:35:19.618880 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 30 03:35:19.645489 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Apr 30 03:35:19.647283 extend-filesystems[1521]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 30 03:35:19.647283 extend-filesystems[1521]: old_desc_blocks = 1, new_desc_blocks = 5 Apr 30 03:35:19.647283 extend-filesystems[1521]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Apr 30 03:35:19.650028 locksmithd[1509]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 03:35:19.661292 extend-filesystems[1476]: Resized filesystem in /dev/sda9 Apr 30 03:35:19.661292 extend-filesystems[1476]: Found sr0 Apr 30 03:35:19.668564 coreos-metadata[1555]: Apr 30 03:35:19.664 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Apr 30 03:35:19.661935 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 03:35:19.662090 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 03:35:19.672101 coreos-metadata[1555]: Apr 30 03:35:19.671 INFO Fetch successful Apr 30 03:35:19.674016 unknown[1555]: wrote ssh authorized keys file for user: core Apr 30 03:35:19.678347 containerd[1507]: time="2025-04-30T03:35:19.677826718Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 03:35:19.680187 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 03:35:19.693539 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 03:35:19.707392 update-ssh-keys[1567]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:35:19.709978 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 30 03:35:19.717873 systemd[1]: Finished sshkeys.service. Apr 30 03:35:19.718642 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 03:35:19.718777 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 03:35:19.731870 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 03:35:19.743898 containerd[1507]: time="2025-04-30T03:35:19.742667873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Apr 30 03:35:19.744603 containerd[1507]: time="2025-04-30T03:35:19.744108033Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:35:19.744603 containerd[1507]: time="2025-04-30T03:35:19.744142731Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 03:35:19.744603 containerd[1507]: time="2025-04-30T03:35:19.744157032Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 03:35:19.744603 containerd[1507]: time="2025-04-30T03:35:19.744309107Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 03:35:19.744603 containerd[1507]: time="2025-04-30T03:35:19.744326235Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 03:35:19.744603 containerd[1507]: time="2025-04-30T03:35:19.744381757Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:35:19.744603 containerd[1507]: time="2025-04-30T03:35:19.744392180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:35:19.744603 containerd[1507]: time="2025-04-30T03:35:19.744527190Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:35:19.744603 containerd[1507]: time="2025-04-30T03:35:19.744539480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 03:35:19.744603 containerd[1507]: time="2025-04-30T03:35:19.744550478Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:35:19.744603 containerd[1507]: time="2025-04-30T03:35:19.744559163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 03:35:19.744809 containerd[1507]: time="2025-04-30T03:35:19.744625834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:35:19.744809 containerd[1507]: time="2025-04-30T03:35:19.744791332Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:35:19.745098 containerd[1507]: time="2025-04-30T03:35:19.744867355Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:35:19.745098 containerd[1507]: time="2025-04-30T03:35:19.744886088Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Apr 30 03:35:19.745098 containerd[1507]: time="2025-04-30T03:35:19.744965596Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 03:35:19.745098 containerd[1507]: time="2025-04-30T03:35:19.745034954Z" level=info msg="metadata content store policy set" policy=shared Apr 30 03:35:19.748805 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 03:35:19.757685 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 03:35:19.759970 containerd[1507]: time="2025-04-30T03:35:19.759932047Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 03:35:19.760005 containerd[1507]: time="2025-04-30T03:35:19.759994043Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 03:35:19.760023 containerd[1507]: time="2025-04-30T03:35:19.760008989Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 03:35:19.760048 containerd[1507]: time="2025-04-30T03:35:19.760036407Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 03:35:19.760066 containerd[1507]: time="2025-04-30T03:35:19.760052505Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 03:35:19.760218 containerd[1507]: time="2025-04-30T03:35:19.760194372Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 03:35:19.760466 containerd[1507]: time="2025-04-30T03:35:19.760445466Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 03:35:19.760547 containerd[1507]: time="2025-04-30T03:35:19.760527095Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 03:35:19.760567 containerd[1507]: time="2025-04-30T03:35:19.760547323Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 03:35:19.760567 containerd[1507]: time="2025-04-30T03:35:19.760558956Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 03:35:19.760600 containerd[1507]: time="2025-04-30T03:35:19.760571953Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 03:35:19.760600 containerd[1507]: time="2025-04-30T03:35:19.760583698Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 03:35:19.760600 containerd[1507]: time="2025-04-30T03:35:19.760594342Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 03:35:19.760669 containerd[1507]: time="2025-04-30T03:35:19.760606945Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 03:35:19.763174 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 03:35:19.765539 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 03:35:19.767321 containerd[1507]: time="2025-04-30T03:35:19.766726590Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Apr 30 03:35:19.767321 containerd[1507]: time="2025-04-30T03:35:19.766907933Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 03:35:19.767321 containerd[1507]: time="2025-04-30T03:35:19.767048427Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 03:35:19.767321 containerd[1507]: time="2025-04-30T03:35:19.767067019Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 03:35:19.767929 containerd[1507]: time="2025-04-30T03:35:19.767106364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 03:35:19.767963 containerd[1507]: time="2025-04-30T03:35:19.767935256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 03:35:19.767963 containerd[1507]: time="2025-04-30T03:35:19.767949859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 03:35:19.768001 containerd[1507]: time="2025-04-30T03:35:19.767969643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 03:35:19.768118 containerd[1507]: time="2025-04-30T03:35:19.768092493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 03:35:19.768118 containerd[1507]: time="2025-04-30T03:35:19.768116740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 03:35:19.768165 containerd[1507]: time="2025-04-30T03:35:19.768130101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 03:35:19.768165 containerd[1507]: time="2025-04-30T03:35:19.768144027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 03:35:19.768279 containerd[1507]: time="2025-04-30T03:35:19.768260062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 03:35:19.768305 containerd[1507]: time="2025-04-30T03:35:19.768285510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 03:35:19.768335 containerd[1507]: time="2025-04-30T03:35:19.768303102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 03:35:19.768357 containerd[1507]: time="2025-04-30T03:35:19.768350212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 03:35:19.768375 containerd[1507]: time="2025-04-30T03:35:19.768366815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 03:35:19.768395 containerd[1507]: time="2025-04-30T03:35:19.768384054Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 03:35:19.768442 containerd[1507]: time="2025-04-30T03:35:19.768410461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 03:35:19.768467 containerd[1507]: time="2025-04-30T03:35:19.768441859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Apr 30 03:35:19.768467 containerd[1507]: time="2025-04-30T03:35:19.768458765Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 03:35:19.768547 containerd[1507]: time="2025-04-30T03:35:19.768528232Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 03:35:19.768573 containerd[1507]: time="2025-04-30T03:35:19.768553752Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 03:35:19.768573 containerd[1507]: time="2025-04-30T03:35:19.768565719Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 03:35:19.768682 containerd[1507]: time="2025-04-30T03:35:19.768662132Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 03:35:19.768682 containerd[1507]: time="2025-04-30T03:35:19.768679351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 03:35:19.768721 containerd[1507]: time="2025-04-30T03:35:19.768692631Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 03:35:19.768721 containerd[1507]: time="2025-04-30T03:35:19.768706596Z" level=info msg="NRI interface is disabled by configuration." Apr 30 03:35:19.768721 containerd[1507]: time="2025-04-30T03:35:19.768715665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 03:35:19.769235 containerd[1507]: time="2025-04-30T03:35:19.769021728Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 03:35:19.769235 containerd[1507]: time="2025-04-30T03:35:19.769089621Z" level=info msg="Connect containerd service" Apr 30 03:35:19.769235 containerd[1507]: time="2025-04-30T03:35:19.769118564Z" level=info msg="using legacy CRI server" Apr 30 03:35:19.769235 containerd[1507]: time="2025-04-30T03:35:19.769138732Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 03:35:19.769398 containerd[1507]: time="2025-04-30T03:35:19.769236689Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 03:35:19.769951 containerd[1507]: time="2025-04-30T03:35:19.769924261Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 03:35:19.770114 containerd[1507]: time="2025-04-30T03:35:19.769987297Z" level=info msg="Start subscribing containerd event" Apr 30 03:35:19.770114 containerd[1507]: time="2025-04-30T03:35:19.770029338Z" level=info msg="Start recovering state" Apr 30 03:35:19.770114 containerd[1507]: time="2025-04-30T03:35:19.770079074Z" level=info msg="Start event monitor" Apr 30 03:35:19.770114 containerd[1507]: time="2025-04-30T03:35:19.770104533Z" level=info msg="Start snapshots syncer" Apr 30 03:35:19.770114 containerd[1507]: time="2025-04-30T03:35:19.770111946Z" level=info msg="Start cni network conf syncer for default" Apr 30 03:35:19.770199 containerd[1507]: time="2025-04-30T03:35:19.770119823Z" level=info msg="Start streaming server" Apr 30 03:35:19.770630 containerd[1507]: time="2025-04-30T03:35:19.770521035Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 03:35:19.770660 containerd[1507]: time="2025-04-30T03:35:19.770640008Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 03:35:19.770768 containerd[1507]: time="2025-04-30T03:35:19.770693379Z" level=info msg="containerd successfully booted in 0.093716s" Apr 30 03:35:19.770929 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 03:35:20.008287 tar[1499]: linux-amd64/LICENSE Apr 30 03:35:20.008427 tar[1499]: linux-amd64/README.md Apr 30 03:35:20.017443 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 03:35:20.484342 systemd-networkd[1402]: eth1: Gained IPv6LL Apr 30 03:35:20.490505 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 03:35:20.492371 systemd[1]: Reached target network-online.target - Network is Online. 
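[Editor's note: the CRI config dump above carries the settings that matter for Kubernetes: the overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:true, and pause:3.8 as the sandbox image. Expressed as a containerd 1.7 config.toml fragment, that corresponds roughly to the sketch below, reconstructed from the dump rather than copied from the file on disk:]

    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"

    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      default_runtime_name = "runc"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      # cgroups are delegated to systemd, matching SystemdCgroup:true in the dump
      SystemdCgroup = true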
Apr 30 03:35:20.501965 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:35:20.507795 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 03:35:20.547228 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 03:35:20.740491 systemd-networkd[1402]: eth0: Gained IPv6LL Apr 30 03:35:21.596429 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:35:21.599424 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 03:35:21.605212 systemd[1]: Startup finished in 1.376s (kernel) + 6.903s (initrd) + 4.990s (userspace) = 13.271s. Apr 30 03:35:21.608992 (kubelet)[1603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:35:22.644515 kubelet[1603]: E0430 03:35:22.644408 1603 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:35:22.646669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:35:22.646951 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:35:22.647413 systemd[1]: kubelet.service: Consumed 1.568s CPU time. Apr 30 03:35:32.741641 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 03:35:32.753031 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:35:32.875893 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:35:32.887843 (kubelet)[1624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:35:32.935464 kubelet[1624]: E0430 03:35:32.935356 1624 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:35:32.941669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:35:32.941858 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:35:42.991604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 03:35:43.002941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:35:43.125745 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:35:43.128594 (kubelet)[1640]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:35:43.168520 kubelet[1640]: E0430 03:35:43.168446 1640 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:35:43.170467 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:35:43.170738 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
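[Editor's note: the kubelet exit above is expected at this stage of provisioning: the unit starts before any /var/lib/kubelet/config.yaml exists, and that file is normally written by kubeadm init/join. For reference, a minimal hand-written KubeletConfiguration looks like the sketch below; values are illustrative, and cgroupDriver must agree with containerd's SystemdCgroup=true setting seen earlier:]

    # /var/lib/kubelet/config.yaml (illustrative minimum)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd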
Apr 30 03:35:53.241692 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 30 03:35:53.252986 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:35:53.390407 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:35:53.393876 (kubelet)[1656]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:35:53.441603 kubelet[1656]: E0430 03:35:53.441520 1656 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:35:53.444569 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:35:53.444837 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:36:03.491931 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 30 03:36:03.501115 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:36:03.620743 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:36:03.629839 (kubelet)[1672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:36:03.674933 kubelet[1672]: E0430 03:36:03.674798 1672 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:36:03.677326 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:36:03.677563 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:36:04.378752 update_engine[1484]: I20250430 03:36:04.378546 1484 update_attempter.cc:509] Updating boot flags... Apr 30 03:36:04.431706 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1690) Apr 30 03:36:04.491683 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1693) Apr 30 03:36:13.741524 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 30 03:36:13.748885 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:36:13.897248 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:36:13.908958 (kubelet)[1707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:36:13.954741 kubelet[1707]: E0430 03:36:13.954658 1707 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:36:13.958883 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:36:13.959036 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
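[Editor's note: the steady ~10-second cadence of "Scheduled restart job, restart counter is at N" entries, here and below up through counter 11, is systemd's Restart= logic re-launching the failing kubelet until its config eventually appears. The behaviour corresponds to a service section along these lines, a sketch of the relevant directives rather than the verbatim unit:]

    [Service]
    Restart=always
    RestartSec=10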
Apr 30 03:36:23.991429 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 30 03:36:23.998000 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:36:24.148051 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:36:24.152303 (kubelet)[1723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:36:24.187219 kubelet[1723]: E0430 03:36:24.187148 1723 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:36:24.190384 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:36:24.190509 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:36:34.241948 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Apr 30 03:36:34.249009 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:36:34.420788 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:36:34.432045 (kubelet)[1740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:36:34.503372 kubelet[1740]: E0430 03:36:34.503213 1740 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:36:34.506685 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:36:34.506940 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:36:44.741338 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Apr 30 03:36:44.751936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:36:44.885065 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:36:44.888209 (kubelet)[1755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:36:44.927440 kubelet[1755]: E0430 03:36:44.927357 1755 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:36:44.931434 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:36:44.931593 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:36:54.991475 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Apr 30 03:36:55.000443 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:36:55.140108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 03:36:55.144348 (kubelet)[1773]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:36:55.180361 kubelet[1773]: E0430 03:36:55.180296 1773 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:36:55.183394 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:36:55.183542 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:37:03.075966 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 03:37:03.081210 systemd[1]: Started sshd@0-135.181.100.111:22-139.178.68.195:40606.service - OpenSSH per-connection server daemon (139.178.68.195:40606). Apr 30 03:37:04.084688 sshd[1782]: Accepted publickey for core from 139.178.68.195 port 40606 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:37:04.088194 sshd[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:37:04.104550 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 03:37:04.114971 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 03:37:04.120731 systemd-logind[1483]: New session 1 of user core. Apr 30 03:37:04.137900 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 03:37:04.146012 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 03:37:04.162538 (systemd)[1786]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 03:37:04.318463 systemd[1786]: Queued start job for default target default.target. Apr 30 03:37:04.324667 systemd[1786]: Created slice app.slice - User Application Slice. Apr 30 03:37:04.324695 systemd[1786]: Reached target paths.target - Paths. Apr 30 03:37:04.324708 systemd[1786]: Reached target timers.target - Timers. Apr 30 03:37:04.326085 systemd[1786]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 03:37:04.350361 systemd[1786]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 03:37:04.350500 systemd[1786]: Reached target sockets.target - Sockets. Apr 30 03:37:04.350517 systemd[1786]: Reached target basic.target - Basic System. Apr 30 03:37:04.350567 systemd[1786]: Reached target default.target - Main User Target. Apr 30 03:37:04.350599 systemd[1786]: Startup finished in 178ms. Apr 30 03:37:04.350711 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 03:37:04.357772 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 03:37:05.047044 systemd[1]: Started sshd@1-135.181.100.111:22-139.178.68.195:40610.service - OpenSSH per-connection server daemon (139.178.68.195:40610). Apr 30 03:37:05.241392 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Apr 30 03:37:05.247256 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:37:05.411585 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
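[Editor's note: the "sshd@0-135.181.100.111:22-139.178.68.195:40606.service" instance names above are systemd socket activation with per-connection instances: sshd.socket accepts each TCP connection itself and spawns one templated sshd@.service for it. The pattern in sketch form, where Accept=yes is what produces the numbered instances:]

    # sshd.socket (sketch)
    [Socket]
    ListenStream=22
    Accept=yes

    [Install]
    WantedBy=sockets.target

[Each accepted connection then appears as sshd@<counter>-<local>:22-<peer>:<port>.service, exactly as logged here.]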
Apr 30 03:37:05.416693 (kubelet)[1807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:37:05.467837 kubelet[1807]: E0430 03:37:05.467754 1807 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:37:05.471154 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:37:05.471407 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:37:06.036145 sshd[1797]: Accepted publickey for core from 139.178.68.195 port 40610 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:37:06.038542 sshd[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:37:06.046518 systemd-logind[1483]: New session 2 of user core. Apr 30 03:37:06.060840 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 03:37:06.714401 sshd[1797]: pam_unix(sshd:session): session closed for user core Apr 30 03:37:06.718923 systemd[1]: sshd@1-135.181.100.111:22-139.178.68.195:40610.service: Deactivated successfully. Apr 30 03:37:06.722202 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 03:37:06.724670 systemd-logind[1483]: Session 2 logged out. Waiting for processes to exit. Apr 30 03:37:06.726587 systemd-logind[1483]: Removed session 2. Apr 30 03:37:06.885299 systemd[1]: Started sshd@2-135.181.100.111:22-139.178.68.195:52228.service - OpenSSH per-connection server daemon (139.178.68.195:52228). Apr 30 03:37:07.873137 sshd[1820]: Accepted publickey for core from 139.178.68.195 port 52228 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:37:07.875488 sshd[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:37:07.880679 systemd-logind[1483]: New session 3 of user core. Apr 30 03:37:07.887768 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 03:37:08.544867 sshd[1820]: pam_unix(sshd:session): session closed for user core Apr 30 03:37:08.548573 systemd[1]: sshd@2-135.181.100.111:22-139.178.68.195:52228.service: Deactivated successfully. Apr 30 03:37:08.551230 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 03:37:08.553455 systemd-logind[1483]: Session 3 logged out. Waiting for processes to exit. Apr 30 03:37:08.555102 systemd-logind[1483]: Removed session 3. Apr 30 03:37:08.713266 systemd[1]: Started sshd@3-135.181.100.111:22-139.178.68.195:52230.service - OpenSSH per-connection server daemon (139.178.68.195:52230). Apr 30 03:37:09.706995 sshd[1827]: Accepted publickey for core from 139.178.68.195 port 52230 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:37:09.709336 sshd[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:37:09.716720 systemd-logind[1483]: New session 4 of user core. Apr 30 03:37:09.724876 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 03:37:10.387062 sshd[1827]: pam_unix(sshd:session): session closed for user core Apr 30 03:37:10.392209 systemd[1]: sshd@3-135.181.100.111:22-139.178.68.195:52230.service: Deactivated successfully. Apr 30 03:37:10.392759 systemd-logind[1483]: Session 4 logged out. Waiting for processes to exit. 
Apr 30 03:37:10.394304 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 03:37:10.396478 systemd-logind[1483]: Removed session 4. Apr 30 03:37:10.569657 systemd[1]: Started sshd@4-135.181.100.111:22-139.178.68.195:52246.service - OpenSSH per-connection server daemon (139.178.68.195:52246). Apr 30 03:37:11.561858 sshd[1834]: Accepted publickey for core from 139.178.68.195 port 52246 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:37:11.564061 sshd[1834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:37:11.571953 systemd-logind[1483]: New session 5 of user core. Apr 30 03:37:11.582913 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 03:37:12.095548 sudo[1837]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 03:37:12.096012 sudo[1837]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:37:12.114686 sudo[1837]: pam_unix(sudo:session): session closed for user root Apr 30 03:37:12.274284 sshd[1834]: pam_unix(sshd:session): session closed for user core Apr 30 03:37:12.279009 systemd[1]: sshd@4-135.181.100.111:22-139.178.68.195:52246.service: Deactivated successfully. Apr 30 03:37:12.282155 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 03:37:12.284275 systemd-logind[1483]: Session 5 logged out. Waiting for processes to exit. Apr 30 03:37:12.286082 systemd-logind[1483]: Removed session 5. Apr 30 03:37:12.446979 systemd[1]: Started sshd@5-135.181.100.111:22-139.178.68.195:52250.service - OpenSSH per-connection server daemon (139.178.68.195:52250). Apr 30 03:37:13.441385 sshd[1842]: Accepted publickey for core from 139.178.68.195 port 52250 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:37:13.443649 sshd[1842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:37:13.450890 systemd-logind[1483]: New session 6 of user core. Apr 30 03:37:13.461848 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 03:37:13.965604 sudo[1846]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 03:37:13.966273 sudo[1846]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:37:13.972717 sudo[1846]: pam_unix(sudo:session): session closed for user root Apr 30 03:37:13.980291 sudo[1845]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 30 03:37:13.980742 sudo[1845]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:37:13.999018 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 30 03:37:14.003663 auditctl[1849]: No rules Apr 30 03:37:14.004003 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 03:37:14.004216 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 30 03:37:14.011063 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:37:14.057466 augenrules[1867]: No rules Apr 30 03:37:14.058376 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:37:14.059902 sudo[1845]: pam_unix(sudo:session): session closed for user root Apr 30 03:37:14.218829 sshd[1842]: pam_unix(sshd:session): session closed for user core Apr 30 03:37:14.223283 systemd[1]: sshd@5-135.181.100.111:22-139.178.68.195:52250.service: Deactivated successfully. 
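[Editor's note: the auditctl/augenrules exchange above ("No rules" both times) is audit-rules.service flushing the kernel ruleset and then finding nothing to load, since the two default rule files were just removed via sudo. Rules normally live as fragments under /etc/audit/rules.d/; a hypothetical example file:]

    # /etc/audit/rules.d/10-identity.rules (hypothetical example)
    # watch credential databases for writes and attribute changes
    -w /etc/passwd -p wa -k identity
    -w /etc/shadow -p wa -k identity

[augenrules --load concatenates the fragments and feeds them to auditctl, which is what audit-rules.service wraps.]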
Apr 30 03:37:14.225941 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 03:37:14.227871 systemd-logind[1483]: Session 6 logged out. Waiting for processes to exit. Apr 30 03:37:14.229794 systemd-logind[1483]: Removed session 6. Apr 30 03:37:14.391322 systemd[1]: Started sshd@6-135.181.100.111:22-139.178.68.195:52256.service - OpenSSH per-connection server daemon (139.178.68.195:52256). Apr 30 03:37:15.380467 sshd[1875]: Accepted publickey for core from 139.178.68.195 port 52256 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:37:15.383031 sshd[1875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:37:15.390235 systemd-logind[1483]: New session 7 of user core. Apr 30 03:37:15.405952 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 03:37:15.491513 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Apr 30 03:37:15.497186 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:37:15.643695 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:37:15.646348 (kubelet)[1886]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:37:15.679786 kubelet[1886]: E0430 03:37:15.679733 1886 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:37:15.683451 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:37:15.683586 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:37:15.901046 sudo[1894]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 03:37:15.901535 sudo[1894]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:37:16.361111 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 03:37:16.361130 (dockerd)[1909]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 03:37:16.759489 dockerd[1909]: time="2025-04-30T03:37:16.759437267Z" level=info msg="Starting up" Apr 30 03:37:16.928600 dockerd[1909]: time="2025-04-30T03:37:16.928503432Z" level=info msg="Loading containers: start." Apr 30 03:37:17.067665 kernel: Initializing XFRM netlink socket Apr 30 03:37:17.145651 systemd-networkd[1402]: docker0: Link UP Apr 30 03:37:17.166512 dockerd[1909]: time="2025-04-30T03:37:17.166438896Z" level=info msg="Loading containers: done." 
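dockerd coming up here creates the docker0 bridge (the systemd-networkd "Link UP" entry) and initializes the XFRM netlink socket used for its network plumbing. A quick way to confirm the daemon state these entries describe, offered as a generic check rather than anything run on this host:

    docker info --format '{{.ServerVersion}} / {{.Driver}}'
    # expected here: 26.1.0 / overlay2, per the daemon log just below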
Apr 30 03:37:17.187016 dockerd[1909]: time="2025-04-30T03:37:17.186940086Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 03:37:17.187241 dockerd[1909]: time="2025-04-30T03:37:17.187064188Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 30 03:37:17.187241 dockerd[1909]: time="2025-04-30T03:37:17.187182219Z" level=info msg="Daemon has completed initialization" Apr 30 03:37:17.235411 dockerd[1909]: time="2025-04-30T03:37:17.235293669Z" level=info msg="API listen on /run/docker.sock" Apr 30 03:37:17.235916 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 03:37:18.656308 containerd[1507]: time="2025-04-30T03:37:18.656235488Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" Apr 30 03:37:19.345952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount646910733.mount: Deactivated successfully. Apr 30 03:37:21.340338 containerd[1507]: time="2025-04-30T03:37:21.340257757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:37:21.341679 containerd[1507]: time="2025-04-30T03:37:21.341633683Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674967" Apr 30 03:37:21.343053 containerd[1507]: time="2025-04-30T03:37:21.343009879Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:37:21.346163 containerd[1507]: time="2025-04-30T03:37:21.346083431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:37:21.347051 containerd[1507]: time="2025-04-30T03:37:21.346891173Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.690610371s" Apr 30 03:37:21.347051 containerd[1507]: time="2025-04-30T03:37:21.346922161Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" Apr 30 03:37:21.370460 containerd[1507]: time="2025-04-30T03:37:21.370398923Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" Apr 30 03:37:23.163925 containerd[1507]: time="2025-04-30T03:37:23.163848077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:37:23.165033 containerd[1507]: time="2025-04-30T03:37:23.164988753Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617556" Apr 30 03:37:23.166183 containerd[1507]: time="2025-04-30T03:37:23.166149045Z" level=info msg="ImageCreate event 
name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:37:23.168898 containerd[1507]: time="2025-04-30T03:37:23.168843300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:37:23.169664 containerd[1507]: time="2025-04-30T03:37:23.169535436Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 1.799090126s" Apr 30 03:37:23.169664 containerd[1507]: time="2025-04-30T03:37:23.169562937Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" Apr 30 03:37:23.187414 containerd[1507]: time="2025-04-30T03:37:23.187373862Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" Apr 30 03:37:24.754024 containerd[1507]: time="2025-04-30T03:37:24.753964859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:37:24.755319 containerd[1507]: time="2025-04-30T03:37:24.755271405Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903704" Apr 30 03:37:24.756459 containerd[1507]: time="2025-04-30T03:37:24.756392614Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:37:24.759992 containerd[1507]: time="2025-04-30T03:37:24.759936229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:37:24.761433 containerd[1507]: time="2025-04-30T03:37:24.761225022Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.573811395s" Apr 30 03:37:24.761433 containerd[1507]: time="2025-04-30T03:37:24.761279424Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" Apr 30 03:37:24.781556 containerd[1507]: time="2025-04-30T03:37:24.781501709Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" Apr 30 03:37:25.741116 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Apr 30 03:37:25.746742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:37:25.773880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2020926498.mount: Deactivated successfully. Apr 30 03:37:25.843846 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 03:37:25.845839 (kubelet)[2146]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:37:25.882602 kubelet[2146]: E0430 03:37:25.882332 2146 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:37:25.884669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:37:25.884786 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:37:26.147276 containerd[1507]: time="2025-04-30T03:37:26.147074959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:37:26.148558 containerd[1507]: time="2025-04-30T03:37:26.148488206Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185845" Apr 30 03:37:26.149999 containerd[1507]: time="2025-04-30T03:37:26.149905931Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:37:26.152389 containerd[1507]: time="2025-04-30T03:37:26.152334590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:37:26.153735 containerd[1507]: time="2025-04-30T03:37:26.153174473Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.371630204s" Apr 30 03:37:26.153735 containerd[1507]: time="2025-04-30T03:37:26.153215430Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" Apr 30 03:37:26.177399 containerd[1507]: time="2025-04-30T03:37:26.177359598Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 30 03:37:26.720494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1002191164.mount: Deactivated successfully. 
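Each pull in this sequence also leaves a transient var-lib-containerd-tmpmounts-containerd\x2dmountNNN.mount entry: containerd mounts a scratch directory while unpacking layers, and systemd logs its cleanup. The pulls themselves are CRI PullImage requests served by containerd, so a manual equivalent would be the following, assuming the conventional containerd socket path (the log never prints it):

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-proxy:v1.30.12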
Apr 30 03:37:27.414073 containerd[1507]: time="2025-04-30T03:37:27.414003932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:37:27.415287 containerd[1507]: time="2025-04-30T03:37:27.415232243Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185843" Apr 30 03:37:27.416713 containerd[1507]: time="2025-04-30T03:37:27.416672741Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:37:27.419514 containerd[1507]: time="2025-04-30T03:37:27.419480500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:37:27.426449 containerd[1507]: time="2025-04-30T03:37:27.420827202Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.243431536s" Apr 30 03:37:27.426449 containerd[1507]: time="2025-04-30T03:37:27.420865995Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Apr 30 03:37:27.440419 containerd[1507]: time="2025-04-30T03:37:27.440358786Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 30 03:37:27.914019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1922910994.mount: Deactivated successfully. 
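The pause:3.9 image staged here is the one kubeadm's default image list carries for this Kubernetes version, but the sandbox image containerd actually uses is pinned in its own CRI config; notice that when sandboxes are created further down (03:37:35) the daemon resolves pause:3.8 instead. Where that pin conventionally lives, sketched from the standard containerd layout rather than read from this host:

    # /etc/containerd/config.toml (conventional path)
    #   [plugins."io.containerd.grpc.v1.cri"]
    #     sandbox_image = "registry.k8s.io/pause:3.8"  # what this daemon evidently uses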
Apr 30 03:37:27.923023 containerd[1507]: time="2025-04-30T03:37:27.922935443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:37:27.924547 containerd[1507]: time="2025-04-30T03:37:27.924470287Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322312" Apr 30 03:37:27.926147 containerd[1507]: time="2025-04-30T03:37:27.926072889Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:37:27.930902 containerd[1507]: time="2025-04-30T03:37:27.930814310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:37:27.932509 containerd[1507]: time="2025-04-30T03:37:27.932127670Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 491.543932ms" Apr 30 03:37:27.932509 containerd[1507]: time="2025-04-30T03:37:27.932177233Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Apr 30 03:37:27.968099 containerd[1507]: time="2025-04-30T03:37:27.968039867Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Apr 30 03:37:28.511100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1747851325.mount: Deactivated successfully. Apr 30 03:37:30.142706 containerd[1507]: time="2025-04-30T03:37:30.142643473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:37:30.143827 containerd[1507]: time="2025-04-30T03:37:30.143780163Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238653" Apr 30 03:37:30.145090 containerd[1507]: time="2025-04-30T03:37:30.145071673Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:37:30.148172 containerd[1507]: time="2025-04-30T03:37:30.148115316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:37:30.149303 containerd[1507]: time="2025-04-30T03:37:30.149101023Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.181005812s" Apr 30 03:37:30.149303 containerd[1507]: time="2025-04-30T03:37:30.149131820Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Apr 30 03:37:33.377637 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
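The etcd pull recorded just above is the largest of the set, and the log gives both the byte count and the wall time, so the effective transfer rate falls out directly:

    # 57,236,178 bytes in 2.181005812 s:
    python3 -c 'print(round(57236178 / 2.181005812 / 1e6, 1), "MB/s")'   # 26.2 MB/s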
Apr 30 03:37:33.392073 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:37:33.419887 systemd[1]: Reloading requested from client PID 2326 ('systemctl') (unit session-7.scope)... Apr 30 03:37:33.419904 systemd[1]: Reloading... Apr 30 03:37:33.533643 zram_generator::config[2369]: No configuration found. Apr 30 03:37:33.627545 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:37:33.704761 systemd[1]: Reloading finished in 284 ms. Apr 30 03:37:33.745557 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 30 03:37:33.745631 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 30 03:37:33.745820 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:37:33.752233 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:37:33.850680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:37:33.854812 (kubelet)[2418]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:37:33.898269 kubelet[2418]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:37:33.898603 kubelet[2418]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 03:37:33.898664 kubelet[2418]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:37:33.898806 kubelet[2418]: I0430 03:37:33.898771 2418 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:37:34.142839 kubelet[2418]: I0430 03:37:34.142785 2418 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 03:37:34.142839 kubelet[2418]: I0430 03:37:34.142830 2418 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:37:34.143205 kubelet[2418]: I0430 03:37:34.143178 2418 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 03:37:34.170453 kubelet[2418]: I0430 03:37:34.169927 2418 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:37:34.170453 kubelet[2418]: E0430 03:37:34.170332 2418 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://135.181.100.111:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 135.181.100.111:6443: connect: connection refused Apr 30 03:37:34.192699 kubelet[2418]: I0430 03:37:34.192664 2418 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 03:37:34.197979 kubelet[2418]: I0430 03:37:34.197913 2418 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:37:34.198264 kubelet[2418]: I0430 03:37:34.197961 2418 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-9-5ae3ade3a2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 03:37:34.198387 kubelet[2418]: I0430 03:37:34.198264 2418 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 03:37:34.198387 kubelet[2418]: I0430 03:37:34.198283 2418 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 03:37:34.198507 kubelet[2418]: I0430 03:37:34.198469 2418 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:37:34.199882 kubelet[2418]: I0430 03:37:34.199848 2418 kubelet.go:400] "Attempting to sync node with API server" Apr 30 03:37:34.199882 kubelet[2418]: I0430 03:37:34.199878 2418 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:37:34.199986 kubelet[2418]: I0430 03:37:34.199909 2418 kubelet.go:312] "Adding apiserver pod source" Apr 30 03:37:34.199986 kubelet[2418]: I0430 03:37:34.199938 2418 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:37:34.207033 kubelet[2418]: W0430 03:37:34.206038 2418 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://135.181.100.111:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 135.181.100.111:6443: connect: connection refused Apr 30 03:37:34.207033 kubelet[2418]: E0430 03:37:34.206140 2418 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://135.181.100.111:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 135.181.100.111:6443: connect: connection refused Apr 30 03:37:34.207033 kubelet[2418]: W0430 03:37:34.206505 2418 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://135.181.100.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-9-5ae3ade3a2&limit=500&resourceVersion=0": dial tcp 135.181.100.111:6443: connect: connection refused Apr 30 03:37:34.207033 kubelet[2418]: E0430 03:37:34.206561 2418 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://135.181.100.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-9-5ae3ade3a2&limit=500&resourceVersion=0": dial tcp 135.181.100.111:6443: connect: connection refused Apr 30 03:37:34.207186 kubelet[2418]: I0430 03:37:34.207038 2418 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:37:34.210004 kubelet[2418]: I0430 03:37:34.209980 2418 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:37:34.210145 kubelet[2418]: W0430 03:37:34.210133 2418 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 30 03:37:34.211230 kubelet[2418]: I0430 03:37:34.211213 2418 server.go:1264] "Started kubelet" Apr 30 03:37:34.213152 kubelet[2418]: I0430 03:37:34.213019 2418 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:37:34.221534 kubelet[2418]: I0430 03:37:34.221506 2418 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:37:34.222641 kubelet[2418]: I0430 03:37:34.222601 2418 server.go:455] "Adding debug handlers to kubelet server" Apr 30 03:37:34.223654 kubelet[2418]: I0430 03:37:34.223580 2418 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:37:34.224092 kubelet[2418]: I0430 03:37:34.223883 2418 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:37:34.225388 kubelet[2418]: I0430 03:37:34.225022 2418 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 03:37:34.233387 kubelet[2418]: I0430 03:37:34.232202 2418 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:37:34.233387 kubelet[2418]: I0430 03:37:34.232217 2418 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:37:34.233387 kubelet[2418]: I0430 03:37:34.232278 2418 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:37:34.233387 kubelet[2418]: I0430 03:37:34.232316 2418 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:37:34.233387 kubelet[2418]: E0430 03:37:34.232555 2418 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://135.181.100.111:6443/api/v1/namespaces/default/events\": dial tcp 135.181.100.111:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-9-5ae3ade3a2.183afb6db7aee0f7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-9-5ae3ade3a2,UID:ci-4081-3-3-9-5ae3ade3a2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-9-5ae3ade3a2,},FirstTimestamp:2025-04-30 03:37:34.211186935 +0000 UTC m=+0.353450316,LastTimestamp:2025-04-30 03:37:34.211186935 +0000 UTC m=+0.353450316,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-9-5ae3ade3a2,}" Apr 30 03:37:34.233387 kubelet[2418]: E0430 03:37:34.232724 2418 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://135.181.100.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-9-5ae3ade3a2?timeout=10s\": dial tcp 135.181.100.111:6443: connect: connection refused" interval="200ms" Apr 30 03:37:34.233387 kubelet[2418]: W0430 03:37:34.233281 2418 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://135.181.100.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 135.181.100.111:6443: connect: connection refused Apr 30 03:37:34.233696 kubelet[2418]: E0430 03:37:34.233341 2418 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://135.181.100.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 135.181.100.111:6443: connect: connection refused Apr 30 03:37:34.233943 kubelet[2418]: E0430 03:37:34.233890 2418 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:37:34.235126 kubelet[2418]: I0430 03:37:34.235097 2418 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:37:34.243853 kubelet[2418]: I0430 03:37:34.243728 2418 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:37:34.244622 kubelet[2418]: I0430 03:37:34.244584 2418 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 03:37:34.244669 kubelet[2418]: I0430 03:37:34.244602 2418 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:37:34.245417 kubelet[2418]: I0430 03:37:34.244715 2418 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 03:37:34.245417 kubelet[2418]: E0430 03:37:34.244754 2418 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:37:34.253563 kubelet[2418]: W0430 03:37:34.253452 2418 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://135.181.100.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 135.181.100.111:6443: connect: connection refused Apr 30 03:37:34.253563 kubelet[2418]: E0430 03:37:34.253500 2418 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://135.181.100.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 135.181.100.111:6443: connect: connection refused Apr 30 03:37:34.261037 kubelet[2418]: I0430 03:37:34.260993 2418 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:37:34.261037 kubelet[2418]: I0430 03:37:34.261006 2418 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:37:34.261189 kubelet[2418]: I0430 03:37:34.261084 2418 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:37:34.263938 kubelet[2418]: I0430 03:37:34.263903 2418 policy_none.go:49] "None policy: Start" Apr 30 03:37:34.264415 kubelet[2418]: I0430 03:37:34.264392 2418 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:37:34.264415 kubelet[2418]: I0430 
03:37:34.264416 2418 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:37:34.269632 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 03:37:34.286226 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 03:37:34.290127 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 30 03:37:34.302399 kubelet[2418]: I0430 03:37:34.302379 2418 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:37:34.303290 kubelet[2418]: I0430 03:37:34.302933 2418 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:37:34.303290 kubelet[2418]: I0430 03:37:34.303050 2418 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:37:34.305293 kubelet[2418]: E0430 03:37:34.305255 2418 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-9-5ae3ade3a2\" not found" Apr 30 03:37:34.327578 kubelet[2418]: I0430 03:37:34.327539 2418 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:34.327997 kubelet[2418]: E0430 03:37:34.327945 2418 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://135.181.100.111:6443/api/v1/nodes\": dial tcp 135.181.100.111:6443: connect: connection refused" node="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:34.345725 kubelet[2418]: I0430 03:37:34.345672 2418 topology_manager.go:215] "Topology Admit Handler" podUID="56114d580898b5678668a645dd851985" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:34.348300 kubelet[2418]: I0430 03:37:34.348053 2418 topology_manager.go:215] "Topology Admit Handler" podUID="619419732145269b3097bbc25fa089f2" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:34.350135 kubelet[2418]: I0430 03:37:34.349794 2418 topology_manager.go:215] "Topology Admit Handler" podUID="4249a6206253d611720ded071f1747cc" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:34.357397 systemd[1]: Created slice kubepods-burstable-pod56114d580898b5678668a645dd851985.slice - libcontainer container kubepods-burstable-pod56114d580898b5678668a645dd851985.slice. Apr 30 03:37:34.373683 systemd[1]: Created slice kubepods-burstable-pod619419732145269b3097bbc25fa089f2.slice - libcontainer container kubepods-burstable-pod619419732145269b3097bbc25fa089f2.slice. Apr 30 03:37:34.379881 systemd[1]: Created slice kubepods-burstable-pod4249a6206253d611720ded071f1747cc.slice - libcontainer container kubepods-burstable-pod4249a6206253d611720ded071f1747cc.slice. 
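The three Topology Admit Handler entries and the kubepods-burstable-pod<UID>.slice cgroups that follow map one-to-one to static pod manifests read from the path logged earlier (/etc/kubernetes/manifests); for static pods the UID is derived from a hash of the manifest, which is why the same hex strings recur in the slice names and in the volume entries below. On a kubeadm control plane the directory conventionally looks like this (etcd.yaml is typical as well, though this excerpt only shows the other three being admitted):

    ls /etc/kubernetes/manifests
    # kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml  etcd.yaml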
Apr 30 03:37:34.433865 kubelet[2418]: E0430 03:37:34.433722 2418 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://135.181.100.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-9-5ae3ade3a2?timeout=10s\": dial tcp 135.181.100.111:6443: connect: connection refused" interval="400ms" Apr 30 03:37:34.532082 kubelet[2418]: I0430 03:37:34.531998 2418 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:34.532671 kubelet[2418]: E0430 03:37:34.532573 2418 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://135.181.100.111:6443/api/v1/nodes\": dial tcp 135.181.100.111:6443: connect: connection refused" node="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:34.533896 kubelet[2418]: I0430 03:37:34.533849 2418 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/56114d580898b5678668a645dd851985-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-9-5ae3ade3a2\" (UID: \"56114d580898b5678668a645dd851985\") " pod="kube-system/kube-apiserver-ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:34.533896 kubelet[2418]: I0430 03:37:34.533903 2418 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4249a6206253d611720ded071f1747cc-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-9-5ae3ade3a2\" (UID: \"4249a6206253d611720ded071f1747cc\") " pod="kube-system/kube-scheduler-ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:34.534258 kubelet[2418]: I0430 03:37:34.533931 2418 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/56114d580898b5678668a645dd851985-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-9-5ae3ade3a2\" (UID: \"56114d580898b5678668a645dd851985\") " pod="kube-system/kube-apiserver-ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:34.534258 kubelet[2418]: I0430 03:37:34.533961 2418 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/56114d580898b5678668a645dd851985-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-9-5ae3ade3a2\" (UID: \"56114d580898b5678668a645dd851985\") " pod="kube-system/kube-apiserver-ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:34.534258 kubelet[2418]: I0430 03:37:34.533990 2418 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/619419732145269b3097bbc25fa089f2-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-9-5ae3ade3a2\" (UID: \"619419732145269b3097bbc25fa089f2\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:34.534258 kubelet[2418]: I0430 03:37:34.534018 2418 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/619419732145269b3097bbc25fa089f2-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-9-5ae3ade3a2\" (UID: \"619419732145269b3097bbc25fa089f2\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:34.534258 kubelet[2418]: I0430 03:37:34.534049 2418 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/619419732145269b3097bbc25fa089f2-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-9-5ae3ade3a2\" (UID: \"619419732145269b3097bbc25fa089f2\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:34.534497 kubelet[2418]: I0430 03:37:34.534078 2418 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/619419732145269b3097bbc25fa089f2-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-9-5ae3ade3a2\" (UID: \"619419732145269b3097bbc25fa089f2\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:34.534497 kubelet[2418]: I0430 03:37:34.534106 2418 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/619419732145269b3097bbc25fa089f2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-9-5ae3ade3a2\" (UID: \"619419732145269b3097bbc25fa089f2\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:34.671237 containerd[1507]: time="2025-04-30T03:37:34.671136074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-9-5ae3ade3a2,Uid:56114d580898b5678668a645dd851985,Namespace:kube-system,Attempt:0,}" Apr 30 03:37:34.678165 containerd[1507]: time="2025-04-30T03:37:34.677850690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-9-5ae3ade3a2,Uid:619419732145269b3097bbc25fa089f2,Namespace:kube-system,Attempt:0,}" Apr 30 03:37:34.683411 containerd[1507]: time="2025-04-30T03:37:34.683360667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-9-5ae3ade3a2,Uid:4249a6206253d611720ded071f1747cc,Namespace:kube-system,Attempt:0,}" Apr 30 03:37:34.834659 kubelet[2418]: E0430 03:37:34.834535 2418 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://135.181.100.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-9-5ae3ade3a2?timeout=10s\": dial tcp 135.181.100.111:6443: connect: connection refused" interval="800ms" Apr 30 03:37:34.935776 kubelet[2418]: I0430 03:37:34.935718 2418 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:34.936415 kubelet[2418]: E0430 03:37:34.936102 2418 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://135.181.100.111:6443/api/v1/nodes\": dial tcp 135.181.100.111:6443: connect: connection refused" node="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:35.060427 kubelet[2418]: W0430 03:37:35.060356 2418 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://135.181.100.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 135.181.100.111:6443: connect: connection refused Apr 30 03:37:35.060427 kubelet[2418]: E0430 03:37:35.060419 2418 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://135.181.100.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 135.181.100.111:6443: connect: connection refused Apr 30 03:37:35.134996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2989596357.mount: Deactivated successfully. 
Apr 30 03:37:35.145142 containerd[1507]: time="2025-04-30T03:37:35.145073720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:37:35.146565 containerd[1507]: time="2025-04-30T03:37:35.146517777Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:37:35.148115 containerd[1507]: time="2025-04-30T03:37:35.148038428Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:37:35.148401 containerd[1507]: time="2025-04-30T03:37:35.148366903Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:37:35.148488 containerd[1507]: time="2025-04-30T03:37:35.148454897Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:37:35.149715 containerd[1507]: time="2025-04-30T03:37:35.149599784Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Apr 30 03:37:35.150338 containerd[1507]: time="2025-04-30T03:37:35.150245444Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:37:35.156648 containerd[1507]: time="2025-04-30T03:37:35.156566523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:37:35.158660 containerd[1507]: time="2025-04-30T03:37:35.158464581Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 487.196931ms" Apr 30 03:37:35.164353 containerd[1507]: time="2025-04-30T03:37:35.164208118Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 486.273371ms" Apr 30 03:37:35.169471 containerd[1507]: time="2025-04-30T03:37:35.169293651Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 485.678046ms" Apr 30 03:37:35.293085 kubelet[2418]: W0430 03:37:35.292981 2418 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://135.181.100.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-9-5ae3ade3a2&limit=500&resourceVersion=0": dial tcp 135.181.100.111:6443: connect: connection 
refused Apr 30 03:37:35.293085 kubelet[2418]: E0430 03:37:35.293044 2418 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://135.181.100.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-9-5ae3ade3a2&limit=500&resourceVersion=0": dial tcp 135.181.100.111:6443: connect: connection refused Apr 30 03:37:35.346905 containerd[1507]: time="2025-04-30T03:37:35.346747926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:37:35.347102 containerd[1507]: time="2025-04-30T03:37:35.346939235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:37:35.347136 containerd[1507]: time="2025-04-30T03:37:35.347050343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:37:35.347696 containerd[1507]: time="2025-04-30T03:37:35.347389529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:37:35.356487 kubelet[2418]: W0430 03:37:35.356404 2418 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://135.181.100.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 135.181.100.111:6443: connect: connection refused Apr 30 03:37:35.356632 kubelet[2418]: E0430 03:37:35.356516 2418 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://135.181.100.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 135.181.100.111:6443: connect: connection refused Apr 30 03:37:35.357509 containerd[1507]: time="2025-04-30T03:37:35.357130629Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:37:35.357509 containerd[1507]: time="2025-04-30T03:37:35.357180151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:37:35.357509 containerd[1507]: time="2025-04-30T03:37:35.357190431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:37:35.357509 containerd[1507]: time="2025-04-30T03:37:35.357249852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:37:35.357687 containerd[1507]: time="2025-04-30T03:37:35.357436372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:37:35.357687 containerd[1507]: time="2025-04-30T03:37:35.357511202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:37:35.357687 containerd[1507]: time="2025-04-30T03:37:35.357547149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:37:35.357904 containerd[1507]: time="2025-04-30T03:37:35.357646215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:37:35.382015 systemd[1]: Started cri-containerd-fb9d5bec967f23c0b2a5f214a3a0f00182218747014d7ac6bb936591bb1a3208.scope - libcontainer container fb9d5bec967f23c0b2a5f214a3a0f00182218747014d7ac6bb936591bb1a3208. Apr 30 03:37:35.392399 systemd[1]: Started cri-containerd-a0b9c1a32aaf0ddabec60cac7e90c34361577771f36b00a22388eeea11c157fe.scope - libcontainer container a0b9c1a32aaf0ddabec60cac7e90c34361577771f36b00a22388eeea11c157fe. Apr 30 03:37:35.394657 systemd[1]: Started cri-containerd-a9769a6bd46b8f324660fc27e3840ede8e9fbbe90711c10f960f91daf4d062a2.scope - libcontainer container a9769a6bd46b8f324660fc27e3840ede8e9fbbe90711c10f960f91daf4d062a2. Apr 30 03:37:35.438905 containerd[1507]: time="2025-04-30T03:37:35.438520518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-9-5ae3ade3a2,Uid:619419732145269b3097bbc25fa089f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb9d5bec967f23c0b2a5f214a3a0f00182218747014d7ac6bb936591bb1a3208\"" Apr 30 03:37:35.446110 containerd[1507]: time="2025-04-30T03:37:35.445930017Z" level=info msg="CreateContainer within sandbox \"fb9d5bec967f23c0b2a5f214a3a0f00182218747014d7ac6bb936591bb1a3208\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 03:37:35.459755 containerd[1507]: time="2025-04-30T03:37:35.459721991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-9-5ae3ade3a2,Uid:56114d580898b5678668a645dd851985,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9769a6bd46b8f324660fc27e3840ede8e9fbbe90711c10f960f91daf4d062a2\"" Apr 30 03:37:35.463002 containerd[1507]: time="2025-04-30T03:37:35.462897322Z" level=info msg="CreateContainer within sandbox \"a9769a6bd46b8f324660fc27e3840ede8e9fbbe90711c10f960f91daf4d062a2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 03:37:35.464812 containerd[1507]: time="2025-04-30T03:37:35.464793346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-9-5ae3ade3a2,Uid:4249a6206253d611720ded071f1747cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0b9c1a32aaf0ddabec60cac7e90c34361577771f36b00a22388eeea11c157fe\"" Apr 30 03:37:35.467291 containerd[1507]: time="2025-04-30T03:37:35.467264470Z" level=info msg="CreateContainer within sandbox \"a0b9c1a32aaf0ddabec60cac7e90c34361577771f36b00a22388eeea11c157fe\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 03:37:35.470897 containerd[1507]: time="2025-04-30T03:37:35.470874806Z" level=info msg="CreateContainer within sandbox \"fb9d5bec967f23c0b2a5f214a3a0f00182218747014d7ac6bb936591bb1a3208\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e9565a607a67ba3a3d91f1b912fde52c6716df11c3675b58e94920f7f3aeb65e\"" Apr 30 03:37:35.471631 containerd[1507]: time="2025-04-30T03:37:35.471594276Z" level=info msg="StartContainer for \"e9565a607a67ba3a3d91f1b912fde52c6716df11c3675b58e94920f7f3aeb65e\"" Apr 30 03:37:35.480640 containerd[1507]: time="2025-04-30T03:37:35.480272413Z" level=info msg="CreateContainer within sandbox \"a9769a6bd46b8f324660fc27e3840ede8e9fbbe90711c10f960f91daf4d062a2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b14d492d07cec2ac576d44ab12e81818a9d995eb1456dc6083de2ab062cd527a\"" Apr 30 03:37:35.481137 containerd[1507]: time="2025-04-30T03:37:35.481122776Z" level=info msg="StartContainer for 
\"b14d492d07cec2ac576d44ab12e81818a9d995eb1456dc6083de2ab062cd527a\"" Apr 30 03:37:35.489083 containerd[1507]: time="2025-04-30T03:37:35.489041521Z" level=info msg="CreateContainer within sandbox \"a0b9c1a32aaf0ddabec60cac7e90c34361577771f36b00a22388eeea11c157fe\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bc39dc4cb85182d9764018df43f6882ca98de1ccb4c58f271c742bc057b48431\"" Apr 30 03:37:35.489582 containerd[1507]: time="2025-04-30T03:37:35.489567597Z" level=info msg="StartContainer for \"bc39dc4cb85182d9764018df43f6882ca98de1ccb4c58f271c742bc057b48431\"" Apr 30 03:37:35.500805 systemd[1]: Started cri-containerd-e9565a607a67ba3a3d91f1b912fde52c6716df11c3675b58e94920f7f3aeb65e.scope - libcontainer container e9565a607a67ba3a3d91f1b912fde52c6716df11c3675b58e94920f7f3aeb65e. Apr 30 03:37:35.510537 systemd[1]: Started cri-containerd-b14d492d07cec2ac576d44ab12e81818a9d995eb1456dc6083de2ab062cd527a.scope - libcontainer container b14d492d07cec2ac576d44ab12e81818a9d995eb1456dc6083de2ab062cd527a. Apr 30 03:37:35.523686 systemd[1]: Started cri-containerd-bc39dc4cb85182d9764018df43f6882ca98de1ccb4c58f271c742bc057b48431.scope - libcontainer container bc39dc4cb85182d9764018df43f6882ca98de1ccb4c58f271c742bc057b48431. Apr 30 03:37:35.554045 containerd[1507]: time="2025-04-30T03:37:35.554011760Z" level=info msg="StartContainer for \"e9565a607a67ba3a3d91f1b912fde52c6716df11c3675b58e94920f7f3aeb65e\" returns successfully" Apr 30 03:37:35.574272 containerd[1507]: time="2025-04-30T03:37:35.574191538Z" level=info msg="StartContainer for \"b14d492d07cec2ac576d44ab12e81818a9d995eb1456dc6083de2ab062cd527a\" returns successfully" Apr 30 03:37:35.579623 containerd[1507]: time="2025-04-30T03:37:35.579586020Z" level=info msg="StartContainer for \"bc39dc4cb85182d9764018df43f6882ca98de1ccb4c58f271c742bc057b48431\" returns successfully" Apr 30 03:37:35.636775 kubelet[2418]: E0430 03:37:35.636722 2418 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://135.181.100.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-9-5ae3ade3a2?timeout=10s\": dial tcp 135.181.100.111:6443: connect: connection refused" interval="1.6s" Apr 30 03:37:35.738465 kubelet[2418]: I0430 03:37:35.738373 2418 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:35.738972 kubelet[2418]: E0430 03:37:35.738641 2418 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://135.181.100.111:6443/api/v1/nodes\": dial tcp 135.181.100.111:6443: connect: connection refused" node="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:35.763090 kubelet[2418]: W0430 03:37:35.763045 2418 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://135.181.100.111:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 135.181.100.111:6443: connect: connection refused Apr 30 03:37:35.763143 kubelet[2418]: E0430 03:37:35.763098 2418 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://135.181.100.111:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 135.181.100.111:6443: connect: connection refused Apr 30 03:37:37.207004 kubelet[2418]: I0430 03:37:37.206762 2418 apiserver.go:52] "Watching apiserver" Apr 30 03:37:37.233109 kubelet[2418]: I0430 03:37:37.233024 2418 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:37:37.241513 
kubelet[2418]: E0430 03:37:37.241457 2418 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-3-9-5ae3ade3a2\" not found" node="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:37.342184 kubelet[2418]: I0430 03:37:37.342114 2418 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:37.358884 kubelet[2418]: I0430 03:37:37.358344 2418 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:39.328470 systemd[1]: Reloading requested from client PID 2691 ('systemctl') (unit session-7.scope)... Apr 30 03:37:39.328496 systemd[1]: Reloading... Apr 30 03:37:39.450660 zram_generator::config[2728]: No configuration found. Apr 30 03:37:39.564943 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:37:39.652859 systemd[1]: Reloading finished in 323 ms. Apr 30 03:37:39.687450 kubelet[2418]: E0430 03:37:39.687259 2418 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4081-3-3-9-5ae3ade3a2.183afb6db7aee0f7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-9-5ae3ade3a2,UID:ci-4081-3-3-9-5ae3ade3a2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-9-5ae3ade3a2,},FirstTimestamp:2025-04-30 03:37:34.211186935 +0000 UTC m=+0.353450316,LastTimestamp:2025-04-30 03:37:34.211186935 +0000 UTC m=+0.353450316,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-9-5ae3ade3a2,}" Apr 30 03:37:39.687450 kubelet[2418]: I0430 03:37:39.687362 2418 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:37:39.687811 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:37:39.696660 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 03:37:39.696880 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:37:39.701212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:37:39.814535 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:37:39.824167 (kubelet)[2782]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:37:39.879007 kubelet[2782]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:37:39.879007 kubelet[2782]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 03:37:39.879007 kubelet[2782]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
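The relaunched kubelet (PID 2782) repeats the same three deprecation warnings as the first instance: --container-runtime-endpoint and --volume-plugin-dir are meant to move into the config file, while --pod-infra-container-image is headed for removal since the image garbage collector now learns the sandbox image from the CRI. The config-file equivalents, sketched with the flexvolume path the log itself prints and an assumed conventional containerd socket:

    # Illustrative v1beta1 KubeletConfiguration fields replacing the flags:
    cat <<'EOF' >> /var/lib/kubelet/config.yaml
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF
    # --pod-infra-container-image has no config-file equivalent; per the warning,
    # the sandbox image now comes from the runtime itself.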
Apr 30 03:37:39.880660 kubelet[2782]: I0430 03:37:39.879513 2782 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:37:39.884245 kubelet[2782]: I0430 03:37:39.884205 2782 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 03:37:39.884245 kubelet[2782]: I0430 03:37:39.884234 2782 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:37:39.884449 kubelet[2782]: I0430 03:37:39.884429 2782 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 03:37:39.885654 kubelet[2782]: I0430 03:37:39.885630 2782 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 03:37:39.887123 kubelet[2782]: I0430 03:37:39.886652 2782 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:37:39.897445 kubelet[2782]: I0430 03:37:39.897410 2782 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 03:37:39.897676 kubelet[2782]: I0430 03:37:39.897636 2782 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:37:39.897874 kubelet[2782]: I0430 03:37:39.897670 2782 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-9-5ae3ade3a2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 03:37:39.897956 kubelet[2782]: I0430 03:37:39.897879 2782 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 03:37:39.897956 kubelet[2782]: I0430 03:37:39.897888 2782 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 03:37:39.897956 kubelet[2782]: I0430 03:37:39.897922 2782 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:37:39.898012 kubelet[2782]: I0430 03:37:39.898000 2782 kubelet.go:400] "Attempting to sync node with API server" Apr 30 03:37:39.898012 kubelet[2782]: I0430 03:37:39.898011 2782 kubelet.go:301] "Adding 
static pod path" path="/etc/kubernetes/manifests" Apr 30 03:37:39.898052 kubelet[2782]: I0430 03:37:39.898029 2782 kubelet.go:312] "Adding apiserver pod source" Apr 30 03:37:39.898052 kubelet[2782]: I0430 03:37:39.898043 2782 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:37:39.902360 kubelet[2782]: I0430 03:37:39.902275 2782 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:37:39.902896 kubelet[2782]: I0430 03:37:39.902455 2782 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:37:39.906965 kubelet[2782]: I0430 03:37:39.906072 2782 server.go:1264] "Started kubelet" Apr 30 03:37:39.910803 kubelet[2782]: I0430 03:37:39.910786 2782 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:37:39.915078 kubelet[2782]: E0430 03:37:39.915057 2782 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:37:39.918651 kubelet[2782]: I0430 03:37:39.917593 2782 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:37:39.918651 kubelet[2782]: I0430 03:37:39.918074 2782 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 03:37:39.918651 kubelet[2782]: I0430 03:37:39.918393 2782 server.go:455] "Adding debug handlers to kubelet server" Apr 30 03:37:39.919286 kubelet[2782]: I0430 03:37:39.919244 2782 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:37:39.919504 kubelet[2782]: I0430 03:37:39.919493 2782 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:37:39.919562 kubelet[2782]: I0430 03:37:39.919287 2782 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:37:39.920496 kubelet[2782]: I0430 03:37:39.920471 2782 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:37:39.921303 kubelet[2782]: I0430 03:37:39.920799 2782 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:37:39.921303 kubelet[2782]: I0430 03:37:39.920939 2782 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:37:39.926913 kubelet[2782]: I0430 03:37:39.926881 2782 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:37:39.929416 kubelet[2782]: I0430 03:37:39.929376 2782 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:37:39.931189 kubelet[2782]: I0430 03:37:39.931162 2782 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 03:37:39.931237 kubelet[2782]: I0430 03:37:39.931194 2782 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:37:39.931237 kubelet[2782]: I0430 03:37:39.931211 2782 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 03:37:39.931275 kubelet[2782]: E0430 03:37:39.931247 2782 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:37:39.976406 kubelet[2782]: I0430 03:37:39.976370 2782 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:37:39.976406 kubelet[2782]: I0430 03:37:39.976391 2782 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:37:39.976406 kubelet[2782]: I0430 03:37:39.976408 2782 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:37:39.976667 kubelet[2782]: I0430 03:37:39.976542 2782 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 03:37:39.976667 kubelet[2782]: I0430 03:37:39.976550 2782 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 03:37:39.976667 kubelet[2782]: I0430 03:37:39.976568 2782 policy_none.go:49] "None policy: Start" Apr 30 03:37:39.977335 kubelet[2782]: I0430 03:37:39.977307 2782 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:37:39.977335 kubelet[2782]: I0430 03:37:39.977327 2782 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:37:39.977459 kubelet[2782]: I0430 03:37:39.977434 2782 state_mem.go:75] "Updated machine memory state" Apr 30 03:37:39.983166 kubelet[2782]: I0430 03:37:39.983141 2782 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:37:39.983326 kubelet[2782]: I0430 03:37:39.983279 2782 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:37:39.983388 kubelet[2782]: I0430 03:37:39.983368 2782 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:37:40.024555 kubelet[2782]: I0430 03:37:40.024527 2782 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:40.032094 kubelet[2782]: I0430 03:37:40.032011 2782 topology_manager.go:215] "Topology Admit Handler" podUID="56114d580898b5678668a645dd851985" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:40.032219 kubelet[2782]: I0430 03:37:40.032128 2782 topology_manager.go:215] "Topology Admit Handler" podUID="619419732145269b3097bbc25fa089f2" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:40.032219 kubelet[2782]: I0430 03:37:40.032197 2782 topology_manager.go:215] "Topology Admit Handler" podUID="4249a6206253d611720ded071f1747cc" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:40.042443 kubelet[2782]: I0430 03:37:40.041602 2782 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:40.042443 kubelet[2782]: I0430 03:37:40.041699 2782 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:40.222239 kubelet[2782]: I0430 03:37:40.222053 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/619419732145269b3097bbc25fa089f2-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4081-3-3-9-5ae3ade3a2\" (UID: \"619419732145269b3097bbc25fa089f2\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:40.222239 kubelet[2782]: I0430 03:37:40.222189 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/619419732145269b3097bbc25fa089f2-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-9-5ae3ade3a2\" (UID: \"619419732145269b3097bbc25fa089f2\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:40.222466 kubelet[2782]: I0430 03:37:40.222305 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/56114d580898b5678668a645dd851985-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-9-5ae3ade3a2\" (UID: \"56114d580898b5678668a645dd851985\") " pod="kube-system/kube-apiserver-ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:40.222466 kubelet[2782]: I0430 03:37:40.222350 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/56114d580898b5678668a645dd851985-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-9-5ae3ade3a2\" (UID: \"56114d580898b5678668a645dd851985\") " pod="kube-system/kube-apiserver-ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:40.222466 kubelet[2782]: I0430 03:37:40.222403 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/619419732145269b3097bbc25fa089f2-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-9-5ae3ade3a2\" (UID: \"619419732145269b3097bbc25fa089f2\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:40.222466 kubelet[2782]: I0430 03:37:40.222432 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/619419732145269b3097bbc25fa089f2-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-9-5ae3ade3a2\" (UID: \"619419732145269b3097bbc25fa089f2\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:40.222679 kubelet[2782]: I0430 03:37:40.222471 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/619419732145269b3097bbc25fa089f2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-9-5ae3ade3a2\" (UID: \"619419732145269b3097bbc25fa089f2\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:40.222679 kubelet[2782]: I0430 03:37:40.222503 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4249a6206253d611720ded071f1747cc-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-9-5ae3ade3a2\" (UID: \"4249a6206253d611720ded071f1747cc\") " pod="kube-system/kube-scheduler-ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:40.222679 kubelet[2782]: I0430 03:37:40.222530 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/56114d580898b5678668a645dd851985-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-9-5ae3ade3a2\" (UID: \"56114d580898b5678668a645dd851985\") " 
pod="kube-system/kube-apiserver-ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:37:40.899349 kubelet[2782]: I0430 03:37:40.899246 2782 apiserver.go:52] "Watching apiserver" Apr 30 03:37:40.920692 kubelet[2782]: I0430 03:37:40.920587 2782 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:37:41.006583 kubelet[2782]: I0430 03:37:41.006477 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-9-5ae3ade3a2" podStartSLOduration=1.00645756 podStartE2EDuration="1.00645756s" podCreationTimestamp="2025-04-30 03:37:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:37:40.999375358 +0000 UTC m=+1.169907010" watchObservedRunningTime="2025-04-30 03:37:41.00645756 +0000 UTC m=+1.176989201" Apr 30 03:37:41.014738 kubelet[2782]: I0430 03:37:41.014589 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-9-5ae3ade3a2" podStartSLOduration=1.014575605 podStartE2EDuration="1.014575605s" podCreationTimestamp="2025-04-30 03:37:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:37:41.006908145 +0000 UTC m=+1.177439786" watchObservedRunningTime="2025-04-30 03:37:41.014575605 +0000 UTC m=+1.185107246" Apr 30 03:37:45.546324 sudo[1894]: pam_unix(sudo:session): session closed for user root Apr 30 03:37:45.705894 sshd[1875]: pam_unix(sshd:session): session closed for user core Apr 30 03:37:45.710971 systemd[1]: sshd@6-135.181.100.111:22-139.178.68.195:52256.service: Deactivated successfully. Apr 30 03:37:45.713492 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 03:37:45.713874 systemd[1]: session-7.scope: Consumed 5.631s CPU time, 189.3M memory peak, 0B memory swap peak. Apr 30 03:37:45.714861 systemd-logind[1483]: Session 7 logged out. Waiting for processes to exit. Apr 30 03:37:45.716601 systemd-logind[1483]: Removed session 7. Apr 30 03:37:47.843727 kubelet[2782]: I0430 03:37:47.843380 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-9-5ae3ade3a2" podStartSLOduration=7.843354188 podStartE2EDuration="7.843354188s" podCreationTimestamp="2025-04-30 03:37:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:37:41.014769589 +0000 UTC m=+1.185301241" watchObservedRunningTime="2025-04-30 03:37:47.843354188 +0000 UTC m=+8.013885870" Apr 30 03:37:55.030240 kubelet[2782]: I0430 03:37:55.030096 2782 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 03:37:55.032775 containerd[1507]: time="2025-04-30T03:37:55.030790344Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 30 03:37:55.032961 kubelet[2782]: I0430 03:37:55.032554 2782 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 03:37:55.098079 kubelet[2782]: I0430 03:37:55.098034 2782 topology_manager.go:215] "Topology Admit Handler" podUID="949f0373-8fc8-4730-927b-7b5ade2aa2a2" podNamespace="kube-system" podName="kube-proxy-k8rhl" Apr 30 03:37:55.105989 systemd[1]: Created slice kubepods-besteffort-pod949f0373_8fc8_4730_927b_7b5ade2aa2a2.slice - libcontainer container kubepods-besteffort-pod949f0373_8fc8_4730_927b_7b5ade2aa2a2.slice. Apr 30 03:37:55.130074 kubelet[2782]: I0430 03:37:55.129936 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/949f0373-8fc8-4730-927b-7b5ade2aa2a2-kube-proxy\") pod \"kube-proxy-k8rhl\" (UID: \"949f0373-8fc8-4730-927b-7b5ade2aa2a2\") " pod="kube-system/kube-proxy-k8rhl" Apr 30 03:37:55.130074 kubelet[2782]: I0430 03:37:55.129971 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/949f0373-8fc8-4730-927b-7b5ade2aa2a2-xtables-lock\") pod \"kube-proxy-k8rhl\" (UID: \"949f0373-8fc8-4730-927b-7b5ade2aa2a2\") " pod="kube-system/kube-proxy-k8rhl" Apr 30 03:37:55.130074 kubelet[2782]: I0430 03:37:55.129986 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl4hf\" (UniqueName: \"kubernetes.io/projected/949f0373-8fc8-4730-927b-7b5ade2aa2a2-kube-api-access-fl4hf\") pod \"kube-proxy-k8rhl\" (UID: \"949f0373-8fc8-4730-927b-7b5ade2aa2a2\") " pod="kube-system/kube-proxy-k8rhl" Apr 30 03:37:55.130074 kubelet[2782]: I0430 03:37:55.130000 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/949f0373-8fc8-4730-927b-7b5ade2aa2a2-lib-modules\") pod \"kube-proxy-k8rhl\" (UID: \"949f0373-8fc8-4730-927b-7b5ade2aa2a2\") " pod="kube-system/kube-proxy-k8rhl" Apr 30 03:37:55.205517 kubelet[2782]: I0430 03:37:55.205405 2782 topology_manager.go:215] "Topology Admit Handler" podUID="75fc7912-3bab-4151-8543-46293ee86019" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-p6drq" Apr 30 03:37:55.211347 systemd[1]: Created slice kubepods-besteffort-pod75fc7912_3bab_4151_8543_46293ee86019.slice - libcontainer container kubepods-besteffort-pod75fc7912_3bab_4151_8543_46293ee86019.slice. 
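The slice names above follow directly from pod QoS: the besteffort tier in the name says kube-proxy-k8rhl carries no resource requests or limits, and with the systemd cgroup driver (CgroupDriver "systemd" in the container-manager dump earlier) the kubelet asks systemd for a slice built from the QoS tier plus the pod UID with dashes escaped to underscores:

    pod UID   949f0373-8fc8-4730-927b-7b5ade2aa2a2
    slice     kubepods-besteffort-pod949f0373_8fc8_4730_927b_7b5ade2aa2a2.slice
    cgroup    /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod949f0373_8fc8_4730_927b_7b5ade2aa2a2.slice  (path shown for illustration)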
Apr 30 03:37:55.231280 kubelet[2782]: I0430 03:37:55.231066 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q2v7\" (UniqueName: \"kubernetes.io/projected/75fc7912-3bab-4151-8543-46293ee86019-kube-api-access-4q2v7\") pod \"tigera-operator-797db67f8-p6drq\" (UID: \"75fc7912-3bab-4151-8543-46293ee86019\") " pod="tigera-operator/tigera-operator-797db67f8-p6drq" Apr 30 03:37:55.231280 kubelet[2782]: I0430 03:37:55.231103 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/75fc7912-3bab-4151-8543-46293ee86019-var-lib-calico\") pod \"tigera-operator-797db67f8-p6drq\" (UID: \"75fc7912-3bab-4151-8543-46293ee86019\") " pod="tigera-operator/tigera-operator-797db67f8-p6drq" Apr 30 03:37:55.416899 containerd[1507]: time="2025-04-30T03:37:55.416692397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k8rhl,Uid:949f0373-8fc8-4730-927b-7b5ade2aa2a2,Namespace:kube-system,Attempt:0,}" Apr 30 03:37:55.460470 containerd[1507]: time="2025-04-30T03:37:55.460285567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:37:55.461122 containerd[1507]: time="2025-04-30T03:37:55.460382279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:37:55.461122 containerd[1507]: time="2025-04-30T03:37:55.460791788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:37:55.461122 containerd[1507]: time="2025-04-30T03:37:55.460981223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:37:55.497885 systemd[1]: Started cri-containerd-b794ea7a7739949201e804d0ccbf5a65882b73897e9642d660ec561b20a9e2d4.scope - libcontainer container b794ea7a7739949201e804d0ccbf5a65882b73897e9642d660ec561b20a9e2d4. Apr 30 03:37:55.517095 containerd[1507]: time="2025-04-30T03:37:55.517024832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-p6drq,Uid:75fc7912-3bab-4151-8543-46293ee86019,Namespace:tigera-operator,Attempt:0,}" Apr 30 03:37:55.538414 containerd[1507]: time="2025-04-30T03:37:55.538248286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k8rhl,Uid:949f0373-8fc8-4730-927b-7b5ade2aa2a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b794ea7a7739949201e804d0ccbf5a65882b73897e9642d660ec561b20a9e2d4\"" Apr 30 03:37:55.542872 containerd[1507]: time="2025-04-30T03:37:55.542728516Z" level=info msg="CreateContainer within sandbox \"b794ea7a7739949201e804d0ccbf5a65882b73897e9642d660ec561b20a9e2d4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 03:37:55.570386 containerd[1507]: time="2025-04-30T03:37:55.559766083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:37:55.570386 containerd[1507]: time="2025-04-30T03:37:55.559866172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:37:55.570386 containerd[1507]: time="2025-04-30T03:37:55.559894966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:37:55.570386 containerd[1507]: time="2025-04-30T03:37:55.560104469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:37:55.580904 containerd[1507]: time="2025-04-30T03:37:55.578697416Z" level=info msg="CreateContainer within sandbox \"b794ea7a7739949201e804d0ccbf5a65882b73897e9642d660ec561b20a9e2d4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5c90b9c904b3b0be377ccfb0d161be99d51d8beb5ef3db3cd2af9cfa40cbaa49\"" Apr 30 03:37:55.580904 containerd[1507]: time="2025-04-30T03:37:55.579697604Z" level=info msg="StartContainer for \"5c90b9c904b3b0be377ccfb0d161be99d51d8beb5ef3db3cd2af9cfa40cbaa49\"" Apr 30 03:37:55.583433 systemd[1]: Started cri-containerd-096f471966e2374861ae0ce4d90ab4fdf014ab4960a09db38ead98dd780ec26e.scope - libcontainer container 096f471966e2374861ae0ce4d90ab4fdf014ab4960a09db38ead98dd780ec26e. Apr 30 03:37:55.602732 systemd[1]: Started cri-containerd-5c90b9c904b3b0be377ccfb0d161be99d51d8beb5ef3db3cd2af9cfa40cbaa49.scope - libcontainer container 5c90b9c904b3b0be377ccfb0d161be99d51d8beb5ef3db3cd2af9cfa40cbaa49. Apr 30 03:37:55.623019 containerd[1507]: time="2025-04-30T03:37:55.622884442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-p6drq,Uid:75fc7912-3bab-4151-8543-46293ee86019,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"096f471966e2374861ae0ce4d90ab4fdf014ab4960a09db38ead98dd780ec26e\"" Apr 30 03:37:55.628396 containerd[1507]: time="2025-04-30T03:37:55.628353897Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" Apr 30 03:37:55.637042 containerd[1507]: time="2025-04-30T03:37:55.636866885Z" level=info msg="StartContainer for \"5c90b9c904b3b0be377ccfb0d161be99d51d8beb5ef3db3cd2af9cfa40cbaa49\" returns successfully" Apr 30 03:37:56.007994 kubelet[2782]: I0430 03:37:56.007383 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k8rhl" podStartSLOduration=1.007359407 podStartE2EDuration="1.007359407s" podCreationTimestamp="2025-04-30 03:37:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:37:56.007011634 +0000 UTC m=+16.177543315" watchObservedRunningTime="2025-04-30 03:37:56.007359407 +0000 UTC m=+16.177891089" Apr 30 03:38:02.736464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2981469501.mount: Deactivated successfully. 
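At this point the sandboxes and containers above are all visible to crictl, which is the quickest way to cross-check these containerd IDs against pod names (flags as in current crictl; exact output columns vary by version):

    # list pod sandboxes by name
    crictl pods --name kube-proxy-k8rhl
    # list containers inside the sandbox created above
    crictl ps --pod b794ea7a7739949201e804d0ccbf5a65882b73897e9642d660ec561b20a9e2d4
    # follow logs of the kube-proxy container started above
    crictl logs -f 5c90b9c904b3b0be377ccfb0d161be99d51d8beb5ef3db3cd2af9cfa40cbaa49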
Apr 30 03:38:03.118950 containerd[1507]: time="2025-04-30T03:38:03.118892896Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:03.120123 containerd[1507]: time="2025-04-30T03:38:03.120074826Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" Apr 30 03:38:03.121212 containerd[1507]: time="2025-04-30T03:38:03.121173929Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:03.123329 containerd[1507]: time="2025-04-30T03:38:03.123309739Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:03.124404 containerd[1507]: time="2025-04-30T03:38:03.123924053Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 7.495527717s" Apr 30 03:38:03.124404 containerd[1507]: time="2025-04-30T03:38:03.123962114Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" Apr 30 03:38:03.133306 containerd[1507]: time="2025-04-30T03:38:03.133250372Z" level=info msg="CreateContainer within sandbox \"096f471966e2374861ae0ce4d90ab4fdf014ab4960a09db38ead98dd780ec26e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 30 03:38:03.146477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount243333519.mount: Deactivated successfully. Apr 30 03:38:03.148324 containerd[1507]: time="2025-04-30T03:38:03.148267167Z" level=info msg="CreateContainer within sandbox \"096f471966e2374861ae0ce4d90ab4fdf014ab4960a09db38ead98dd780ec26e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4605fd658956c719840a86f282b3e5b8e1cf2cef06b8167b9c3c82ed63c697b4\"" Apr 30 03:38:03.153262 containerd[1507]: time="2025-04-30T03:38:03.153235736Z" level=info msg="StartContainer for \"4605fd658956c719840a86f282b3e5b8e1cf2cef06b8167b9c3c82ed63c697b4\"" Apr 30 03:38:03.183781 systemd[1]: Started cri-containerd-4605fd658956c719840a86f282b3e5b8e1cf2cef06b8167b9c3c82ed63c697b4.scope - libcontainer container 4605fd658956c719840a86f282b3e5b8e1cf2cef06b8167b9c3c82ed63c697b4. 
Apr 30 03:38:03.212575 containerd[1507]: time="2025-04-30T03:38:03.212509663Z" level=info msg="StartContainer for \"4605fd658956c719840a86f282b3e5b8e1cf2cef06b8167b9c3c82ed63c697b4\" returns successfully" Apr 30 03:38:06.177885 kubelet[2782]: I0430 03:38:06.177746 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-p6drq" podStartSLOduration=3.6741627 podStartE2EDuration="11.17772051s" podCreationTimestamp="2025-04-30 03:37:55 +0000 UTC" firstStartedPulling="2025-04-30 03:37:55.624170144 +0000 UTC m=+15.794701787" lastFinishedPulling="2025-04-30 03:38:03.127727955 +0000 UTC m=+23.298259597" observedRunningTime="2025-04-30 03:38:04.049577408 +0000 UTC m=+24.220109090" watchObservedRunningTime="2025-04-30 03:38:06.17772051 +0000 UTC m=+26.348252171" Apr 30 03:38:06.178498 kubelet[2782]: I0430 03:38:06.178379 2782 topology_manager.go:215] "Topology Admit Handler" podUID="ac7ef1dc-42f9-4272-8621-f14693d6c324" podNamespace="calico-system" podName="calico-typha-9dff8546c-chpw8" Apr 30 03:38:06.204134 systemd[1]: Created slice kubepods-besteffort-podac7ef1dc_42f9_4272_8621_f14693d6c324.slice - libcontainer container kubepods-besteffort-podac7ef1dc_42f9_4272_8621_f14693d6c324.slice. Apr 30 03:38:06.265811 kubelet[2782]: I0430 03:38:06.263813 2782 topology_manager.go:215] "Topology Admit Handler" podUID="bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02" podNamespace="calico-system" podName="calico-node-958wz" Apr 30 03:38:06.270471 kubelet[2782]: W0430 03:38:06.270337 2782 reflector.go:547] object-"calico-system"/"cni-config": failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:ci-4081-3-3-9-5ae3ade3a2" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-3-9-5ae3ade3a2' and this object Apr 30 03:38:06.270471 kubelet[2782]: E0430 03:38:06.270380 2782 reflector.go:150] object-"calico-system"/"cni-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:ci-4081-3-3-9-5ae3ade3a2" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-3-9-5ae3ade3a2' and this object Apr 30 03:38:06.271239 kubelet[2782]: W0430 03:38:06.271206 2782 reflector.go:547] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ci-4081-3-3-9-5ae3ade3a2" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-3-9-5ae3ade3a2' and this object Apr 30 03:38:06.271239 kubelet[2782]: E0430 03:38:06.271231 2782 reflector.go:150] object-"calico-system"/"node-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ci-4081-3-3-9-5ae3ade3a2" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-3-9-5ae3ade3a2' and this object Apr 30 03:38:06.272136 systemd[1]: Created slice kubepods-besteffort-podbfa7f9ba_cc22_4311_91e3_5a0a4bb7af02.slice - libcontainer container kubepods-besteffort-podbfa7f9ba_cc22_4311_91e3_5a0a4bb7af02.slice. 
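The pod_startup_latency_tracker numbers above are internally consistent: the SLO duration is the end-to-end startup time minus the image-pull window. For tigera-operator-797db67f8-p6drq:

    pull window = lastFinishedPulling - firstStartedPulling
                = 03:38:03.127727955 - 03:37:55.624170144  = 7.503557811s
    SLO         = podStartE2EDuration - pull window
                = 11.177720510s - 7.503557811s             = 3.674162699s  (logged as 3.6741627)

For the static control-plane pods earlier, firstStartedPulling and lastFinishedPulling are the zero time, so the SLO duration simply equals the E2E duration, which also matches the logged values.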
Apr 30 03:38:06.339988 kubelet[2782]: I0430 03:38:06.339930 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-cni-net-dir\") pod \"calico-node-958wz\" (UID: \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\") " pod="calico-system/calico-node-958wz" Apr 30 03:38:06.340139 kubelet[2782]: I0430 03:38:06.340001 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-flexvol-driver-host\") pod \"calico-node-958wz\" (UID: \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\") " pod="calico-system/calico-node-958wz" Apr 30 03:38:06.340139 kubelet[2782]: I0430 03:38:06.340029 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk2xz\" (UniqueName: \"kubernetes.io/projected/ac7ef1dc-42f9-4272-8621-f14693d6c324-kube-api-access-sk2xz\") pod \"calico-typha-9dff8546c-chpw8\" (UID: \"ac7ef1dc-42f9-4272-8621-f14693d6c324\") " pod="calico-system/calico-typha-9dff8546c-chpw8" Apr 30 03:38:06.340139 kubelet[2782]: I0430 03:38:06.340055 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-lib-modules\") pod \"calico-node-958wz\" (UID: \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\") " pod="calico-system/calico-node-958wz" Apr 30 03:38:06.340139 kubelet[2782]: I0430 03:38:06.340071 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-xtables-lock\") pod \"calico-node-958wz\" (UID: \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\") " pod="calico-system/calico-node-958wz" Apr 30 03:38:06.340139 kubelet[2782]: I0430 03:38:06.340090 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-policysync\") pod \"calico-node-958wz\" (UID: \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\") " pod="calico-system/calico-node-958wz" Apr 30 03:38:06.340243 kubelet[2782]: I0430 03:38:06.340104 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-tigera-ca-bundle\") pod \"calico-node-958wz\" (UID: \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\") " pod="calico-system/calico-node-958wz" Apr 30 03:38:06.340243 kubelet[2782]: I0430 03:38:06.340118 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ac7ef1dc-42f9-4272-8621-f14693d6c324-typha-certs\") pod \"calico-typha-9dff8546c-chpw8\" (UID: \"ac7ef1dc-42f9-4272-8621-f14693d6c324\") " pod="calico-system/calico-typha-9dff8546c-chpw8" Apr 30 03:38:06.340243 kubelet[2782]: I0430 03:38:06.340134 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-node-certs\") pod \"calico-node-958wz\" (UID: \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\") " pod="calico-system/calico-node-958wz" Apr 30 03:38:06.340243 kubelet[2782]: I0430 
03:38:06.340147 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-cni-bin-dir\") pod \"calico-node-958wz\" (UID: \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\") " pod="calico-system/calico-node-958wz" Apr 30 03:38:06.340243 kubelet[2782]: I0430 03:38:06.340161 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-var-run-calico\") pod \"calico-node-958wz\" (UID: \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\") " pod="calico-system/calico-node-958wz" Apr 30 03:38:06.340377 kubelet[2782]: I0430 03:38:06.340177 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac7ef1dc-42f9-4272-8621-f14693d6c324-tigera-ca-bundle\") pod \"calico-typha-9dff8546c-chpw8\" (UID: \"ac7ef1dc-42f9-4272-8621-f14693d6c324\") " pod="calico-system/calico-typha-9dff8546c-chpw8" Apr 30 03:38:06.340377 kubelet[2782]: I0430 03:38:06.340199 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zldnb\" (UniqueName: \"kubernetes.io/projected/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-kube-api-access-zldnb\") pod \"calico-node-958wz\" (UID: \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\") " pod="calico-system/calico-node-958wz" Apr 30 03:38:06.340377 kubelet[2782]: I0430 03:38:06.340214 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-cni-log-dir\") pod \"calico-node-958wz\" (UID: \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\") " pod="calico-system/calico-node-958wz" Apr 30 03:38:06.340377 kubelet[2782]: I0430 03:38:06.340228 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-var-lib-calico\") pod \"calico-node-958wz\" (UID: \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\") " pod="calico-system/calico-node-958wz" Apr 30 03:38:06.392111 kubelet[2782]: I0430 03:38:06.391576 2782 topology_manager.go:215] "Topology Admit Handler" podUID="4d11b5f7-a801-4c03-8af8-692f5d9587bd" podNamespace="calico-system" podName="csi-node-driver-rjjf8" Apr 30 03:38:06.392111 kubelet[2782]: E0430 03:38:06.391868 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjjf8" podUID="4d11b5f7-a801-4c03-8af8-692f5d9587bd" Apr 30 03:38:06.442717 kubelet[2782]: I0430 03:38:06.441359 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4d11b5f7-a801-4c03-8af8-692f5d9587bd-socket-dir\") pod \"csi-node-driver-rjjf8\" (UID: \"4d11b5f7-a801-4c03-8af8-692f5d9587bd\") " pod="calico-system/csi-node-driver-rjjf8" Apr 30 03:38:06.442717 kubelet[2782]: I0430 03:38:06.441421 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/4d11b5f7-a801-4c03-8af8-692f5d9587bd-registration-dir\") pod \"csi-node-driver-rjjf8\" (UID: \"4d11b5f7-a801-4c03-8af8-692f5d9587bd\") " pod="calico-system/csi-node-driver-rjjf8" Apr 30 03:38:06.442717 kubelet[2782]: I0430 03:38:06.441488 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d11b5f7-a801-4c03-8af8-692f5d9587bd-kubelet-dir\") pod \"csi-node-driver-rjjf8\" (UID: \"4d11b5f7-a801-4c03-8af8-692f5d9587bd\") " pod="calico-system/csi-node-driver-rjjf8" Apr 30 03:38:06.442717 kubelet[2782]: I0430 03:38:06.441503 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc8sx\" (UniqueName: \"kubernetes.io/projected/4d11b5f7-a801-4c03-8af8-692f5d9587bd-kube-api-access-fc8sx\") pod \"csi-node-driver-rjjf8\" (UID: \"4d11b5f7-a801-4c03-8af8-692f5d9587bd\") " pod="calico-system/csi-node-driver-rjjf8" Apr 30 03:38:06.442717 kubelet[2782]: I0430 03:38:06.441588 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4d11b5f7-a801-4c03-8af8-692f5d9587bd-varrun\") pod \"csi-node-driver-rjjf8\" (UID: \"4d11b5f7-a801-4c03-8af8-692f5d9587bd\") " pod="calico-system/csi-node-driver-rjjf8" Apr 30 03:38:06.447698 kubelet[2782]: E0430 03:38:06.447682 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.447769 kubelet[2782]: W0430 03:38:06.447759 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.447817 kubelet[2782]: E0430 03:38:06.447809 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.448039 kubelet[2782]: E0430 03:38:06.448011 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.448158 kubelet[2782]: W0430 03:38:06.448098 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.448158 kubelet[2782]: E0430 03:38:06.448111 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.448455 kubelet[2782]: E0430 03:38:06.448330 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.448455 kubelet[2782]: W0430 03:38:06.448357 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.448455 kubelet[2782]: E0430 03:38:06.448365 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:38:06.448645 kubelet[2782]: E0430 03:38:06.448580 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.448645 kubelet[2782]: W0430 03:38:06.448587 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.448645 kubelet[2782]: E0430 03:38:06.448596 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.448848 kubelet[2782]: E0430 03:38:06.448840 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.449006 kubelet[2782]: W0430 03:38:06.448890 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.449006 kubelet[2782]: E0430 03:38:06.448902 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.450693 kubelet[2782]: E0430 03:38:06.449105 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.450693 kubelet[2782]: W0430 03:38:06.449116 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.450693 kubelet[2782]: E0430 03:38:06.449123 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.450991 kubelet[2782]: E0430 03:38:06.450940 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.450991 kubelet[2782]: W0430 03:38:06.450951 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.450991 kubelet[2782]: E0430 03:38:06.450960 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.451930 kubelet[2782]: E0430 03:38:06.451797 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.451930 kubelet[2782]: W0430 03:38:06.451824 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.451930 kubelet[2782]: E0430 03:38:06.451851 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:38:06.456635 kubelet[2782]: E0430 03:38:06.455862 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.456635 kubelet[2782]: W0430 03:38:06.455879 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.456635 kubelet[2782]: E0430 03:38:06.455895 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.456635 kubelet[2782]: E0430 03:38:06.456037 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.456635 kubelet[2782]: W0430 03:38:06.456043 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.456635 kubelet[2782]: E0430 03:38:06.456050 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.456635 kubelet[2782]: E0430 03:38:06.456175 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.456635 kubelet[2782]: W0430 03:38:06.456181 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.456635 kubelet[2782]: E0430 03:38:06.456188 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.458657 kubelet[2782]: E0430 03:38:06.456953 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.458657 kubelet[2782]: W0430 03:38:06.456965 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.458657 kubelet[2782]: E0430 03:38:06.456974 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:38:06.459014 kubelet[2782]: E0430 03:38:06.458990 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.459014 kubelet[2782]: W0430 03:38:06.459007 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.460647 kubelet[2782]: E0430 03:38:06.459967 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.460647 kubelet[2782]: W0430 03:38:06.459978 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.460647 kubelet[2782]: E0430 03:38:06.459986 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.460647 kubelet[2782]: E0430 03:38:06.460219 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.460647 kubelet[2782]: W0430 03:38:06.460226 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.460647 kubelet[2782]: E0430 03:38:06.460234 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.460647 kubelet[2782]: E0430 03:38:06.460438 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.460647 kubelet[2782]: W0430 03:38:06.460445 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.460647 kubelet[2782]: E0430 03:38:06.460453 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.460891 kubelet[2782]: E0430 03:38:06.460874 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.464136 kubelet[2782]: E0430 03:38:06.463929 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.464136 kubelet[2782]: W0430 03:38:06.463943 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.464136 kubelet[2782]: E0430 03:38:06.463957 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:38:06.464526 kubelet[2782]: E0430 03:38:06.464505 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.464560 kubelet[2782]: W0430 03:38:06.464550 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.464580 kubelet[2782]: E0430 03:38:06.464567 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.466168 kubelet[2782]: E0430 03:38:06.466143 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.466168 kubelet[2782]: W0430 03:38:06.466169 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.466237 kubelet[2782]: E0430 03:38:06.466178 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.471133 kubelet[2782]: E0430 03:38:06.471114 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.471257 kubelet[2782]: W0430 03:38:06.471206 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.471257 kubelet[2782]: E0430 03:38:06.471226 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.512108 containerd[1507]: time="2025-04-30T03:38:06.511990157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9dff8546c-chpw8,Uid:ac7ef1dc-42f9-4272-8621-f14693d6c324,Namespace:calico-system,Attempt:0,}" Apr 30 03:38:06.540744 containerd[1507]: time="2025-04-30T03:38:06.540195959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:38:06.540744 containerd[1507]: time="2025-04-30T03:38:06.540329300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:38:06.540744 containerd[1507]: time="2025-04-30T03:38:06.540356872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:38:06.540744 containerd[1507]: time="2025-04-30T03:38:06.540463682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:38:06.542687 kubelet[2782]: E0430 03:38:06.542659 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.542954 kubelet[2782]: W0430 03:38:06.542776 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.542954 kubelet[2782]: E0430 03:38:06.542804 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.543870 kubelet[2782]: E0430 03:38:06.543844 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.543870 kubelet[2782]: W0430 03:38:06.543854 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.543870 kubelet[2782]: E0430 03:38:06.543869 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.544149 kubelet[2782]: E0430 03:38:06.544112 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.544149 kubelet[2782]: W0430 03:38:06.544124 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.544505 kubelet[2782]: E0430 03:38:06.544284 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.544505 kubelet[2782]: E0430 03:38:06.544391 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.544505 kubelet[2782]: W0430 03:38:06.544397 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.544505 kubelet[2782]: E0430 03:38:06.544446 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.545135 kubelet[2782]: E0430 03:38:06.544526 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.545135 kubelet[2782]: W0430 03:38:06.544532 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.545135 kubelet[2782]: E0430 03:38:06.544540 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 30 03:38:06.545135 kubelet[2782]: E0430 03:38:06.544696 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:38:06.545135 kubelet[2782]: W0430 03:38:06.544703 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:38:06.545135 kubelet[2782]: E0430 03:38:06.544725 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:38:06.545135 kubelet[2782]: E0430 03:38:06.544845 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:38:06.545135 kubelet[2782]: W0430 03:38:06.544851 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:38:06.545135 kubelet[2782]: E0430 03:38:06.544880 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:38:06.545135 kubelet[2782]: E0430 03:38:06.545081 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:38:06.545408 kubelet[2782]: W0430 03:38:06.545089 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:38:06.545408 kubelet[2782]: E0430 03:38:06.545098 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:38:06.545408 kubelet[2782]: E0430 03:38:06.545252 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:38:06.545408 kubelet[2782]: W0430 03:38:06.545259 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:38:06.545408 kubelet[2782]: E0430 03:38:06.545267 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:38:06.545801 kubelet[2782]: E0430 03:38:06.545569 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:38:06.545801 kubelet[2782]: W0430 03:38:06.545576 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:38:06.545801 kubelet[2782]: E0430 03:38:06.545585 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
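The triplet repeating above is kubelet's FlexVolume plugin probe: on each pass it execs the driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and parses the driver's stdout as a JSON status object. Because the executable is missing, stdout is empty, and unmarshalling an empty byte slice is exactly what produces "unexpected end of JSON input". A minimal sketch of that call-and-parse shape, assuming the FlexVolume convention of a JSON object with status/message/capabilities fields (the struct and messages here are illustrative, not kubelet's actual driver-call.go):

```go
// Sketch of the FlexVolume "driver call" behind the errors above, assuming
// the driver is an executable invoked as `<driver> init` whose stdout must
// be a JSON status object. Field names follow the FlexVolume convention.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func callDriver(driver string, args ...string) (*DriverStatus, error) {
	out, err := exec.Command(driver, args...).Output()
	if err != nil {
		// The W0430 line: the binary at the plugin path does not exist, so
		// exec fails with "executable file not found in $PATH".
		fmt.Printf("FlexVolume: driver call failed: executable: %s, args: %v, error: %v, output: %q\n",
			driver, args, err, string(out))
	}
	var st DriverStatus
	// With empty stdout, Unmarshal fails exactly as logged by the E0430
	// driver-call.go line: "unexpected end of JSON input".
	if jerr := json.Unmarshal(out, &st); jerr != nil {
		return nil, fmt.Errorf("failed to unmarshal output for command: %s, output: %q, error: %v",
			args[0], string(out), jerr)
	}
	return &st, nil
}

func main() {
	if _, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init"); err != nil {
		fmt.Println(err)
	}
}
```

On a node in this state the noise generally stops only when a real driver binary is shipped at that path or the stale nodeagent~uds directory is removed so the probe no longer finds it; the errors themselves are otherwise harmless and merely repeat on every plugin re-probe, as the rest of this log shows.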
Apr 30 03:38:06.545971 kubelet[2782]: E0430 03:38:06.545941 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:38:06.545971 kubelet[2782]: W0430 03:38:06.545957 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:38:06.546019 kubelet[2782]: E0430 03:38:06.545993 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:38:06.546396 kubelet[2782]: E0430 03:38:06.546369 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:38:06.546396 kubelet[2782]: W0430 03:38:06.546383 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:38:06.546396 kubelet[2782]: E0430 03:38:06.546396 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:38:06.546572 kubelet[2782]: E0430 03:38:06.546548 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:38:06.546572 kubelet[2782]: W0430 03:38:06.546560 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:38:06.548784 kubelet[2782]: E0430 03:38:06.548678 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:38:06.548840 kubelet[2782]: E0430 03:38:06.548817 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:38:06.548840 kubelet[2782]: W0430 03:38:06.548823 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:38:06.550509 kubelet[2782]: E0430 03:38:06.550118 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:38:06.550572 kubelet[2782]: E0430 03:38:06.550553 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:38:06.550572 kubelet[2782]: W0430 03:38:06.550568 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:38:06.550784 kubelet[2782]: E0430 03:38:06.550633 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:38:06.550784 kubelet[2782]: E0430 03:38:06.550774 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.550784 kubelet[2782]: W0430 03:38:06.550780 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.551411 kubelet[2782]: E0430 03:38:06.551308 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.551466 kubelet[2782]: E0430 03:38:06.551448 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.551466 kubelet[2782]: W0430 03:38:06.551461 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.551749 kubelet[2782]: E0430 03:38:06.551509 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.551749 kubelet[2782]: E0430 03:38:06.551684 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.551749 kubelet[2782]: W0430 03:38:06.551691 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.551749 kubelet[2782]: E0430 03:38:06.551741 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.551936 kubelet[2782]: E0430 03:38:06.551840 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.551936 kubelet[2782]: W0430 03:38:06.551847 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.552092 kubelet[2782]: E0430 03:38:06.552067 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.552172 kubelet[2782]: E0430 03:38:06.552151 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.552172 kubelet[2782]: W0430 03:38:06.552163 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.552279 kubelet[2782]: E0430 03:38:06.552225 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:38:06.552347 kubelet[2782]: E0430 03:38:06.552327 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.552347 kubelet[2782]: W0430 03:38:06.552338 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.552415 kubelet[2782]: E0430 03:38:06.552357 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.552720 kubelet[2782]: E0430 03:38:06.552698 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.552720 kubelet[2782]: W0430 03:38:06.552710 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.552785 kubelet[2782]: E0430 03:38:06.552724 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.553128 kubelet[2782]: E0430 03:38:06.553106 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.553128 kubelet[2782]: W0430 03:38:06.553119 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.553195 kubelet[2782]: E0430 03:38:06.553137 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.554371 kubelet[2782]: E0430 03:38:06.553477 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.554371 kubelet[2782]: W0430 03:38:06.553488 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.554371 kubelet[2782]: E0430 03:38:06.553501 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.554371 kubelet[2782]: E0430 03:38:06.553719 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.554371 kubelet[2782]: W0430 03:38:06.553726 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.554570 kubelet[2782]: E0430 03:38:06.554547 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:38:06.554804 kubelet[2782]: E0430 03:38:06.554784 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.554804 kubelet[2782]: W0430 03:38:06.554798 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.554804 kubelet[2782]: E0430 03:38:06.554806 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.560976 kubelet[2782]: E0430 03:38:06.560951 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.561159 kubelet[2782]: W0430 03:38:06.560989 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.561159 kubelet[2782]: E0430 03:38:06.561003 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.571953 systemd[1]: Started cri-containerd-99b4f8c89f63ea8ca29139dc5b029b12e8e1f81e68812b59a4df94cc89d7830b.scope - libcontainer container 99b4f8c89f63ea8ca29139dc5b029b12e8e1f81e68812b59a4df94cc89d7830b. Apr 30 03:38:06.628069 containerd[1507]: time="2025-04-30T03:38:06.628025346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9dff8546c-chpw8,Uid:ac7ef1dc-42f9-4272-8621-f14693d6c324,Namespace:calico-system,Attempt:0,} returns sandbox id \"99b4f8c89f63ea8ca29139dc5b029b12e8e1f81e68812b59a4df94cc89d7830b\"" Apr 30 03:38:06.639717 containerd[1507]: time="2025-04-30T03:38:06.639665182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" Apr 30 03:38:06.646743 kubelet[2782]: E0430 03:38:06.646706 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.646743 kubelet[2782]: W0430 03:38:06.646730 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.647228 kubelet[2782]: E0430 03:38:06.646748 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.748957 kubelet[2782]: E0430 03:38:06.748784 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.748957 kubelet[2782]: W0430 03:38:06.748817 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.748957 kubelet[2782]: E0430 03:38:06.748845 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:38:06.853292 kubelet[2782]: E0430 03:38:06.853232 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.853292 kubelet[2782]: W0430 03:38:06.853269 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.853292 kubelet[2782]: E0430 03:38:06.853298 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:06.954513 kubelet[2782]: E0430 03:38:06.954449 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:06.954513 kubelet[2782]: W0430 03:38:06.954488 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:06.954513 kubelet[2782]: E0430 03:38:06.954516 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:07.056273 kubelet[2782]: E0430 03:38:07.056100 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:07.056273 kubelet[2782]: W0430 03:38:07.056136 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:07.056273 kubelet[2782]: E0430 03:38:07.056168 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:07.157161 kubelet[2782]: E0430 03:38:07.157098 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:07.157161 kubelet[2782]: W0430 03:38:07.157136 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:07.157161 kubelet[2782]: E0430 03:38:07.157167 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:07.258676 kubelet[2782]: E0430 03:38:07.258566 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:07.259949 kubelet[2782]: W0430 03:38:07.258604 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:07.259949 kubelet[2782]: E0430 03:38:07.258729 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 30 03:38:07.360099 kubelet[2782]: E0430 03:38:07.359888 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:38:07.360099 kubelet[2782]: W0430 03:38:07.359921 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:38:07.360099 kubelet[2782]: E0430 03:38:07.359949 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:38:07.461270 kubelet[2782]: E0430 03:38:07.461204 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:38:07.461270 kubelet[2782]: W0430 03:38:07.461254 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:38:07.461270 kubelet[2782]: E0430 03:38:07.461274 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:38:07.468410 kubelet[2782]: E0430 03:38:07.468360 2782 secret.go:194] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition
Apr 30 03:38:07.468509 kubelet[2782]: E0430 03:38:07.468464 2782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-node-certs podName:bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02 nodeName:}" failed. No retries permitted until 2025-04-30 03:38:07.968442032 +0000 UTC m=+28.138973674 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-node-certs") pod "calico-node-958wz" (UID: "bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02") : failed to sync secret cache: timed out waiting for the condition
Apr 30 03:38:07.563508 kubelet[2782]: E0430 03:38:07.563225 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:38:07.563508 kubelet[2782]: W0430 03:38:07.563266 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:38:07.563508 kubelet[2782]: E0430 03:38:07.563354 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:38:07.665023 kubelet[2782]: E0430 03:38:07.664889 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:38:07.665023 kubelet[2782]: W0430 03:38:07.664924 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:38:07.665023 kubelet[2782]: E0430 03:38:07.664953 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
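Buried in the FlexVolume noise above is a genuinely transient failure: MountVolume.SetUp for the node-certs secret ran before the kubelet's secret cache had synced, so nestedpendingoperations parks the operation and schedules a retry ("No retries permitted until ... durationBeforeRetry 500ms"). Failed volume operations are retried with an exponentially growing delay starting from that value; a sketch of the pattern, with constants and messages that are assumptions for illustration rather than kubelet's actual implementation:

```go
// Illustrative retry-with-backoff loop behind the "No retries permitted
// until ... (durationBeforeRetry 500ms)" line. Constants are assumed.
package main

import (
	"errors"
	"fmt"
	"time"
)

func mountWithBackoff(mount func() error) error {
	const (
		initialDelay = 500 * time.Millisecond // first durationBeforeRetry
		maxDelay     = 2 * time.Minute        // cap on the backoff
		maxAttempts  = 10
	)
	delay := initialDelay
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		err := mount()
		if err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; no retries permitted until %s (durationBeforeRetry %s)\n",
			attempt, err, time.Now().Add(delay).Format(time.RFC3339), delay)
		time.Sleep(delay)
		// Double the delay each round, up to the cap.
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
	return errors.New("mount failed after max attempts")
}

func main() {
	calls := 0
	_ = mountWithBackoff(func() error {
		calls++
		if calls < 3 { // simulate the informer cache syncing on the 3rd try
			return errors.New("failed to sync secret cache: timed out waiting for the condition")
		}
		return nil
	})
}
```

In this log the retry evidently succeeds once the cache syncs: the calico-node-958wz pod gets its sandbox at 03:38:08 below and the secret error never recurs.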
Apr 30 03:38:07.766322 kubelet[2782]: E0430 03:38:07.766273 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:38:07.766322 kubelet[2782]: W0430 03:38:07.766316 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:38:07.766697 kubelet[2782]: E0430 03:38:07.766345 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:38:07.868186 kubelet[2782]: E0430 03:38:07.868054 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:38:07.868186 kubelet[2782]: W0430 03:38:07.868086 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:38:07.868186 kubelet[2782]: E0430 03:38:07.868114 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:38:07.934758 kubelet[2782]: E0430 03:38:07.931945 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjjf8" podUID="4d11b5f7-a801-4c03-8af8-692f5d9587bd"
Apr 30 03:38:07.969423 kubelet[2782]: E0430 03:38:07.969351 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:38:07.969423 kubelet[2782]: W0430 03:38:07.969396 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:38:07.969817 kubelet[2782]: E0430 03:38:07.969433 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:38:07.969985 kubelet[2782]: E0430 03:38:07.969941 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:38:07.969985 kubelet[2782]: W0430 03:38:07.969971 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:38:07.970099 kubelet[2782]: E0430 03:38:07.969993 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:38:07.970345 kubelet[2782]: E0430 03:38:07.970305 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:07.970345 kubelet[2782]: W0430 03:38:07.970331 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:07.970499 kubelet[2782]: E0430 03:38:07.970352 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:07.970726 kubelet[2782]: E0430 03:38:07.970703 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:07.970726 kubelet[2782]: W0430 03:38:07.970724 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:07.970874 kubelet[2782]: E0430 03:38:07.970738 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:07.971090 kubelet[2782]: E0430 03:38:07.971033 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:07.971090 kubelet[2782]: W0430 03:38:07.971049 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:07.971090 kubelet[2782]: E0430 03:38:07.971065 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:07.982362 kubelet[2782]: E0430 03:38:07.981760 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:07.982362 kubelet[2782]: W0430 03:38:07.981786 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:07.982362 kubelet[2782]: E0430 03:38:07.981810 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:08.077094 containerd[1507]: time="2025-04-30T03:38:08.076981156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-958wz,Uid:bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02,Namespace:calico-system,Attempt:0,}" Apr 30 03:38:08.136012 containerd[1507]: time="2025-04-30T03:38:08.135708094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:38:08.136012 containerd[1507]: time="2025-04-30T03:38:08.135778466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:38:08.136012 containerd[1507]: time="2025-04-30T03:38:08.135797642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:38:08.136012 containerd[1507]: time="2025-04-30T03:38:08.135888633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:38:08.168053 systemd[1]: Started cri-containerd-8f99c4e1e4c341b83a730c62d30a78879284d8399bee8d2e7c82f5f17fae2bf4.scope - libcontainer container 8f99c4e1e4c341b83a730c62d30a78879284d8399bee8d2e7c82f5f17fae2bf4. Apr 30 03:38:08.189954 containerd[1507]: time="2025-04-30T03:38:08.189818563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-958wz,Uid:bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02,Namespace:calico-system,Attempt:0,} returns sandbox id \"8f99c4e1e4c341b83a730c62d30a78879284d8399bee8d2e7c82f5f17fae2bf4\"" Apr 30 03:38:09.490401 containerd[1507]: time="2025-04-30T03:38:09.490349466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:09.491403 containerd[1507]: time="2025-04-30T03:38:09.491356356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" Apr 30 03:38:09.492594 containerd[1507]: time="2025-04-30T03:38:09.492552993Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:09.495149 containerd[1507]: time="2025-04-30T03:38:09.495109533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:09.495932 containerd[1507]: time="2025-04-30T03:38:09.495884689Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.856171027s" Apr 30 03:38:09.495975 containerd[1507]: time="2025-04-30T03:38:09.495936948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" Apr 30 03:38:09.497950 containerd[1507]: time="2025-04-30T03:38:09.497747387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" Apr 30 03:38:09.510515 containerd[1507]: time="2025-04-30T03:38:09.510465540Z" level=info msg="CreateContainer within sandbox \"99b4f8c89f63ea8ca29139dc5b029b12e8e1f81e68812b59a4df94cc89d7830b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 30 03:38:09.527949 containerd[1507]: time="2025-04-30T03:38:09.527890232Z" level=info msg="CreateContainer within sandbox \"99b4f8c89f63ea8ca29139dc5b029b12e8e1f81e68812b59a4df94cc89d7830b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e86c6ffe982c7968b104646bc741b37170eb0f509a7e0259b723a7082c01bed1\"" Apr 30 03:38:09.529335 containerd[1507]: time="2025-04-30T03:38:09.528542688Z" level=info msg="StartContainer for 
\"e86c6ffe982c7968b104646bc741b37170eb0f509a7e0259b723a7082c01bed1\"" Apr 30 03:38:09.573790 systemd[1]: Started cri-containerd-e86c6ffe982c7968b104646bc741b37170eb0f509a7e0259b723a7082c01bed1.scope - libcontainer container e86c6ffe982c7968b104646bc741b37170eb0f509a7e0259b723a7082c01bed1. Apr 30 03:38:09.614372 containerd[1507]: time="2025-04-30T03:38:09.614242009Z" level=info msg="StartContainer for \"e86c6ffe982c7968b104646bc741b37170eb0f509a7e0259b723a7082c01bed1\" returns successfully" Apr 30 03:38:09.934580 kubelet[2782]: E0430 03:38:09.932901 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjjf8" podUID="4d11b5f7-a801-4c03-8af8-692f5d9587bd" Apr 30 03:38:10.124022 kubelet[2782]: E0430 03:38:10.123964 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:10.124022 kubelet[2782]: W0430 03:38:10.123997 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:10.125282 kubelet[2782]: E0430 03:38:10.124043 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:10.125282 kubelet[2782]: E0430 03:38:10.124336 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:10.125282 kubelet[2782]: W0430 03:38:10.124354 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:10.125282 kubelet[2782]: E0430 03:38:10.124375 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:10.125282 kubelet[2782]: E0430 03:38:10.124753 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:10.125282 kubelet[2782]: W0430 03:38:10.124772 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:10.125282 kubelet[2782]: E0430 03:38:10.124872 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Apr 30 03:38:10.126326 kubelet[2782]: E0430 03:38:10.125308 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:38:10.126326 kubelet[2782]: W0430 03:38:10.125323 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:38:10.126326 kubelet[2782]: E0430 03:38:10.125368 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:38:10.126326 kubelet[2782]: E0430 03:38:10.125771 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:38:10.126326 kubelet[2782]: W0430 03:38:10.125789 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:38:10.126326 kubelet[2782]: E0430 03:38:10.125809 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:38:10.126326 kubelet[2782]: E0430 03:38:10.126149 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:38:10.126326 kubelet[2782]: W0430 03:38:10.126166 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:38:10.126326 kubelet[2782]: E0430 03:38:10.126187 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:38:10.126868 kubelet[2782]: E0430 03:38:10.126496 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:38:10.126868 kubelet[2782]: W0430 03:38:10.126511 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:38:10.126868 kubelet[2782]: E0430 03:38:10.126549 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:38:10.127058 kubelet[2782]: E0430 03:38:10.126956 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:38:10.127058 kubelet[2782]: W0430 03:38:10.126972 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:38:10.127058 kubelet[2782]: E0430 03:38:10.126989 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:38:10.127372 kubelet[2782]: E0430 03:38:10.127337 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:10.127372 kubelet[2782]: W0430 03:38:10.127353 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:10.127695 kubelet[2782]: E0430 03:38:10.127384 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:10.127695 kubelet[2782]: E0430 03:38:10.127652 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:10.127799 kubelet[2782]: W0430 03:38:10.127706 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:10.127799 kubelet[2782]: E0430 03:38:10.127722 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:10.128115 kubelet[2782]: E0430 03:38:10.128081 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:10.128115 kubelet[2782]: W0430 03:38:10.128104 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:10.128223 kubelet[2782]: E0430 03:38:10.128120 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:10.128470 kubelet[2782]: E0430 03:38:10.128435 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:10.128574 kubelet[2782]: W0430 03:38:10.128485 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:10.128574 kubelet[2782]: E0430 03:38:10.128500 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:10.128892 kubelet[2782]: E0430 03:38:10.128862 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:10.128892 kubelet[2782]: W0430 03:38:10.128876 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:10.128892 kubelet[2782]: E0430 03:38:10.128890 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:38:10.129250 kubelet[2782]: E0430 03:38:10.129231 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:10.129250 kubelet[2782]: W0430 03:38:10.129248 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:10.129391 kubelet[2782]: E0430 03:38:10.129264 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:10.129526 kubelet[2782]: E0430 03:38:10.129497 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:10.129526 kubelet[2782]: W0430 03:38:10.129521 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:10.129661 kubelet[2782]: E0430 03:38:10.129535 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:10.187246 kubelet[2782]: E0430 03:38:10.185251 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:10.187246 kubelet[2782]: W0430 03:38:10.185286 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:10.187246 kubelet[2782]: E0430 03:38:10.185312 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:10.187246 kubelet[2782]: E0430 03:38:10.185596 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:10.187246 kubelet[2782]: W0430 03:38:10.185635 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:10.187246 kubelet[2782]: E0430 03:38:10.185650 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:10.187246 kubelet[2782]: E0430 03:38:10.185915 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:10.187246 kubelet[2782]: W0430 03:38:10.185927 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:10.187246 kubelet[2782]: E0430 03:38:10.185943 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:38:10.187246 kubelet[2782]: E0430 03:38:10.186194 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:10.187677 kubelet[2782]: W0430 03:38:10.186206 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:10.187677 kubelet[2782]: E0430 03:38:10.186220 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:10.187677 kubelet[2782]: E0430 03:38:10.186425 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:10.187677 kubelet[2782]: W0430 03:38:10.186436 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:10.187677 kubelet[2782]: E0430 03:38:10.186450 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:10.187677 kubelet[2782]: E0430 03:38:10.186663 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:10.187677 kubelet[2782]: W0430 03:38:10.186694 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:10.187677 kubelet[2782]: E0430 03:38:10.186707 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:10.187677 kubelet[2782]: E0430 03:38:10.186945 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:10.187677 kubelet[2782]: W0430 03:38:10.186959 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:10.188218 kubelet[2782]: E0430 03:38:10.186972 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:10.188218 kubelet[2782]: E0430 03:38:10.187886 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:10.188218 kubelet[2782]: W0430 03:38:10.187903 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:10.188218 kubelet[2782]: E0430 03:38:10.187935 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:38:10.188397 kubelet[2782]: E0430 03:38:10.188222 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:10.188397 kubelet[2782]: W0430 03:38:10.188238 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:10.188397 kubelet[2782]: E0430 03:38:10.188266 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:11.042731 kubelet[2782]: I0430 03:38:11.042072 2782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:38:11.136422 kubelet[2782]: E0430 03:38:11.136358 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:11.136422 kubelet[2782]: W0430 03:38:11.136388 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:11.136422 kubelet[2782]: E0430 03:38:11.136411 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:38:11.194081 kubelet[2782]: E0430 03:38:11.194027 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:38:11.194081 kubelet[2782]: W0430 03:38:11.194062 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:38:11.194081 kubelet[2782]: E0430 03:38:11.194090 2782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:38:11.890335 containerd[1507]: time="2025-04-30T03:38:11.890260862Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:11.891602 containerd[1507]: time="2025-04-30T03:38:11.891527051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" Apr 30 03:38:11.892867 containerd[1507]: time="2025-04-30T03:38:11.892776417Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:11.894955 containerd[1507]: time="2025-04-30T03:38:11.894896478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:11.896130 containerd[1507]: time="2025-04-30T03:38:11.895329992Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 2.397545566s" Apr 30 03:38:11.896130 containerd[1507]: time="2025-04-30T03:38:11.895369477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" Apr 30 03:38:11.898207 containerd[1507]: time="2025-04-30T03:38:11.898089555Z" level=info msg="CreateContainer within sandbox \"8f99c4e1e4c341b83a730c62d30a78879284d8399bee8d2e7c82f5f17fae2bf4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 03:38:11.913493 containerd[1507]: time="2025-04-30T03:38:11.913422931Z" level=info msg="CreateContainer within sandbox \"8f99c4e1e4c341b83a730c62d30a78879284d8399bee8d2e7c82f5f17fae2bf4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"aa07b4db01960fbefec8614b5e03dd6704e3acee8ffb350aa4246a5dbe9bc8ce\"" Apr 30 03:38:11.914830 containerd[1507]: time="2025-04-30T03:38:11.914182207Z" level=info msg="StartContainer for \"aa07b4db01960fbefec8614b5e03dd6704e3acee8ffb350aa4246a5dbe9bc8ce\"" Apr 30 03:38:11.933911 kubelet[2782]: E0430 03:38:11.931895 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjjf8" podUID="4d11b5f7-a801-4c03-8af8-692f5d9587bd" Apr 30 03:38:11.954985 systemd[1]: Started cri-containerd-aa07b4db01960fbefec8614b5e03dd6704e3acee8ffb350aa4246a5dbe9bc8ce.scope - libcontainer container aa07b4db01960fbefec8614b5e03dd6704e3acee8ffb350aa4246a5dbe9bc8ce. Apr 30 03:38:11.994331 containerd[1507]: time="2025-04-30T03:38:11.994294596Z" level=info msg="StartContainer for \"aa07b4db01960fbefec8614b5e03dd6704e3acee8ffb350aa4246a5dbe9bc8ce\" returns successfully" Apr 30 03:38:12.007438 systemd[1]: cri-containerd-aa07b4db01960fbefec8614b5e03dd6704e3acee8ffb350aa4246a5dbe9bc8ce.scope: Deactivated successfully. 
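The repeated driver-call failures above are the kubelet's FlexVolume probe: it execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and expects one JSON status object on stdout. The binary is not installed yet, so stdout is empty and decoding fails with "unexpected end of JSON input". The flexvol-driver container started here (from Calico's pod2daemon-flexvol image) is the init container that installs that uds binary. A minimal Go stand-in for the driver calling convention, as a sketch only (this is not Calico's actual uds driver):

    // flexvol-stub: illustrates the FlexVolume driver contract. The kubelet
    // execs the driver with a subcommand ("init", "mount", ...) and parses a
    // single JSON object from stdout; an empty stdout is exactly what produces
    // the "unexpected end of JSON input" errors above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        enc := json.NewEncoder(os.Stdout)
        if len(os.Args) < 2 {
            enc.Encode(driverStatus{Status: "Failure", Message: "no command given"})
            os.Exit(1)
        }
        switch os.Args[1] {
        case "init":
            // attach:false tells the kubelet this driver has no attach/detach phase.
            enc.Encode(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
        default:
            enc.Encode(driverStatus{Status: "Not supported", Message: fmt.Sprintf("command %q not implemented", os.Args[1])})
        }
    }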
Apr 30 03:38:12.027136 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa07b4db01960fbefec8614b5e03dd6704e3acee8ffb350aa4246a5dbe9bc8ce-rootfs.mount: Deactivated successfully. Apr 30 03:38:12.058721 containerd[1507]: time="2025-04-30T03:38:12.043546061Z" level=info msg="shim disconnected" id=aa07b4db01960fbefec8614b5e03dd6704e3acee8ffb350aa4246a5dbe9bc8ce namespace=k8s.io Apr 30 03:38:12.058721 containerd[1507]: time="2025-04-30T03:38:12.058703648Z" level=warning msg="cleaning up after shim disconnected" id=aa07b4db01960fbefec8614b5e03dd6704e3acee8ffb350aa4246a5dbe9bc8ce namespace=k8s.io Apr 30 03:38:12.058721 containerd[1507]: time="2025-04-30T03:38:12.058719177Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:38:12.066767 kubelet[2782]: I0430 03:38:12.065969 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-9dff8546c-chpw8" podStartSLOduration=3.207377651 podStartE2EDuration="6.065951539s" podCreationTimestamp="2025-04-30 03:38:06 +0000 UTC" firstStartedPulling="2025-04-30 03:38:06.638363428 +0000 UTC m=+26.808895070" lastFinishedPulling="2025-04-30 03:38:09.496937316 +0000 UTC m=+29.667468958" observedRunningTime="2025-04-30 03:38:10.075865212 +0000 UTC m=+30.246396894" watchObservedRunningTime="2025-04-30 03:38:12.065951539 +0000 UTC m=+32.236483181" Apr 30 03:38:13.090174 containerd[1507]: time="2025-04-30T03:38:13.089806128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 03:38:13.933891 kubelet[2782]: E0430 03:38:13.932213 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjjf8" podUID="4d11b5f7-a801-4c03-8af8-692f5d9587bd" Apr 30 03:38:15.932342 kubelet[2782]: E0430 03:38:15.932284 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjjf8" podUID="4d11b5f7-a801-4c03-8af8-692f5d9587bd" Apr 30 03:38:17.933021 kubelet[2782]: E0430 03:38:17.932028 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjjf8" podUID="4d11b5f7-a801-4c03-8af8-692f5d9587bd" Apr 30 03:38:18.246253 containerd[1507]: time="2025-04-30T03:38:18.246170961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:18.247711 containerd[1507]: time="2025-04-30T03:38:18.247640341Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" Apr 30 03:38:18.248945 containerd[1507]: time="2025-04-30T03:38:18.248870843Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:18.251308 containerd[1507]: time="2025-04-30T03:38:18.251239972Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:18.253172 containerd[1507]: time="2025-04-30T03:38:18.253129992Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 5.163273399s" Apr 30 03:38:18.253248 containerd[1507]: time="2025-04-30T03:38:18.253175197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" Apr 30 03:38:18.256183 containerd[1507]: time="2025-04-30T03:38:18.256139866Z" level=info msg="CreateContainer within sandbox \"8f99c4e1e4c341b83a730c62d30a78879284d8399bee8d2e7c82f5f17fae2bf4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 03:38:18.304119 containerd[1507]: time="2025-04-30T03:38:18.303944668Z" level=info msg="CreateContainer within sandbox \"8f99c4e1e4c341b83a730c62d30a78879284d8399bee8d2e7c82f5f17fae2bf4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"02de56d18d5565595efe745938e7d904a7144b5010085a7b6c4b40e97ee9bffd\"" Apr 30 03:38:18.305796 containerd[1507]: time="2025-04-30T03:38:18.305541196Z" level=info msg="StartContainer for \"02de56d18d5565595efe745938e7d904a7144b5010085a7b6c4b40e97ee9bffd\"" Apr 30 03:38:18.380503 systemd[1]: run-containerd-runc-k8s.io-02de56d18d5565595efe745938e7d904a7144b5010085a7b6c4b40e97ee9bffd-runc.EcLXjZ.mount: Deactivated successfully. Apr 30 03:38:18.387746 systemd[1]: Started cri-containerd-02de56d18d5565595efe745938e7d904a7144b5010085a7b6c4b40e97ee9bffd.scope - libcontainer container 02de56d18d5565595efe745938e7d904a7144b5010085a7b6c4b40e97ee9bffd. Apr 30 03:38:18.413204 containerd[1507]: time="2025-04-30T03:38:18.413153635Z" level=info msg="StartContainer for \"02de56d18d5565595efe745938e7d904a7144b5010085a7b6c4b40e97ee9bffd\" returns successfully" Apr 30 03:38:18.815229 systemd[1]: cri-containerd-02de56d18d5565595efe745938e7d904a7144b5010085a7b6c4b40e97ee9bffd.scope: Deactivated successfully. 
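The pod_startup_latency_tracker entry for calico-typha-9dff8546c-chpw8 above prints durations that can be cross-checked against its own timestamps: podStartE2EDuration (6.065951539s) is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (3.207377651s) equals that end-to-end duration minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling). The subtraction rule is an inference that these printed values satisfy exactly, not something quoted from kubelet source; a sketch with the timestamps copied from the entry:

    // Cross-checks the calico-typha startup-latency entry above. The rule
    // "SLO duration = end-to-end duration - image pulling time" is an
    // assumption that the logged values happen to satisfy exactly.
    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-04-30 03:38:06 +0000 UTC")
        firstPull := mustParse("2025-04-30 03:38:06.638363428 +0000 UTC")
        lastPull := mustParse("2025-04-30 03:38:09.496937316 +0000 UTC")
        observed := mustParse("2025-04-30 03:38:12.065951539 +0000 UTC") // watchObservedRunningTime

        e2e := observed.Sub(created)       // 6.065951539s == podStartE2EDuration
        pulling := lastPull.Sub(firstPull) // 2.858573888s spent pulling images
        fmt.Println(e2e, e2e-pulling)      // 6.065951539s 3.207377651s (== podStartSLOduration)
    }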
Apr 30 03:38:18.870286 kubelet[2782]: I0430 03:38:18.869660 2782 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 03:38:18.902909 containerd[1507]: time="2025-04-30T03:38:18.902779236Z" level=info msg="shim disconnected" id=02de56d18d5565595efe745938e7d904a7144b5010085a7b6c4b40e97ee9bffd namespace=k8s.io Apr 30 03:38:18.902909 containerd[1507]: time="2025-04-30T03:38:18.902851912Z" level=warning msg="cleaning up after shim disconnected" id=02de56d18d5565595efe745938e7d904a7144b5010085a7b6c4b40e97ee9bffd namespace=k8s.io Apr 30 03:38:18.902909 containerd[1507]: time="2025-04-30T03:38:18.902862301Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:38:18.919598 kubelet[2782]: I0430 03:38:18.919534 2782 topology_manager.go:215] "Topology Admit Handler" podUID="9c996608-b929-44c2-8ad9-06eca9de0734" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6z8bp" Apr 30 03:38:18.925446 kubelet[2782]: I0430 03:38:18.923195 2782 topology_manager.go:215] "Topology Admit Handler" podUID="c9c80f55-2e27-4f48-8d62-0a63647aa84e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qhm6w" Apr 30 03:38:18.929090 kubelet[2782]: I0430 03:38:18.928945 2782 topology_manager.go:215] "Topology Admit Handler" podUID="bba0c868-51a2-4e1d-8831-4113d7d9bdcd" podNamespace="calico-system" podName="calico-kube-controllers-6c8cd954d7-t67wf" Apr 30 03:38:18.932491 kubelet[2782]: W0430 03:38:18.932306 2782 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081-3-3-9-5ae3ade3a2" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-3-9-5ae3ade3a2' and this object Apr 30 03:38:18.932751 kubelet[2782]: E0430 03:38:18.932740 2782 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081-3-3-9-5ae3ade3a2" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-3-9-5ae3ade3a2' and this object Apr 30 03:38:18.934901 kubelet[2782]: I0430 03:38:18.934080 2782 topology_manager.go:215] "Topology Admit Handler" podUID="19f198dd-9e67-40cc-982d-d60b573a979a" podNamespace="calico-apiserver" podName="calico-apiserver-6d7977669f-tbr7c" Apr 30 03:38:18.934334 systemd[1]: Created slice kubepods-burstable-pod9c996608_b929_44c2_8ad9_06eca9de0734.slice - libcontainer container kubepods-burstable-pod9c996608_b929_44c2_8ad9_06eca9de0734.slice. Apr 30 03:38:18.937252 kubelet[2782]: I0430 03:38:18.937118 2782 topology_manager.go:215] "Topology Admit Handler" podUID="1a0a5d29-6bdf-4990-88c7-d46de9879e7c" podNamespace="calico-apiserver" podName="calico-apiserver-6d7977669f-4lt2m" Apr 30 03:38:18.950880 systemd[1]: Created slice kubepods-burstable-podc9c80f55_2e27_4f48_8d62_0a63647aa84e.slice - libcontainer container kubepods-burstable-podc9c80f55_2e27_4f48_8d62_0a63647aa84e.slice. Apr 30 03:38:18.956149 systemd[1]: Created slice kubepods-besteffort-podbba0c868_51a2_4e1d_8831_4113d7d9bdcd.slice - libcontainer container kubepods-besteffort-podbba0c868_51a2_4e1d_8831_4113d7d9bdcd.slice. Apr 30 03:38:18.963697 systemd[1]: Created slice kubepods-besteffort-pod19f198dd_9e67_40cc_982d_d60b573a979a.slice - libcontainer container kubepods-besteffort-pod19f198dd_9e67_40cc_982d_d60b573a979a.slice. 
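The "Created slice" entries above follow a mechanical naming scheme: the kubelet's systemd cgroup driver builds the unit name from the pod's QoS class (burstable, besteffort) and its UID with every dash mapped to an underscore, because systemd treats "-" in a unit name as a hierarchy separator. A small sketch of that mapping (the helper name is hypothetical; the output strings match the entries above):

    // podSlice reproduces the slice naming visible in the entries above:
    // QoS class plus an escaped pod UID, since "-" would nest slices.
    package main

    import (
        "fmt"
        "strings"
    )

    func podSlice(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        // -> kubepods-besteffort-pod19f198dd_9e67_40cc_982d_d60b573a979a.slice
        fmt.Println(podSlice("besteffort", "19f198dd-9e67-40cc-982d-d60b573a979a"))
        // -> kubepods-burstable-pod9c996608_b929_44c2_8ad9_06eca9de0734.slice
        fmt.Println(podSlice("burstable", "9c996608-b929-44c2-8ad9-06eca9de0734"))
    }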
Apr 30 03:38:18.969269 systemd[1]: Created slice kubepods-besteffort-pod1a0a5d29_6bdf_4990_88c7_d46de9879e7c.slice - libcontainer container kubepods-besteffort-pod1a0a5d29_6bdf_4990_88c7_d46de9879e7c.slice. Apr 30 03:38:19.047912 kubelet[2782]: I0430 03:38:19.047824 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2pdf\" (UniqueName: \"kubernetes.io/projected/bba0c868-51a2-4e1d-8831-4113d7d9bdcd-kube-api-access-r2pdf\") pod \"calico-kube-controllers-6c8cd954d7-t67wf\" (UID: \"bba0c868-51a2-4e1d-8831-4113d7d9bdcd\") " pod="calico-system/calico-kube-controllers-6c8cd954d7-t67wf" Apr 30 03:38:19.047912 kubelet[2782]: I0430 03:38:19.047899 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9c80f55-2e27-4f48-8d62-0a63647aa84e-config-volume\") pod \"coredns-7db6d8ff4d-qhm6w\" (UID: \"c9c80f55-2e27-4f48-8d62-0a63647aa84e\") " pod="kube-system/coredns-7db6d8ff4d-qhm6w" Apr 30 03:38:19.048134 kubelet[2782]: I0430 03:38:19.047941 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v88r6\" (UniqueName: \"kubernetes.io/projected/c9c80f55-2e27-4f48-8d62-0a63647aa84e-kube-api-access-v88r6\") pod \"coredns-7db6d8ff4d-qhm6w\" (UID: \"c9c80f55-2e27-4f48-8d62-0a63647aa84e\") " pod="kube-system/coredns-7db6d8ff4d-qhm6w" Apr 30 03:38:19.048134 kubelet[2782]: I0430 03:38:19.047975 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1a0a5d29-6bdf-4990-88c7-d46de9879e7c-calico-apiserver-certs\") pod \"calico-apiserver-6d7977669f-4lt2m\" (UID: \"1a0a5d29-6bdf-4990-88c7-d46de9879e7c\") " pod="calico-apiserver/calico-apiserver-6d7977669f-4lt2m" Apr 30 03:38:19.048134 kubelet[2782]: I0430 03:38:19.047994 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8tsg\" (UniqueName: \"kubernetes.io/projected/1a0a5d29-6bdf-4990-88c7-d46de9879e7c-kube-api-access-b8tsg\") pod \"calico-apiserver-6d7977669f-4lt2m\" (UID: \"1a0a5d29-6bdf-4990-88c7-d46de9879e7c\") " pod="calico-apiserver/calico-apiserver-6d7977669f-4lt2m" Apr 30 03:38:19.048134 kubelet[2782]: I0430 03:38:19.048014 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c996608-b929-44c2-8ad9-06eca9de0734-config-volume\") pod \"coredns-7db6d8ff4d-6z8bp\" (UID: \"9c996608-b929-44c2-8ad9-06eca9de0734\") " pod="kube-system/coredns-7db6d8ff4d-6z8bp" Apr 30 03:38:19.048134 kubelet[2782]: I0430 03:38:19.048035 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bba0c868-51a2-4e1d-8831-4113d7d9bdcd-tigera-ca-bundle\") pod \"calico-kube-controllers-6c8cd954d7-t67wf\" (UID: \"bba0c868-51a2-4e1d-8831-4113d7d9bdcd\") " pod="calico-system/calico-kube-controllers-6c8cd954d7-t67wf" Apr 30 03:38:19.048338 kubelet[2782]: I0430 03:38:19.048054 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/19f198dd-9e67-40cc-982d-d60b573a979a-calico-apiserver-certs\") pod \"calico-apiserver-6d7977669f-tbr7c\" (UID: \"19f198dd-9e67-40cc-982d-d60b573a979a\") " 
pod="calico-apiserver/calico-apiserver-6d7977669f-tbr7c" Apr 30 03:38:19.048338 kubelet[2782]: I0430 03:38:19.048073 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4bpj\" (UniqueName: \"kubernetes.io/projected/19f198dd-9e67-40cc-982d-d60b573a979a-kube-api-access-b4bpj\") pod \"calico-apiserver-6d7977669f-tbr7c\" (UID: \"19f198dd-9e67-40cc-982d-d60b573a979a\") " pod="calico-apiserver/calico-apiserver-6d7977669f-tbr7c" Apr 30 03:38:19.048338 kubelet[2782]: I0430 03:38:19.048104 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gccmc\" (UniqueName: \"kubernetes.io/projected/9c996608-b929-44c2-8ad9-06eca9de0734-kube-api-access-gccmc\") pod \"coredns-7db6d8ff4d-6z8bp\" (UID: \"9c996608-b929-44c2-8ad9-06eca9de0734\") " pod="kube-system/coredns-7db6d8ff4d-6z8bp" Apr 30 03:38:19.111558 containerd[1507]: time="2025-04-30T03:38:19.111280496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 03:38:19.261337 containerd[1507]: time="2025-04-30T03:38:19.261220745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c8cd954d7-t67wf,Uid:bba0c868-51a2-4e1d-8831-4113d7d9bdcd,Namespace:calico-system,Attempt:0,}" Apr 30 03:38:19.267238 containerd[1507]: time="2025-04-30T03:38:19.267154811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d7977669f-tbr7c,Uid:19f198dd-9e67-40cc-982d-d60b573a979a,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:38:19.273310 containerd[1507]: time="2025-04-30T03:38:19.273241304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d7977669f-4lt2m,Uid:1a0a5d29-6bdf-4990-88c7-d46de9879e7c,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:38:19.320655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02de56d18d5565595efe745938e7d904a7144b5010085a7b6c4b40e97ee9bffd-rootfs.mount: Deactivated successfully. Apr 30 03:38:19.550074 containerd[1507]: time="2025-04-30T03:38:19.549991102Z" level=error msg="Failed to destroy network for sandbox \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:19.556342 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3-shm.mount: Deactivated successfully. 
Apr 30 03:38:19.558261 containerd[1507]: time="2025-04-30T03:38:19.556792428Z" level=error msg="Failed to destroy network for sandbox \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:19.560091 containerd[1507]: time="2025-04-30T03:38:19.559850311Z" level=error msg="encountered an error cleaning up failed sandbox \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:19.560091 containerd[1507]: time="2025-04-30T03:38:19.559903901Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d7977669f-4lt2m,Uid:1a0a5d29-6bdf-4990-88c7-d46de9879e7c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:19.560501 containerd[1507]: time="2025-04-30T03:38:19.560391468Z" level=error msg="encountered an error cleaning up failed sandbox \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:19.560501 containerd[1507]: time="2025-04-30T03:38:19.560436993Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c8cd954d7-t67wf,Uid:bba0c868-51a2-4e1d-8831-4113d7d9bdcd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:19.566094 containerd[1507]: time="2025-04-30T03:38:19.565594951Z" level=error msg="Failed to destroy network for sandbox \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:19.566094 containerd[1507]: time="2025-04-30T03:38:19.565970648Z" level=error msg="encountered an error cleaning up failed sandbox \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:19.566094 containerd[1507]: time="2025-04-30T03:38:19.566016634Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d7977669f-tbr7c,Uid:19f198dd-9e67-40cc-982d-d60b573a979a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:19.566995 kubelet[2782]: E0430 03:38:19.566487 2782 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:19.566995 kubelet[2782]: E0430 03:38:19.566571 2782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d7977669f-tbr7c" Apr 30 03:38:19.566995 kubelet[2782]: E0430 03:38:19.566590 2782 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d7977669f-tbr7c" Apr 30 03:38:19.567097 kubelet[2782]: E0430 03:38:19.566635 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d7977669f-tbr7c_calico-apiserver(19f198dd-9e67-40cc-982d-d60b573a979a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d7977669f-tbr7c_calico-apiserver(19f198dd-9e67-40cc-982d-d60b573a979a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d7977669f-tbr7c" podUID="19f198dd-9e67-40cc-982d-d60b573a979a" Apr 30 03:38:19.567097 kubelet[2782]: E0430 03:38:19.566843 2782 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:19.567097 kubelet[2782]: E0430 03:38:19.566860 2782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d7977669f-4lt2m" Apr 30 03:38:19.567181 kubelet[2782]: E0430 
03:38:19.566872 2782 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d7977669f-4lt2m" Apr 30 03:38:19.567181 kubelet[2782]: E0430 03:38:19.566892 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d7977669f-4lt2m_calico-apiserver(1a0a5d29-6bdf-4990-88c7-d46de9879e7c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d7977669f-4lt2m_calico-apiserver(1a0a5d29-6bdf-4990-88c7-d46de9879e7c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d7977669f-4lt2m" podUID="1a0a5d29-6bdf-4990-88c7-d46de9879e7c" Apr 30 03:38:19.567181 kubelet[2782]: E0430 03:38:19.566920 2782 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:19.567260 kubelet[2782]: E0430 03:38:19.566936 2782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c8cd954d7-t67wf" Apr 30 03:38:19.567260 kubelet[2782]: E0430 03:38:19.566949 2782 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c8cd954d7-t67wf" Apr 30 03:38:19.567260 kubelet[2782]: E0430 03:38:19.566969 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c8cd954d7-t67wf_calico-system(bba0c868-51a2-4e1d-8831-4113d7d9bdcd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c8cd954d7-t67wf_calico-system(bba0c868-51a2-4e1d-8831-4113d7d9bdcd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-6c8cd954d7-t67wf" podUID="bba0c868-51a2-4e1d-8831-4113d7d9bdcd" Apr 30 03:38:19.939294 systemd[1]: Created slice kubepods-besteffort-pod4d11b5f7_a801_4c03_8af8_692f5d9587bd.slice - libcontainer container kubepods-besteffort-pod4d11b5f7_a801_4c03_8af8_692f5d9587bd.slice. Apr 30 03:38:19.943476 containerd[1507]: time="2025-04-30T03:38:19.943433309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rjjf8,Uid:4d11b5f7-a801-4c03-8af8-692f5d9587bd,Namespace:calico-system,Attempt:0,}" Apr 30 03:38:20.030823 containerd[1507]: time="2025-04-30T03:38:20.030702691Z" level=error msg="Failed to destroy network for sandbox \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:20.031190 containerd[1507]: time="2025-04-30T03:38:20.031149821Z" level=error msg="encountered an error cleaning up failed sandbox \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:20.031332 containerd[1507]: time="2025-04-30T03:38:20.031222577Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rjjf8,Uid:4d11b5f7-a801-4c03-8af8-692f5d9587bd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:20.053578 kubelet[2782]: E0430 03:38:20.053496 2782 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:20.054084 kubelet[2782]: E0430 03:38:20.053587 2782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rjjf8" Apr 30 03:38:20.054084 kubelet[2782]: E0430 03:38:20.053652 2782 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rjjf8" Apr 30 03:38:20.054084 kubelet[2782]: E0430 03:38:20.053728 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-rjjf8_calico-system(4d11b5f7-a801-4c03-8af8-692f5d9587bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rjjf8_calico-system(4d11b5f7-a801-4c03-8af8-692f5d9587bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rjjf8" podUID="4d11b5f7-a801-4c03-8af8-692f5d9587bd" Apr 30 03:38:20.113384 kubelet[2782]: I0430 03:38:20.113304 2782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" Apr 30 03:38:20.119269 kubelet[2782]: I0430 03:38:20.118677 2782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" Apr 30 03:38:20.121689 containerd[1507]: time="2025-04-30T03:38:20.121651067Z" level=info msg="StopPodSandbox for \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\"" Apr 30 03:38:20.128532 containerd[1507]: time="2025-04-30T03:38:20.128475134Z" level=info msg="Ensure that sandbox eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0 in task-service has been cleanup successfully" Apr 30 03:38:20.130840 containerd[1507]: time="2025-04-30T03:38:20.130750629Z" level=info msg="StopPodSandbox for \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\"" Apr 30 03:38:20.131147 containerd[1507]: time="2025-04-30T03:38:20.131109713Z" level=info msg="Ensure that sandbox ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3 in task-service has been cleanup successfully" Apr 30 03:38:20.134637 kubelet[2782]: I0430 03:38:20.134188 2782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" Apr 30 03:38:20.136701 containerd[1507]: time="2025-04-30T03:38:20.136604163Z" level=info msg="StopPodSandbox for \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\"" Apr 30 03:38:20.136996 containerd[1507]: time="2025-04-30T03:38:20.136934163Z" level=info msg="Ensure that sandbox 604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea in task-service has been cleanup successfully" Apr 30 03:38:20.142879 kubelet[2782]: I0430 03:38:20.142856 2782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" Apr 30 03:38:20.146595 containerd[1507]: time="2025-04-30T03:38:20.145889836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6z8bp,Uid:9c996608-b929-44c2-8ad9-06eca9de0734,Namespace:kube-system,Attempt:0,}" Apr 30 03:38:20.149664 containerd[1507]: time="2025-04-30T03:38:20.149514874Z" level=info msg="StopPodSandbox for \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\"" Apr 30 03:38:20.150683 containerd[1507]: time="2025-04-30T03:38:20.150642663Z" level=info msg="Ensure that sandbox ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49 in task-service has been cleanup successfully" Apr 30 03:38:20.157986 containerd[1507]: time="2025-04-30T03:38:20.157928689Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-qhm6w,Uid:c9c80f55-2e27-4f48-8d62-0a63647aa84e,Namespace:kube-system,Attempt:0,}" Apr 30 03:38:20.212007 containerd[1507]: time="2025-04-30T03:38:20.211881433Z" level=error msg="StopPodSandbox for \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\" failed" error="failed to destroy network for sandbox \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:20.212337 kubelet[2782]: E0430 03:38:20.212294 2782 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" Apr 30 03:38:20.212406 kubelet[2782]: E0430 03:38:20.212356 2782 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3"} Apr 30 03:38:20.212432 kubelet[2782]: E0430 03:38:20.212413 2782 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bba0c868-51a2-4e1d-8831-4113d7d9bdcd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:38:20.212513 kubelet[2782]: E0430 03:38:20.212441 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bba0c868-51a2-4e1d-8831-4113d7d9bdcd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c8cd954d7-t67wf" podUID="bba0c868-51a2-4e1d-8831-4113d7d9bdcd" Apr 30 03:38:20.238489 containerd[1507]: time="2025-04-30T03:38:20.238410454Z" level=error msg="StopPodSandbox for \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\" failed" error="failed to destroy network for sandbox \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:20.239985 kubelet[2782]: E0430 03:38:20.238699 2782 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" Apr 30 03:38:20.239985 kubelet[2782]: E0430 03:38:20.238751 2782 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0"} Apr 30 03:38:20.239985 kubelet[2782]: E0430 03:38:20.238835 2782 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4d11b5f7-a801-4c03-8af8-692f5d9587bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:38:20.239985 kubelet[2782]: E0430 03:38:20.238862 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4d11b5f7-a801-4c03-8af8-692f5d9587bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rjjf8" podUID="4d11b5f7-a801-4c03-8af8-692f5d9587bd" Apr 30 03:38:20.242949 containerd[1507]: time="2025-04-30T03:38:20.242891741Z" level=error msg="StopPodSandbox for \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\" failed" error="failed to destroy network for sandbox \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:20.243190 kubelet[2782]: E0430 03:38:20.243150 2782 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" Apr 30 03:38:20.243595 kubelet[2782]: E0430 03:38:20.243207 2782 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49"} Apr 30 03:38:20.243595 kubelet[2782]: E0430 03:38:20.243241 2782 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1a0a5d29-6bdf-4990-88c7-d46de9879e7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:38:20.243595 kubelet[2782]: E0430 03:38:20.243264 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1a0a5d29-6bdf-4990-88c7-d46de9879e7c\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d7977669f-4lt2m" podUID="1a0a5d29-6bdf-4990-88c7-d46de9879e7c" Apr 30 03:38:20.249008 containerd[1507]: time="2025-04-30T03:38:20.248972183Z" level=error msg="StopPodSandbox for \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\" failed" error="failed to destroy network for sandbox \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:20.249347 kubelet[2782]: E0430 03:38:20.249309 2782 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" Apr 30 03:38:20.249409 kubelet[2782]: E0430 03:38:20.249358 2782 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea"} Apr 30 03:38:20.249409 kubelet[2782]: E0430 03:38:20.249385 2782 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"19f198dd-9e67-40cc-982d-d60b573a979a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:38:20.249509 kubelet[2782]: E0430 03:38:20.249412 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"19f198dd-9e67-40cc-982d-d60b573a979a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d7977669f-tbr7c" podUID="19f198dd-9e67-40cc-982d-d60b573a979a" Apr 30 03:38:20.273747 containerd[1507]: time="2025-04-30T03:38:20.273699690Z" level=error msg="Failed to destroy network for sandbox \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:20.274099 containerd[1507]: time="2025-04-30T03:38:20.274000805Z" level=error msg="encountered an error cleaning up failed sandbox \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:20.274099 containerd[1507]: time="2025-04-30T03:38:20.274044287Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6z8bp,Uid:9c996608-b929-44c2-8ad9-06eca9de0734,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:20.274404 kubelet[2782]: E0430 03:38:20.274240 2782 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:20.274404 kubelet[2782]: E0430 03:38:20.274285 2782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6z8bp" Apr 30 03:38:20.274404 kubelet[2782]: E0430 03:38:20.274302 2782 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6z8bp" Apr 30 03:38:20.274670 kubelet[2782]: E0430 03:38:20.274337 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-6z8bp_kube-system(9c996608-b929-44c2-8ad9-06eca9de0734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-6z8bp_kube-system(9c996608-b929-44c2-8ad9-06eca9de0734)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6z8bp" podUID="9c996608-b929-44c2-8ad9-06eca9de0734" Apr 30 03:38:20.288959 containerd[1507]: time="2025-04-30T03:38:20.288903369Z" level=error msg="Failed to destroy network for sandbox \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:20.289275 containerd[1507]: time="2025-04-30T03:38:20.289244439Z" level=error msg="encountered an error cleaning up failed sandbox 
\"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:20.289329 containerd[1507]: time="2025-04-30T03:38:20.289299322Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qhm6w,Uid:c9c80f55-2e27-4f48-8d62-0a63647aa84e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:20.289534 kubelet[2782]: E0430 03:38:20.289499 2782 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:20.289581 kubelet[2782]: E0430 03:38:20.289560 2782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qhm6w" Apr 30 03:38:20.289643 kubelet[2782]: E0430 03:38:20.289579 2782 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qhm6w" Apr 30 03:38:20.289697 kubelet[2782]: E0430 03:38:20.289669 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-qhm6w_kube-system(c9c80f55-2e27-4f48-8d62-0a63647aa84e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-qhm6w_kube-system(c9c80f55-2e27-4f48-8d62-0a63647aa84e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-qhm6w" podUID="c9c80f55-2e27-4f48-8d62-0a63647aa84e" Apr 30 03:38:20.299295 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49-shm.mount: Deactivated successfully. Apr 30 03:38:20.299377 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea-shm.mount: Deactivated successfully. 
Apr 30 03:38:21.147524 kubelet[2782]: I0430 03:38:21.147456 2782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" Apr 30 03:38:21.149949 containerd[1507]: time="2025-04-30T03:38:21.148444995Z" level=info msg="StopPodSandbox for \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\"" Apr 30 03:38:21.149949 containerd[1507]: time="2025-04-30T03:38:21.148730171Z" level=info msg="Ensure that sandbox 89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc in task-service has been cleanup successfully" Apr 30 03:38:21.153857 kubelet[2782]: I0430 03:38:21.153644 2782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" Apr 30 03:38:21.155481 containerd[1507]: time="2025-04-30T03:38:21.155412022Z" level=info msg="StopPodSandbox for \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\"" Apr 30 03:38:21.156293 containerd[1507]: time="2025-04-30T03:38:21.156247852Z" level=info msg="Ensure that sandbox c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098 in task-service has been cleanup successfully" Apr 30 03:38:21.220165 containerd[1507]: time="2025-04-30T03:38:21.220092007Z" level=error msg="StopPodSandbox for \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\" failed" error="failed to destroy network for sandbox \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:21.220536 kubelet[2782]: E0430 03:38:21.220485 2782 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" Apr 30 03:38:21.220740 kubelet[2782]: E0430 03:38:21.220567 2782 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098"} Apr 30 03:38:21.220740 kubelet[2782]: E0430 03:38:21.220639 2782 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c996608-b929-44c2-8ad9-06eca9de0734\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:38:21.220740 kubelet[2782]: E0430 03:38:21.220671 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c996608-b929-44c2-8ad9-06eca9de0734\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6z8bp" podUID="9c996608-b929-44c2-8ad9-06eca9de0734" Apr 30 03:38:21.222605 containerd[1507]: time="2025-04-30T03:38:21.222478821Z" level=error msg="StopPodSandbox for \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\" failed" error="failed to destroy network for sandbox \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:21.222771 kubelet[2782]: E0430 03:38:21.222707 2782 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" Apr 30 03:38:21.222771 kubelet[2782]: E0430 03:38:21.222759 2782 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc"} Apr 30 03:38:21.222872 kubelet[2782]: E0430 03:38:21.222791 2782 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c9c80f55-2e27-4f48-8d62-0a63647aa84e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:38:21.222872 kubelet[2782]: E0430 03:38:21.222828 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c9c80f55-2e27-4f48-8d62-0a63647aa84e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-qhm6w" podUID="c9c80f55-2e27-4f48-8d62-0a63647aa84e" Apr 30 03:38:22.452349 kubelet[2782]: I0430 03:38:22.452270 2782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:38:26.392714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3326177219.mount: Deactivated successfully. 
Apr 30 03:38:26.519308 containerd[1507]: time="2025-04-30T03:38:26.492852936Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" Apr 30 03:38:26.526581 containerd[1507]: time="2025-04-30T03:38:26.510430938Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 7.392956885s" Apr 30 03:38:26.526581 containerd[1507]: time="2025-04-30T03:38:26.526481309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" Apr 30 03:38:26.543824 containerd[1507]: time="2025-04-30T03:38:26.543557979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:26.577403 containerd[1507]: time="2025-04-30T03:38:26.576972202Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:26.577530 containerd[1507]: time="2025-04-30T03:38:26.577446962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:26.593407 containerd[1507]: time="2025-04-30T03:38:26.593222489Z" level=info msg="CreateContainer within sandbox \"8f99c4e1e4c341b83a730c62d30a78879284d8399bee8d2e7c82f5f17fae2bf4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 03:38:26.645005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3715762102.mount: Deactivated successfully. Apr 30 03:38:26.671056 containerd[1507]: time="2025-04-30T03:38:26.670962049Z" level=info msg="CreateContainer within sandbox \"8f99c4e1e4c341b83a730c62d30a78879284d8399bee8d2e7c82f5f17fae2bf4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a7deb21bca6259da10762ef2db6a129ce0787c2e3c35c9829620b5db6a8bdef0\"" Apr 30 03:38:26.679464 containerd[1507]: time="2025-04-30T03:38:26.679403086Z" level=info msg="StartContainer for \"a7deb21bca6259da10762ef2db6a129ce0787c2e3c35c9829620b5db6a8bdef0\"" Apr 30 03:38:26.822971 systemd[1]: Started cri-containerd-a7deb21bca6259da10762ef2db6a129ce0787c2e3c35c9829620b5db6a8bdef0.scope - libcontainer container a7deb21bca6259da10762ef2db6a129ce0787c2e3c35c9829620b5db6a8bdef0. Apr 30 03:38:26.868501 containerd[1507]: time="2025-04-30T03:38:26.867961839Z" level=info msg="StartContainer for \"a7deb21bca6259da10762ef2db6a129ce0787c2e3c35c9829620b5db6a8bdef0\" returns successfully" Apr 30 03:38:26.950920 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 03:38:26.951024 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Apr 30 03:38:26.979251 systemd[1]: cri-containerd-a7deb21bca6259da10762ef2db6a129ce0787c2e3c35c9829620b5db6a8bdef0.scope: Deactivated successfully.
Apr 30 03:38:27.009759 containerd[1507]: time="2025-04-30T03:38:27.009678717Z" level=info msg="shim disconnected" id=a7deb21bca6259da10762ef2db6a129ce0787c2e3c35c9829620b5db6a8bdef0 namespace=k8s.io Apr 30 03:38:27.009759 containerd[1507]: time="2025-04-30T03:38:27.009748768Z" level=warning msg="cleaning up after shim disconnected" id=a7deb21bca6259da10762ef2db6a129ce0787c2e3c35c9829620b5db6a8bdef0 namespace=k8s.io Apr 30 03:38:27.009759 containerd[1507]: time="2025-04-30T03:38:27.009759939Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:38:27.228859 kubelet[2782]: I0430 03:38:27.228695 2782 scope.go:117] "RemoveContainer" containerID="a7deb21bca6259da10762ef2db6a129ce0787c2e3c35c9829620b5db6a8bdef0" Apr 30 03:38:27.253146 containerd[1507]: time="2025-04-30T03:38:27.252801801Z" level=info msg="CreateContainer within sandbox \"8f99c4e1e4c341b83a730c62d30a78879284d8399bee8d2e7c82f5f17fae2bf4\" for container &ContainerMetadata{Name:calico-node,Attempt:1,}" Apr 30 03:38:27.272969 containerd[1507]: time="2025-04-30T03:38:27.272853852Z" level=info msg="CreateContainer within sandbox \"8f99c4e1e4c341b83a730c62d30a78879284d8399bee8d2e7c82f5f17fae2bf4\" for &ContainerMetadata{Name:calico-node,Attempt:1,} returns container id \"8a844cb3515c6229193a80edc0e658ffc795cd8bda50b7c7661c4abc51a51220\"" Apr 30 03:38:27.273584 containerd[1507]: time="2025-04-30T03:38:27.273529331Z" level=info msg="StartContainer for \"8a844cb3515c6229193a80edc0e658ffc795cd8bda50b7c7661c4abc51a51220\"" Apr 30 03:38:27.308903 systemd[1]: Started cri-containerd-8a844cb3515c6229193a80edc0e658ffc795cd8bda50b7c7661c4abc51a51220.scope - libcontainer container 8a844cb3515c6229193a80edc0e658ffc795cd8bda50b7c7661c4abc51a51220. Apr 30 03:38:27.345674 containerd[1507]: time="2025-04-30T03:38:27.345350685Z" level=info msg="StartContainer for \"8a844cb3515c6229193a80edc0e658ffc795cd8bda50b7c7661c4abc51a51220\" returns successfully" Apr 30 03:38:27.431419 systemd[1]: cri-containerd-8a844cb3515c6229193a80edc0e658ffc795cd8bda50b7c7661c4abc51a51220.scope: Deactivated successfully. Apr 30 03:38:27.458820 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a844cb3515c6229193a80edc0e658ffc795cd8bda50b7c7661c4abc51a51220-rootfs.mount: Deactivated successfully. 
Apr 30 03:38:27.467020 containerd[1507]: time="2025-04-30T03:38:27.466948421Z" level=info msg="shim disconnected" id=8a844cb3515c6229193a80edc0e658ffc795cd8bda50b7c7661c4abc51a51220 namespace=k8s.io Apr 30 03:38:27.467020 containerd[1507]: time="2025-04-30T03:38:27.467009826Z" level=warning msg="cleaning up after shim disconnected" id=8a844cb3515c6229193a80edc0e658ffc795cd8bda50b7c7661c4abc51a51220 namespace=k8s.io Apr 30 03:38:27.467020 containerd[1507]: time="2025-04-30T03:38:27.467019344Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:38:27.477930 containerd[1507]: time="2025-04-30T03:38:27.477843207Z" level=warning msg="cleanup warnings time=\"2025-04-30T03:38:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 03:38:28.177604 kubelet[2782]: I0430 03:38:28.177555 2782 scope.go:117] "RemoveContainer" containerID="a7deb21bca6259da10762ef2db6a129ce0787c2e3c35c9829620b5db6a8bdef0" Apr 30 03:38:28.177604 kubelet[2782]: I0430 03:38:28.178087 2782 scope.go:117] "RemoveContainer" containerID="8a844cb3515c6229193a80edc0e658ffc795cd8bda50b7c7661c4abc51a51220" Apr 30 03:38:28.177604 kubelet[2782]: E0430 03:38:28.178733 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-958wz_calico-system(bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02)\"" pod="calico-system/calico-node-958wz" podUID="bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02" Apr 30 03:38:28.224752 containerd[1507]: time="2025-04-30T03:38:28.224677929Z" level=info msg="RemoveContainer for \"a7deb21bca6259da10762ef2db6a129ce0787c2e3c35c9829620b5db6a8bdef0\"" Apr 30 03:38:28.239623 containerd[1507]: time="2025-04-30T03:38:28.239576257Z" level=info msg="RemoveContainer for \"a7deb21bca6259da10762ef2db6a129ce0787c2e3c35c9829620b5db6a8bdef0\" returns successfully" Apr 30 03:38:29.192211 kubelet[2782]: I0430 03:38:29.191118 2782 scope.go:117] "RemoveContainer" containerID="8a844cb3515c6229193a80edc0e658ffc795cd8bda50b7c7661c4abc51a51220" Apr 30 03:38:29.192211 kubelet[2782]: E0430 03:38:29.191924 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-958wz_calico-system(bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02)\"" pod="calico-system/calico-node-958wz" podUID="bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02" Apr 30 03:38:31.937445 containerd[1507]: time="2025-04-30T03:38:31.936778537Z" level=info msg="StopPodSandbox for \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\"" Apr 30 03:38:31.980408 containerd[1507]: time="2025-04-30T03:38:31.980281006Z" level=error msg="StopPodSandbox for \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\" failed" error="failed to destroy network for sandbox \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:31.980698 kubelet[2782]: E0430 03:38:31.980589 2782 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\": 
plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" Apr 30 03:38:31.981168 kubelet[2782]: E0430 03:38:31.980697 2782 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49"} Apr 30 03:38:31.981168 kubelet[2782]: E0430 03:38:31.980742 2782 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1a0a5d29-6bdf-4990-88c7-d46de9879e7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:38:31.981168 kubelet[2782]: E0430 03:38:31.980779 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1a0a5d29-6bdf-4990-88c7-d46de9879e7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d7977669f-4lt2m" podUID="1a0a5d29-6bdf-4990-88c7-d46de9879e7c" Apr 30 03:38:32.932693 containerd[1507]: time="2025-04-30T03:38:32.932557522Z" level=info msg="StopPodSandbox for \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\"" Apr 30 03:38:32.980666 containerd[1507]: time="2025-04-30T03:38:32.980508100Z" level=error msg="StopPodSandbox for \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\" failed" error="failed to destroy network for sandbox \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:32.981368 kubelet[2782]: E0430 03:38:32.980904 2782 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" Apr 30 03:38:32.981368 kubelet[2782]: E0430 03:38:32.981002 2782 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098"} Apr 30 03:38:32.981368 kubelet[2782]: E0430 03:38:32.981057 2782 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c996608-b929-44c2-8ad9-06eca9de0734\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:38:32.981368 kubelet[2782]: E0430 03:38:32.981096 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c996608-b929-44c2-8ad9-06eca9de0734\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6z8bp" podUID="9c996608-b929-44c2-8ad9-06eca9de0734" Apr 30 03:38:33.934808 containerd[1507]: time="2025-04-30T03:38:33.933027316Z" level=info msg="StopPodSandbox for \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\"" Apr 30 03:38:33.972491 containerd[1507]: time="2025-04-30T03:38:33.972406662Z" level=error msg="StopPodSandbox for \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\" failed" error="failed to destroy network for sandbox \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:33.972818 kubelet[2782]: E0430 03:38:33.972744 2782 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" Apr 30 03:38:33.972903 kubelet[2782]: E0430 03:38:33.972818 2782 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc"} Apr 30 03:38:33.972903 kubelet[2782]: E0430 03:38:33.972870 2782 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c9c80f55-2e27-4f48-8d62-0a63647aa84e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:38:33.973058 kubelet[2782]: E0430 03:38:33.972910 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c9c80f55-2e27-4f48-8d62-0a63647aa84e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-qhm6w" podUID="c9c80f55-2e27-4f48-8d62-0a63647aa84e" Apr 30 03:38:34.933130 containerd[1507]: time="2025-04-30T03:38:34.932682525Z" level=info msg="StopPodSandbox for 
\"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\"" Apr 30 03:38:34.934652 containerd[1507]: time="2025-04-30T03:38:34.933812138Z" level=info msg="StopPodSandbox for \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\"" Apr 30 03:38:34.937456 containerd[1507]: time="2025-04-30T03:38:34.937404857Z" level=info msg="StopPodSandbox for \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\"" Apr 30 03:38:35.000960 containerd[1507]: time="2025-04-30T03:38:35.000860679Z" level=error msg="StopPodSandbox for \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\" failed" error="failed to destroy network for sandbox \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:35.001591 kubelet[2782]: E0430 03:38:35.001551 2782 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" Apr 30 03:38:35.002051 kubelet[2782]: E0430 03:38:35.002023 2782 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea"} Apr 30 03:38:35.002280 kubelet[2782]: E0430 03:38:35.002193 2782 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"19f198dd-9e67-40cc-982d-d60b573a979a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:38:35.002280 kubelet[2782]: E0430 03:38:35.002244 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"19f198dd-9e67-40cc-982d-d60b573a979a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d7977669f-tbr7c" podUID="19f198dd-9e67-40cc-982d-d60b573a979a" Apr 30 03:38:35.007670 containerd[1507]: time="2025-04-30T03:38:35.007442316Z" level=error msg="StopPodSandbox for \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\" failed" error="failed to destroy network for sandbox \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:35.007670 containerd[1507]: time="2025-04-30T03:38:35.007456792Z" level=error msg="StopPodSandbox for 
\"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\" failed" error="failed to destroy network for sandbox \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:35.008391 kubelet[2782]: E0430 03:38:35.007771 2782 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" Apr 30 03:38:35.008391 kubelet[2782]: E0430 03:38:35.007842 2782 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3"} Apr 30 03:38:35.008391 kubelet[2782]: E0430 03:38:35.007841 2782 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" Apr 30 03:38:35.008391 kubelet[2782]: E0430 03:38:35.007876 2782 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0"} Apr 30 03:38:35.008391 kubelet[2782]: E0430 03:38:35.007879 2782 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bba0c868-51a2-4e1d-8831-4113d7d9bdcd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:38:35.008598 kubelet[2782]: E0430 03:38:35.007908 2782 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4d11b5f7-a801-4c03-8af8-692f5d9587bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:38:35.008598 kubelet[2782]: E0430 03:38:35.007922 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bba0c868-51a2-4e1d-8831-4113d7d9bdcd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c8cd954d7-t67wf" podUID="bba0c868-51a2-4e1d-8831-4113d7d9bdcd" Apr 30 03:38:35.008598 kubelet[2782]: E0430 03:38:35.007930 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4d11b5f7-a801-4c03-8af8-692f5d9587bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rjjf8" podUID="4d11b5f7-a801-4c03-8af8-692f5d9587bd" Apr 30 03:38:43.934688 kubelet[2782]: I0430 03:38:43.932699 2782 scope.go:117] "RemoveContainer" containerID="8a844cb3515c6229193a80edc0e658ffc795cd8bda50b7c7661c4abc51a51220" Apr 30 03:38:43.986885 containerd[1507]: time="2025-04-30T03:38:43.986784674Z" level=info msg="CreateContainer within sandbox \"8f99c4e1e4c341b83a730c62d30a78879284d8399bee8d2e7c82f5f17fae2bf4\" for container &ContainerMetadata{Name:calico-node,Attempt:2,}" Apr 30 03:38:44.010284 containerd[1507]: time="2025-04-30T03:38:44.010154707Z" level=info msg="CreateContainer within sandbox \"8f99c4e1e4c341b83a730c62d30a78879284d8399bee8d2e7c82f5f17fae2bf4\" for &ContainerMetadata{Name:calico-node,Attempt:2,} returns container id \"2a4dee00f54bd105834ad0f68d33ec46787ecf3663214c3faa6e2d23d28dbe57\"" Apr 30 03:38:44.012006 containerd[1507]: time="2025-04-30T03:38:44.011346255Z" level=info msg="StartContainer for \"2a4dee00f54bd105834ad0f68d33ec46787ecf3663214c3faa6e2d23d28dbe57\"" Apr 30 03:38:44.058005 systemd[1]: Started cri-containerd-2a4dee00f54bd105834ad0f68d33ec46787ecf3663214c3faa6e2d23d28dbe57.scope - libcontainer container 2a4dee00f54bd105834ad0f68d33ec46787ecf3663214c3faa6e2d23d28dbe57. Apr 30 03:38:44.089240 containerd[1507]: time="2025-04-30T03:38:44.089180888Z" level=info msg="StartContainer for \"2a4dee00f54bd105834ad0f68d33ec46787ecf3663214c3faa6e2d23d28dbe57\" returns successfully" Apr 30 03:38:44.160779 systemd[1]: cri-containerd-2a4dee00f54bd105834ad0f68d33ec46787ecf3663214c3faa6e2d23d28dbe57.scope: Deactivated successfully. Apr 30 03:38:44.183378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a4dee00f54bd105834ad0f68d33ec46787ecf3663214c3faa6e2d23d28dbe57-rootfs.mount: Deactivated successfully. 
Apr 30 03:38:44.200992 containerd[1507]: time="2025-04-30T03:38:44.200847913Z" level=info msg="shim disconnected" id=2a4dee00f54bd105834ad0f68d33ec46787ecf3663214c3faa6e2d23d28dbe57 namespace=k8s.io Apr 30 03:38:44.200992 containerd[1507]: time="2025-04-30T03:38:44.200942140Z" level=warning msg="cleaning up after shim disconnected" id=2a4dee00f54bd105834ad0f68d33ec46787ecf3663214c3faa6e2d23d28dbe57 namespace=k8s.io Apr 30 03:38:44.200992 containerd[1507]: time="2025-04-30T03:38:44.200955325Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:38:44.258960 kubelet[2782]: I0430 03:38:44.257545 2782 scope.go:117] "RemoveContainer" containerID="8a844cb3515c6229193a80edc0e658ffc795cd8bda50b7c7661c4abc51a51220" Apr 30 03:38:44.258960 kubelet[2782]: I0430 03:38:44.257805 2782 scope.go:117] "RemoveContainer" containerID="2a4dee00f54bd105834ad0f68d33ec46787ecf3663214c3faa6e2d23d28dbe57" Apr 30 03:38:44.258960 kubelet[2782]: E0430 03:38:44.258220 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-958wz_calico-system(bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02)\"" pod="calico-system/calico-node-958wz" podUID="bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02" Apr 30 03:38:44.261580 containerd[1507]: time="2025-04-30T03:38:44.260839698Z" level=info msg="RemoveContainer for \"8a844cb3515c6229193a80edc0e658ffc795cd8bda50b7c7661c4abc51a51220\"" Apr 30 03:38:44.270303 containerd[1507]: time="2025-04-30T03:38:44.270192463Z" level=info msg="RemoveContainer for \"8a844cb3515c6229193a80edc0e658ffc795cd8bda50b7c7661c4abc51a51220\" returns successfully" Apr 30 03:38:44.933229 containerd[1507]: time="2025-04-30T03:38:44.932944226Z" level=info msg="StopPodSandbox for \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\"" Apr 30 03:38:44.959406 containerd[1507]: time="2025-04-30T03:38:44.959338711Z" level=error msg="StopPodSandbox for \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\" failed" error="failed to destroy network for sandbox \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:44.960139 kubelet[2782]: E0430 03:38:44.959901 2782 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" Apr 30 03:38:44.960139 kubelet[2782]: E0430 03:38:44.959976 2782 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098"} Apr 30 03:38:44.960139 kubelet[2782]: E0430 03:38:44.960028 2782 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c996608-b929-44c2-8ad9-06eca9de0734\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:38:44.960139 kubelet[2782]: E0430 03:38:44.960067 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c996608-b929-44c2-8ad9-06eca9de0734\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6z8bp" podUID="9c996608-b929-44c2-8ad9-06eca9de0734" Apr 30 03:38:45.933981 containerd[1507]: time="2025-04-30T03:38:45.932928803Z" level=info msg="StopPodSandbox for \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\"" Apr 30 03:38:45.971177 containerd[1507]: time="2025-04-30T03:38:45.971061726Z" level=error msg="StopPodSandbox for \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\" failed" error="failed to destroy network for sandbox \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:45.971500 kubelet[2782]: E0430 03:38:45.971430 2782 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" Apr 30 03:38:45.971500 kubelet[2782]: E0430 03:38:45.971512 2782 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0"} Apr 30 03:38:45.972038 kubelet[2782]: E0430 03:38:45.971570 2782 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4d11b5f7-a801-4c03-8af8-692f5d9587bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:38:45.972038 kubelet[2782]: E0430 03:38:45.971686 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4d11b5f7-a801-4c03-8af8-692f5d9587bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rjjf8" podUID="4d11b5f7-a801-4c03-8af8-692f5d9587bd" Apr 30 03:38:46.933466 containerd[1507]: time="2025-04-30T03:38:46.933394642Z" level=info msg="StopPodSandbox for 
\"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\"" Apr 30 03:38:46.978909 containerd[1507]: time="2025-04-30T03:38:46.978773611Z" level=error msg="StopPodSandbox for \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\" failed" error="failed to destroy network for sandbox \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:46.979888 kubelet[2782]: E0430 03:38:46.979138 2782 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" Apr 30 03:38:46.979888 kubelet[2782]: E0430 03:38:46.979212 2782 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49"} Apr 30 03:38:46.979888 kubelet[2782]: E0430 03:38:46.979268 2782 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1a0a5d29-6bdf-4990-88c7-d46de9879e7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:38:46.979888 kubelet[2782]: E0430 03:38:46.979306 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1a0a5d29-6bdf-4990-88c7-d46de9879e7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d7977669f-4lt2m" podUID="1a0a5d29-6bdf-4990-88c7-d46de9879e7c" Apr 30 03:38:47.935034 containerd[1507]: time="2025-04-30T03:38:47.933335427Z" level=info msg="StopPodSandbox for \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\"" Apr 30 03:38:47.935034 containerd[1507]: time="2025-04-30T03:38:47.934098030Z" level=info msg="StopPodSandbox for \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\"" Apr 30 03:38:47.992912 containerd[1507]: time="2025-04-30T03:38:47.992718122Z" level=error msg="StopPodSandbox for \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\" failed" error="failed to destroy network for sandbox \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:47.994092 kubelet[2782]: E0430 03:38:47.993865 2782 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" Apr 30 03:38:47.994092 kubelet[2782]: E0430 03:38:47.993939 2782 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3"} Apr 30 03:38:47.994092 kubelet[2782]: E0430 03:38:47.994012 2782 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bba0c868-51a2-4e1d-8831-4113d7d9bdcd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:38:47.994092 kubelet[2782]: E0430 03:38:47.994052 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bba0c868-51a2-4e1d-8831-4113d7d9bdcd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c8cd954d7-t67wf" podUID="bba0c868-51a2-4e1d-8831-4113d7d9bdcd" Apr 30 03:38:47.998387 containerd[1507]: time="2025-04-30T03:38:47.998289050Z" level=error msg="StopPodSandbox for \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\" failed" error="failed to destroy network for sandbox \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:47.998796 kubelet[2782]: E0430 03:38:47.998592 2782 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" Apr 30 03:38:47.998796 kubelet[2782]: E0430 03:38:47.998674 2782 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc"} Apr 30 03:38:47.998796 kubelet[2782]: E0430 03:38:47.998717 2782 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c9c80f55-2e27-4f48-8d62-0a63647aa84e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:38:47.998796 kubelet[2782]: E0430 03:38:47.998749 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c9c80f55-2e27-4f48-8d62-0a63647aa84e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-qhm6w" podUID="c9c80f55-2e27-4f48-8d62-0a63647aa84e" Apr 30 03:38:49.935134 containerd[1507]: time="2025-04-30T03:38:49.935046928Z" level=info msg="StopPodSandbox for \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\"" Apr 30 03:38:49.984206 containerd[1507]: time="2025-04-30T03:38:49.984091829Z" level=error msg="StopPodSandbox for \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\" failed" error="failed to destroy network for sandbox \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:49.984551 kubelet[2782]: E0430 03:38:49.984471 2782 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" Apr 30 03:38:49.985122 kubelet[2782]: E0430 03:38:49.984548 2782 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea"} Apr 30 03:38:49.985122 kubelet[2782]: E0430 03:38:49.984641 2782 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"19f198dd-9e67-40cc-982d-d60b573a979a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:38:49.985122 kubelet[2782]: E0430 03:38:49.984706 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"19f198dd-9e67-40cc-982d-d60b573a979a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d7977669f-tbr7c" podUID="19f198dd-9e67-40cc-982d-d60b573a979a" Apr 30 03:38:50.053089 kubelet[2782]: I0430 03:38:50.053018 2782 scope.go:117] "RemoveContainer" 
containerID="2a4dee00f54bd105834ad0f68d33ec46787ecf3663214c3faa6e2d23d28dbe57" Apr 30 03:38:50.054882 kubelet[2782]: E0430 03:38:50.054184 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-958wz_calico-system(bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02)\"" pod="calico-system/calico-node-958wz" podUID="bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02" Apr 30 03:38:56.734582 kubelet[2782]: I0430 03:38:56.734510 2782 scope.go:117] "RemoveContainer" containerID="2a4dee00f54bd105834ad0f68d33ec46787ecf3663214c3faa6e2d23d28dbe57" Apr 30 03:38:56.747088 kubelet[2782]: E0430 03:38:56.747004 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-958wz_calico-system(bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02)\"" pod="calico-system/calico-node-958wz" podUID="bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02" Apr 30 03:38:56.934231 containerd[1507]: time="2025-04-30T03:38:56.932847225Z" level=info msg="StopPodSandbox for \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\"" Apr 30 03:38:56.979903 containerd[1507]: time="2025-04-30T03:38:56.979803575Z" level=error msg="StopPodSandbox for \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\" failed" error="failed to destroy network for sandbox \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:56.980208 kubelet[2782]: E0430 03:38:56.980139 2782 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" Apr 30 03:38:56.980321 kubelet[2782]: E0430 03:38:56.980215 2782 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098"} Apr 30 03:38:56.980321 kubelet[2782]: E0430 03:38:56.980289 2782 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c996608-b929-44c2-8ad9-06eca9de0734\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:38:56.980476 kubelet[2782]: E0430 03:38:56.980387 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c996608-b929-44c2-8ad9-06eca9de0734\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6z8bp" podUID="9c996608-b929-44c2-8ad9-06eca9de0734" Apr 30 03:38:57.934605 containerd[1507]: time="2025-04-30T03:38:57.934426129Z" level=info msg="StopPodSandbox for \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\"" Apr 30 03:38:57.970995 containerd[1507]: time="2025-04-30T03:38:57.970928905Z" level=error msg="StopPodSandbox for \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\" failed" error="failed to destroy network for sandbox \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:57.971231 kubelet[2782]: E0430 03:38:57.971174 2782 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" Apr 30 03:38:57.971719 kubelet[2782]: E0430 03:38:57.971229 2782 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49"} Apr 30 03:38:57.971719 kubelet[2782]: E0430 03:38:57.971282 2782 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1a0a5d29-6bdf-4990-88c7-d46de9879e7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:38:57.971719 kubelet[2782]: E0430 03:38:57.971314 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1a0a5d29-6bdf-4990-88c7-d46de9879e7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d7977669f-4lt2m" podUID="1a0a5d29-6bdf-4990-88c7-d46de9879e7c" Apr 30 03:38:59.933737 containerd[1507]: time="2025-04-30T03:38:59.933247696Z" level=info msg="StopPodSandbox for \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\"" Apr 30 03:38:59.938795 containerd[1507]: time="2025-04-30T03:38:59.938161019Z" level=info msg="StopPodSandbox for \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\"" Apr 30 03:38:59.994401 containerd[1507]: time="2025-04-30T03:38:59.994009332Z" level=error msg="StopPodSandbox for \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\" failed" error="failed to destroy network for sandbox \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:59.995025 kubelet[2782]: E0430 03:38:59.994853 2782 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" Apr 30 03:38:59.995025 kubelet[2782]: E0430 03:38:59.994916 2782 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc"} Apr 30 03:38:59.996182 kubelet[2782]: E0430 03:38:59.994967 2782 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c9c80f55-2e27-4f48-8d62-0a63647aa84e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:38:59.996182 kubelet[2782]: E0430 03:38:59.995195 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c9c80f55-2e27-4f48-8d62-0a63647aa84e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-qhm6w" podUID="c9c80f55-2e27-4f48-8d62-0a63647aa84e" Apr 30 03:38:59.996424 containerd[1507]: time="2025-04-30T03:38:59.995597897Z" level=error msg="StopPodSandbox for \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\" failed" error="failed to destroy network for sandbox \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:38:59.997356 kubelet[2782]: E0430 03:38:59.996832 2782 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" Apr 30 03:38:59.997356 kubelet[2782]: E0430 03:38:59.996877 2782 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0"} Apr 30 03:38:59.997356 kubelet[2782]: E0430 03:38:59.996914 2782 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"4d11b5f7-a801-4c03-8af8-692f5d9587bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:38:59.997356 kubelet[2782]: E0430 03:38:59.996946 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4d11b5f7-a801-4c03-8af8-692f5d9587bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rjjf8" podUID="4d11b5f7-a801-4c03-8af8-692f5d9587bd" Apr 30 03:39:01.934109 containerd[1507]: time="2025-04-30T03:39:01.933907362Z" level=info msg="StopPodSandbox for \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\"" Apr 30 03:39:01.977947 containerd[1507]: time="2025-04-30T03:39:01.977817801Z" level=error msg="StopPodSandbox for \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\" failed" error="failed to destroy network for sandbox \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:01.978204 kubelet[2782]: E0430 03:39:01.978142 2782 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" Apr 30 03:39:01.978671 kubelet[2782]: E0430 03:39:01.978210 2782 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3"} Apr 30 03:39:01.978671 kubelet[2782]: E0430 03:39:01.978269 2782 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bba0c868-51a2-4e1d-8831-4113d7d9bdcd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:39:01.978671 kubelet[2782]: E0430 03:39:01.978325 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bba0c868-51a2-4e1d-8831-4113d7d9bdcd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c8cd954d7-t67wf" podUID="bba0c868-51a2-4e1d-8831-4113d7d9bdcd" Apr 30 03:39:02.933011 containerd[1507]: time="2025-04-30T03:39:02.932500753Z" level=info msg="StopPodSandbox for \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\"" Apr 30 03:39:02.974006 containerd[1507]: time="2025-04-30T03:39:02.973902608Z" level=error msg="StopPodSandbox for \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\" failed" error="failed to destroy network for sandbox \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:02.974918 kubelet[2782]: E0430 03:39:02.974172 2782 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" Apr 30 03:39:02.974918 kubelet[2782]: E0430 03:39:02.974240 2782 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea"} Apr 30 03:39:02.974918 kubelet[2782]: E0430 03:39:02.974291 2782 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"19f198dd-9e67-40cc-982d-d60b573a979a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:39:02.974918 kubelet[2782]: E0430 03:39:02.974348 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"19f198dd-9e67-40cc-982d-d60b573a979a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d7977669f-tbr7c" podUID="19f198dd-9e67-40cc-982d-d60b573a979a" Apr 30 03:39:06.651600 containerd[1507]: time="2025-04-30T03:39:06.651541783Z" level=info msg="StopPodSandbox for \"8f99c4e1e4c341b83a730c62d30a78879284d8399bee8d2e7c82f5f17fae2bf4\"" Apr 30 03:39:06.652014 containerd[1507]: time="2025-04-30T03:39:06.651659651Z" level=info msg="Container to stop \"aa07b4db01960fbefec8614b5e03dd6704e3acee8ffb350aa4246a5dbe9bc8ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:39:06.652014 containerd[1507]: time="2025-04-30T03:39:06.651685168Z" level=info msg="Container to stop \"2a4dee00f54bd105834ad0f68d33ec46787ecf3663214c3faa6e2d23d28dbe57\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:39:06.652014 containerd[1507]: 
time="2025-04-30T03:39:06.651704484Z" level=info msg="Container to stop \"02de56d18d5565595efe745938e7d904a7144b5010085a7b6c4b40e97ee9bffd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:39:06.655049 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f99c4e1e4c341b83a730c62d30a78879284d8399bee8d2e7c82f5f17fae2bf4-shm.mount: Deactivated successfully. Apr 30 03:39:06.667705 systemd[1]: cri-containerd-8f99c4e1e4c341b83a730c62d30a78879284d8399bee8d2e7c82f5f17fae2bf4.scope: Deactivated successfully. Apr 30 03:39:06.700893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f99c4e1e4c341b83a730c62d30a78879284d8399bee8d2e7c82f5f17fae2bf4-rootfs.mount: Deactivated successfully. Apr 30 03:39:06.720217 containerd[1507]: time="2025-04-30T03:39:06.719938313Z" level=info msg="shim disconnected" id=8f99c4e1e4c341b83a730c62d30a78879284d8399bee8d2e7c82f5f17fae2bf4 namespace=k8s.io Apr 30 03:39:06.720217 containerd[1507]: time="2025-04-30T03:39:06.719992172Z" level=warning msg="cleaning up after shim disconnected" id=8f99c4e1e4c341b83a730c62d30a78879284d8399bee8d2e7c82f5f17fae2bf4 namespace=k8s.io Apr 30 03:39:06.720217 containerd[1507]: time="2025-04-30T03:39:06.719999135Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:39:06.739597 containerd[1507]: time="2025-04-30T03:39:06.739384633Z" level=info msg="TearDown network for sandbox \"8f99c4e1e4c341b83a730c62d30a78879284d8399bee8d2e7c82f5f17fae2bf4\" successfully" Apr 30 03:39:06.739597 containerd[1507]: time="2025-04-30T03:39:06.739424256Z" level=info msg="StopPodSandbox for \"8f99c4e1e4c341b83a730c62d30a78879284d8399bee8d2e7c82f5f17fae2bf4\" returns successfully" Apr 30 03:39:06.782355 kubelet[2782]: I0430 03:39:06.782164 2782 topology_manager.go:215] "Topology Admit Handler" podUID="f8c663e1-1826-4aa1-84f0-7f78cd166b4b" podNamespace="calico-system" podName="calico-node-2xvwn" Apr 30 03:39:06.782355 kubelet[2782]: E0430 03:39:06.782250 2782 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02" containerName="calico-node" Apr 30 03:39:06.782355 kubelet[2782]: E0430 03:39:06.782258 2782 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02" containerName="calico-node" Apr 30 03:39:06.782355 kubelet[2782]: E0430 03:39:06.782265 2782 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02" containerName="flexvol-driver" Apr 30 03:39:06.782355 kubelet[2782]: E0430 03:39:06.782270 2782 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02" containerName="install-cni" Apr 30 03:39:06.782355 kubelet[2782]: E0430 03:39:06.782274 2782 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02" containerName="calico-node" Apr 30 03:39:06.793080 kubelet[2782]: I0430 03:39:06.793047 2782 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02" containerName="calico-node" Apr 30 03:39:06.793238 kubelet[2782]: I0430 03:39:06.793230 2782 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02" containerName="calico-node" Apr 30 03:39:06.793278 kubelet[2782]: I0430 03:39:06.793272 2782 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02" containerName="calico-node" Apr 30 03:39:06.798912 kubelet[2782]: I0430 
03:39:06.798707 2782 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-policysync\") pod \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\" (UID: \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\") " Apr 30 03:39:06.799086 kubelet[2782]: I0430 03:39:06.799074 2782 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-tigera-ca-bundle\") pod \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\" (UID: \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\") " Apr 30 03:39:06.799177 kubelet[2782]: I0430 03:39:06.799169 2782 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-node-certs\") pod \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\" (UID: \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\") " Apr 30 03:39:06.799227 kubelet[2782]: I0430 03:39:06.799219 2782 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-lib-modules\") pod \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\" (UID: \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\") " Apr 30 03:39:06.799332 kubelet[2782]: I0430 03:39:06.799320 2782 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-var-run-calico\") pod \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\" (UID: \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\") " Apr 30 03:39:06.799388 kubelet[2782]: I0430 03:39:06.799381 2782 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-cni-log-dir\") pod \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\" (UID: \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\") " Apr 30 03:39:06.799436 kubelet[2782]: I0430 03:39:06.799429 2782 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-cni-bin-dir\") pod \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\" (UID: \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\") " Apr 30 03:39:06.799484 kubelet[2782]: I0430 03:39:06.799476 2782 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-cni-net-dir\") pod \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\" (UID: \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\") " Apr 30 03:39:06.799530 kubelet[2782]: I0430 03:39:06.799523 2782 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-xtables-lock\") pod \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\" (UID: \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\") " Apr 30 03:39:06.799586 kubelet[2782]: I0430 03:39:06.799577 2782 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-var-lib-calico\") pod \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\" (UID: \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\") " Apr 30 03:39:06.799649 kubelet[2782]: I0430 03:39:06.799639 2782 reconciler_common.go:161] "operationExecutor.UnmountVolume started 
for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-flexvol-driver-host\") pod \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\" (UID: \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\") " Apr 30 03:39:06.799704 kubelet[2782]: I0430 03:39:06.799697 2782 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zldnb\" (UniqueName: \"kubernetes.io/projected/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-kube-api-access-zldnb\") pod \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\" (UID: \"bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02\") " Apr 30 03:39:06.800198 kubelet[2782]: I0430 03:39:06.798747 2782 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-policysync" (OuterVolumeSpecName: "policysync") pod "bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02" (UID: "bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:39:06.800863 kubelet[2782]: I0430 03:39:06.800297 2782 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02" (UID: "bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:39:06.803086 systemd[1]: Created slice kubepods-besteffort-podf8c663e1_1826_4aa1_84f0_7f78cd166b4b.slice - libcontainer container kubepods-besteffort-podf8c663e1_1826_4aa1_84f0_7f78cd166b4b.slice. Apr 30 03:39:06.807740 kubelet[2782]: I0430 03:39:06.807714 2782 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-kube-api-access-zldnb" (OuterVolumeSpecName: "kube-api-access-zldnb") pod "bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02" (UID: "bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02"). InnerVolumeSpecName "kube-api-access-zldnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 03:39:06.807850 kubelet[2782]: I0430 03:39:06.807839 2782 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02" (UID: "bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:39:06.807906 kubelet[2782]: I0430 03:39:06.807897 2782 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02" (UID: "bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:39:06.807964 kubelet[2782]: I0430 03:39:06.807956 2782 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02" (UID: "bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:39:06.808016 kubelet[2782]: I0430 03:39:06.808008 2782 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02" (UID: "bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:39:06.808076 kubelet[2782]: I0430 03:39:06.808061 2782 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02" (UID: "bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:39:06.810698 kubelet[2782]: I0430 03:39:06.810579 2782 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02" (UID: "bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 03:39:06.812194 systemd[1]: var-lib-kubelet-pods-bfa7f9ba\x2dcc22\x2d4311\x2d91e3\x2d5a0a4bb7af02-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzldnb.mount: Deactivated successfully. Apr 30 03:39:06.815166 kubelet[2782]: I0430 03:39:06.813165 2782 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-node-certs" (OuterVolumeSpecName: "node-certs") pod "bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02" (UID: "bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 30 03:39:06.815166 kubelet[2782]: I0430 03:39:06.813205 2782 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02" (UID: "bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:39:06.815166 kubelet[2782]: I0430 03:39:06.813217 2782 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02" (UID: "bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:39:06.817533 systemd[1]: var-lib-kubelet-pods-bfa7f9ba\x2dcc22\x2d4311\x2d91e3\x2d5a0a4bb7af02-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Apr 30 03:39:06.817638 systemd[1]: var-lib-kubelet-pods-bfa7f9ba\x2dcc22\x2d4311\x2d91e3\x2d5a0a4bb7af02-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
Apr 30 03:39:06.900329 kubelet[2782]: I0430 03:39:06.900248 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f8c663e1-1826-4aa1-84f0-7f78cd166b4b-cni-log-dir\") pod \"calico-node-2xvwn\" (UID: \"f8c663e1-1826-4aa1-84f0-7f78cd166b4b\") " pod="calico-system/calico-node-2xvwn" Apr 30 03:39:06.900329 kubelet[2782]: I0430 03:39:06.900335 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8c663e1-1826-4aa1-84f0-7f78cd166b4b-tigera-ca-bundle\") pod \"calico-node-2xvwn\" (UID: \"f8c663e1-1826-4aa1-84f0-7f78cd166b4b\") " pod="calico-system/calico-node-2xvwn" Apr 30 03:39:06.900329 kubelet[2782]: I0430 03:39:06.900368 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f8c663e1-1826-4aa1-84f0-7f78cd166b4b-cni-bin-dir\") pod \"calico-node-2xvwn\" (UID: \"f8c663e1-1826-4aa1-84f0-7f78cd166b4b\") " pod="calico-system/calico-node-2xvwn" Apr 30 03:39:06.901007 kubelet[2782]: I0430 03:39:06.900396 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8c663e1-1826-4aa1-84f0-7f78cd166b4b-lib-modules\") pod \"calico-node-2xvwn\" (UID: \"f8c663e1-1826-4aa1-84f0-7f78cd166b4b\") " pod="calico-system/calico-node-2xvwn" Apr 30 03:39:06.901007 kubelet[2782]: I0430 03:39:06.900422 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f8c663e1-1826-4aa1-84f0-7f78cd166b4b-var-lib-calico\") pod \"calico-node-2xvwn\" (UID: \"f8c663e1-1826-4aa1-84f0-7f78cd166b4b\") " pod="calico-system/calico-node-2xvwn" Apr 30 03:39:06.901007 kubelet[2782]: I0430 03:39:06.900450 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f8c663e1-1826-4aa1-84f0-7f78cd166b4b-flexvol-driver-host\") pod \"calico-node-2xvwn\" (UID: \"f8c663e1-1826-4aa1-84f0-7f78cd166b4b\") " pod="calico-system/calico-node-2xvwn" Apr 30 03:39:06.901007 kubelet[2782]: I0430 03:39:06.900501 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j5q2\" (UniqueName: \"kubernetes.io/projected/f8c663e1-1826-4aa1-84f0-7f78cd166b4b-kube-api-access-5j5q2\") pod \"calico-node-2xvwn\" (UID: \"f8c663e1-1826-4aa1-84f0-7f78cd166b4b\") " pod="calico-system/calico-node-2xvwn" Apr 30 03:39:06.901007 kubelet[2782]: I0430 03:39:06.900566 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f8c663e1-1826-4aa1-84f0-7f78cd166b4b-policysync\") pod \"calico-node-2xvwn\" (UID: \"f8c663e1-1826-4aa1-84f0-7f78cd166b4b\") " pod="calico-system/calico-node-2xvwn" Apr 30 03:39:06.903554 kubelet[2782]: I0430 03:39:06.900598 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8c663e1-1826-4aa1-84f0-7f78cd166b4b-xtables-lock\") pod \"calico-node-2xvwn\" (UID: \"f8c663e1-1826-4aa1-84f0-7f78cd166b4b\") " pod="calico-system/calico-node-2xvwn" Apr 30 03:39:06.907927 kubelet[2782]: I0430 03:39:06.900657 2782 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f8c663e1-1826-4aa1-84f0-7f78cd166b4b-node-certs\") pod \"calico-node-2xvwn\" (UID: \"f8c663e1-1826-4aa1-84f0-7f78cd166b4b\") " pod="calico-system/calico-node-2xvwn" Apr 30 03:39:06.908020 kubelet[2782]: I0430 03:39:06.907936 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f8c663e1-1826-4aa1-84f0-7f78cd166b4b-var-run-calico\") pod \"calico-node-2xvwn\" (UID: \"f8c663e1-1826-4aa1-84f0-7f78cd166b4b\") " pod="calico-system/calico-node-2xvwn" Apr 30 03:39:06.908020 kubelet[2782]: I0430 03:39:06.907975 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f8c663e1-1826-4aa1-84f0-7f78cd166b4b-cni-net-dir\") pod \"calico-node-2xvwn\" (UID: \"f8c663e1-1826-4aa1-84f0-7f78cd166b4b\") " pod="calico-system/calico-node-2xvwn" Apr 30 03:39:06.910404 kubelet[2782]: I0430 03:39:06.910364 2782 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-xtables-lock\") on node \"ci-4081-3-3-9-5ae3ade3a2\" DevicePath \"\"" Apr 30 03:39:06.910404 kubelet[2782]: I0430 03:39:06.910398 2782 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-var-lib-calico\") on node \"ci-4081-3-3-9-5ae3ade3a2\" DevicePath \"\"" Apr 30 03:39:06.910534 kubelet[2782]: I0430 03:39:06.910417 2782 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-flexvol-driver-host\") on node \"ci-4081-3-3-9-5ae3ade3a2\" DevicePath \"\"" Apr 30 03:39:06.910534 kubelet[2782]: I0430 03:39:06.910432 2782 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-policysync\") on node \"ci-4081-3-3-9-5ae3ade3a2\" DevicePath \"\"" Apr 30 03:39:06.910534 kubelet[2782]: I0430 03:39:06.910446 2782 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-var-run-calico\") on node \"ci-4081-3-3-9-5ae3ade3a2\" DevicePath \"\"" Apr 30 03:39:06.910534 kubelet[2782]: I0430 03:39:06.910460 2782 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-cni-log-dir\") on node \"ci-4081-3-3-9-5ae3ade3a2\" DevicePath \"\"" Apr 30 03:39:06.910534 kubelet[2782]: I0430 03:39:06.910473 2782 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-cni-net-dir\") on node \"ci-4081-3-3-9-5ae3ade3a2\" DevicePath \"\"" Apr 30 03:39:06.910534 kubelet[2782]: I0430 03:39:06.910488 2782 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-zldnb\" (UniqueName: \"kubernetes.io/projected/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-kube-api-access-zldnb\") on node \"ci-4081-3-3-9-5ae3ade3a2\" DevicePath \"\"" Apr 30 03:39:06.910534 kubelet[2782]: I0430 03:39:06.910504 2782 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-tigera-ca-bundle\") on node \"ci-4081-3-3-9-5ae3ade3a2\" DevicePath \"\"" Apr 30 03:39:06.910534 kubelet[2782]: I0430 03:39:06.910518 2782 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-node-certs\") on node \"ci-4081-3-3-9-5ae3ade3a2\" DevicePath \"\"" Apr 30 03:39:06.910916 kubelet[2782]: I0430 03:39:06.910534 2782 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-lib-modules\") on node \"ci-4081-3-3-9-5ae3ade3a2\" DevicePath \"\"" Apr 30 03:39:06.910916 kubelet[2782]: I0430 03:39:06.910547 2782 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02-cni-bin-dir\") on node \"ci-4081-3-3-9-5ae3ade3a2\" DevicePath \"\"" Apr 30 03:39:07.121451 containerd[1507]: time="2025-04-30T03:39:07.121377511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2xvwn,Uid:f8c663e1-1826-4aa1-84f0-7f78cd166b4b,Namespace:calico-system,Attempt:0,}" Apr 30 03:39:07.157222 containerd[1507]: time="2025-04-30T03:39:07.155687080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:39:07.157222 containerd[1507]: time="2025-04-30T03:39:07.155786625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:39:07.157222 containerd[1507]: time="2025-04-30T03:39:07.155808605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:07.157222 containerd[1507]: time="2025-04-30T03:39:07.155941331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:07.181053 systemd[1]: Started cri-containerd-8b4369482ce740221eeefaaffcf3bf7bac54b7e6ce2eacb574c95eb9eb1a1be8.scope - libcontainer container 8b4369482ce740221eeefaaffcf3bf7bac54b7e6ce2eacb574c95eb9eb1a1be8. 
Apr 30 03:39:07.231442 containerd[1507]: time="2025-04-30T03:39:07.231345144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2xvwn,Uid:f8c663e1-1826-4aa1-84f0-7f78cd166b4b,Namespace:calico-system,Attempt:0,} returns sandbox id \"8b4369482ce740221eeefaaffcf3bf7bac54b7e6ce2eacb574c95eb9eb1a1be8\"" Apr 30 03:39:07.236632 containerd[1507]: time="2025-04-30T03:39:07.236556895Z" level=info msg="CreateContainer within sandbox \"8b4369482ce740221eeefaaffcf3bf7bac54b7e6ce2eacb574c95eb9eb1a1be8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 03:39:07.254969 containerd[1507]: time="2025-04-30T03:39:07.254889469Z" level=info msg="CreateContainer within sandbox \"8b4369482ce740221eeefaaffcf3bf7bac54b7e6ce2eacb574c95eb9eb1a1be8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8fe0e7d3e88ef59f3e8001e5dbf62db052f90af079ada3e84d12b628649e3eb9\"" Apr 30 03:39:07.256710 containerd[1507]: time="2025-04-30T03:39:07.255905080Z" level=info msg="StartContainer for \"8fe0e7d3e88ef59f3e8001e5dbf62db052f90af079ada3e84d12b628649e3eb9\"" Apr 30 03:39:07.287822 systemd[1]: Started cri-containerd-8fe0e7d3e88ef59f3e8001e5dbf62db052f90af079ada3e84d12b628649e3eb9.scope - libcontainer container 8fe0e7d3e88ef59f3e8001e5dbf62db052f90af079ada3e84d12b628649e3eb9. Apr 30 03:39:07.328130 containerd[1507]: time="2025-04-30T03:39:07.328089631Z" level=info msg="StartContainer for \"8fe0e7d3e88ef59f3e8001e5dbf62db052f90af079ada3e84d12b628649e3eb9\" returns successfully" Apr 30 03:39:07.334578 kubelet[2782]: I0430 03:39:07.334536 2782 scope.go:117] "RemoveContainer" containerID="2a4dee00f54bd105834ad0f68d33ec46787ecf3663214c3faa6e2d23d28dbe57" Apr 30 03:39:07.345261 containerd[1507]: time="2025-04-30T03:39:07.345215874Z" level=info msg="RemoveContainer for \"2a4dee00f54bd105834ad0f68d33ec46787ecf3663214c3faa6e2d23d28dbe57\"" Apr 30 03:39:07.347078 systemd[1]: Removed slice kubepods-besteffort-podbfa7f9ba_cc22_4311_91e3_5a0a4bb7af02.slice - libcontainer container kubepods-besteffort-podbfa7f9ba_cc22_4311_91e3_5a0a4bb7af02.slice. 
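The container removed above ("2a4dee00f5…") is the one the earlier entries showed cycling through "back-off 20s restarting failed container". A sketch of the restart schedule kubelet applies under CrashLoopBackOff, assuming the commonly documented policy (10s initial delay, doubled per restart, capped at five minutes):

package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial = 10 * time.Second
		ceiling = 5 * time.Minute
	)
	// Print the first few back-off delays; the "back-off 20s" entries in the
	// log above correspond to the second restart in this sequence.
	delay := initial
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %s\n", restart, delay)
		delay *= 2
		if delay > ceiling {
			delay = ceiling
		}
	}
}
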
Apr 30 03:39:07.350413 containerd[1507]: time="2025-04-30T03:39:07.350233184Z" level=info msg="RemoveContainer for \"2a4dee00f54bd105834ad0f68d33ec46787ecf3663214c3faa6e2d23d28dbe57\" returns successfully" Apr 30 03:39:07.350507 kubelet[2782]: I0430 03:39:07.350469 2782 scope.go:117] "RemoveContainer" containerID="02de56d18d5565595efe745938e7d904a7144b5010085a7b6c4b40e97ee9bffd" Apr 30 03:39:07.351987 containerd[1507]: time="2025-04-30T03:39:07.351733010Z" level=info msg="RemoveContainer for \"02de56d18d5565595efe745938e7d904a7144b5010085a7b6c4b40e97ee9bffd\"" Apr 30 03:39:07.356054 containerd[1507]: time="2025-04-30T03:39:07.356019187Z" level=info msg="RemoveContainer for \"02de56d18d5565595efe745938e7d904a7144b5010085a7b6c4b40e97ee9bffd\" returns successfully" Apr 30 03:39:07.356771 kubelet[2782]: I0430 03:39:07.356746 2782 scope.go:117] "RemoveContainer" containerID="aa07b4db01960fbefec8614b5e03dd6704e3acee8ffb350aa4246a5dbe9bc8ce" Apr 30 03:39:07.358272 containerd[1507]: time="2025-04-30T03:39:07.358239289Z" level=info msg="RemoveContainer for \"aa07b4db01960fbefec8614b5e03dd6704e3acee8ffb350aa4246a5dbe9bc8ce\"" Apr 30 03:39:07.362014 containerd[1507]: time="2025-04-30T03:39:07.361981527Z" level=info msg="RemoveContainer for \"aa07b4db01960fbefec8614b5e03dd6704e3acee8ffb350aa4246a5dbe9bc8ce\" returns successfully" Apr 30 03:39:07.389819 systemd[1]: cri-containerd-8fe0e7d3e88ef59f3e8001e5dbf62db052f90af079ada3e84d12b628649e3eb9.scope: Deactivated successfully. Apr 30 03:39:07.416012 containerd[1507]: time="2025-04-30T03:39:07.415889597Z" level=info msg="shim disconnected" id=8fe0e7d3e88ef59f3e8001e5dbf62db052f90af079ada3e84d12b628649e3eb9 namespace=k8s.io Apr 30 03:39:07.416174 containerd[1507]: time="2025-04-30T03:39:07.416161030Z" level=warning msg="cleaning up after shim disconnected" id=8fe0e7d3e88ef59f3e8001e5dbf62db052f90af079ada3e84d12b628649e3eb9 namespace=k8s.io Apr 30 03:39:07.416215 containerd[1507]: time="2025-04-30T03:39:07.416206314Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:39:07.939247 kubelet[2782]: I0430 03:39:07.939193 2782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02" path="/var/lib/kubelet/pods/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02/volumes" Apr 30 03:39:08.345750 containerd[1507]: time="2025-04-30T03:39:08.345350002Z" level=info msg="CreateContainer within sandbox \"8b4369482ce740221eeefaaffcf3bf7bac54b7e6ce2eacb574c95eb9eb1a1be8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 03:39:08.368852 containerd[1507]: time="2025-04-30T03:39:08.367736374Z" level=info msg="CreateContainer within sandbox \"8b4369482ce740221eeefaaffcf3bf7bac54b7e6ce2eacb574c95eb9eb1a1be8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"89d514227bacadfe6f43c76a6c89ec67d0ac2609f901d50afdeb5c52e90d1b51\"" Apr 30 03:39:08.370944 containerd[1507]: time="2025-04-30T03:39:08.370703369Z" level=info msg="StartContainer for \"89d514227bacadfe6f43c76a6c89ec67d0ac2609f901d50afdeb5c52e90d1b51\"" Apr 30 03:39:08.428903 systemd[1]: Started cri-containerd-89d514227bacadfe6f43c76a6c89ec67d0ac2609f901d50afdeb5c52e90d1b51.scope - libcontainer container 89d514227bacadfe6f43c76a6c89ec67d0ac2609f901d50afdeb5c52e90d1b51. 
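kubelet reports above that it cleaned up the orphaned volumes directory for the removed pod. A small sketch to verify that from the node, using the path logged verbatim by kubelet_volumes.go:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Path taken from the "Cleaned up orphaned pod volumes dir" entry above.
	dir := "/var/lib/kubelet/pods/bfa7f9ba-cc22-4311-91e3-5a0a4bb7af02/volumes"
	switch _, err := os.Stat(dir); {
	case os.IsNotExist(err):
		fmt.Println("cleaned up: volumes dir no longer exists")
	case err != nil:
		fmt.Fprintln(os.Stderr, "stat failed:", err)
	default:
		fmt.Println("volumes dir still present; cleanup has not completed")
	}
}
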
Apr 30 03:39:08.458049 containerd[1507]: time="2025-04-30T03:39:08.457878511Z" level=info msg="StartContainer for \"89d514227bacadfe6f43c76a6c89ec67d0ac2609f901d50afdeb5c52e90d1b51\" returns successfully"
Apr 30 03:39:08.660447 systemd[1]: run-containerd-runc-k8s.io-89d514227bacadfe6f43c76a6c89ec67d0ac2609f901d50afdeb5c52e90d1b51-runc.b2GMxx.mount: Deactivated successfully.
Apr 30 03:39:08.935632 containerd[1507]: time="2025-04-30T03:39:08.933999478Z" level=info msg="StopPodSandbox for \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\""
Apr 30 03:39:08.936499 containerd[1507]: time="2025-04-30T03:39:08.935941675Z" level=info msg="StopPodSandbox for \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\""
Apr 30 03:39:08.974720 containerd[1507]: time="2025-04-30T03:39:08.974661566Z" level=error msg="StopPodSandbox for \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\" failed" error="failed to destroy network for sandbox \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:39:08.975184 kubelet[2782]: E0430 03:39:08.975002 2782 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098"
Apr 30 03:39:08.975184 kubelet[2782]: E0430 03:39:08.975047 2782 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098"}
Apr 30 03:39:08.975184 kubelet[2782]: E0430 03:39:08.975086 2782 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c996608-b929-44c2-8ad9-06eca9de0734\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 30 03:39:08.975184 kubelet[2782]: E0430 03:39:08.975109 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c996608-b929-44c2-8ad9-06eca9de0734\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6z8bp" podUID="9c996608-b929-44c2-8ad9-06eca9de0734"
Apr 30 03:39:08.983164 containerd[1507]: time="2025-04-30T03:39:08.983019936Z" level=error msg="StopPodSandbox for \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\" failed" error="failed to destroy network for sandbox \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:39:08.983329 kubelet[2782]: E0430 03:39:08.983277 2782 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49"
Apr 30 03:39:08.983393 kubelet[2782]: E0430 03:39:08.983335 2782 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49"}
Apr 30 03:39:08.983393 kubelet[2782]: E0430 03:39:08.983370 2782 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1a0a5d29-6bdf-4990-88c7-d46de9879e7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 30 03:39:08.983479 kubelet[2782]: E0430 03:39:08.983393 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1a0a5d29-6bdf-4990-88c7-d46de9879e7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d7977669f-4lt2m" podUID="1a0a5d29-6bdf-4990-88c7-d46de9879e7c"
Apr 30 03:39:09.244116 systemd[1]: cri-containerd-89d514227bacadfe6f43c76a6c89ec67d0ac2609f901d50afdeb5c52e90d1b51.scope: Deactivated successfully.
Apr 30 03:39:09.270549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89d514227bacadfe6f43c76a6c89ec67d0ac2609f901d50afdeb5c52e90d1b51-rootfs.mount: Deactivated successfully.
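The two StopPodSandbox failures above share one root cause: on its delete path the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is up, and that container is only being created further down in this log. Until then, every teardown of a Calico-networked pod fails with this error and the kubelet keeps retrying. A minimal Go sketch of the same existence check (the path is taken from the error message above; this mirrors what the plugin reports, not Calico's actual source):

    package main

    import (
        "fmt"
        "os"
    )

    // nodenameFile is the path named in the error message above; calico/node
    // writes it at startup into the host-mounted /var/lib/calico directory.
    const nodenameFile = "/var/lib/calico/nodename"

    func main() {
        name, err := os.ReadFile(nodenameFile)
        if os.IsNotExist(err) {
            // The exact condition the CNI plugin reports:
            // "stat /var/lib/calico/nodename: no such file or directory"
            fmt.Println("calico/node has not written its nodename yet; is the container running and /var/lib/calico mounted?")
            return
        }
        if err != nil {
            fmt.Println("unexpected error:", err)
            return
        }
        fmt.Printf("node name: %s\n", name)
    }

Once the calico-node container starts in the entries that follow, the retried teardowns succeed.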
Apr 30 03:39:09.277192 containerd[1507]: time="2025-04-30T03:39:09.277131347Z" level=info msg="shim disconnected" id=89d514227bacadfe6f43c76a6c89ec67d0ac2609f901d50afdeb5c52e90d1b51 namespace=k8s.io
Apr 30 03:39:09.277192 containerd[1507]: time="2025-04-30T03:39:09.277187111Z" level=warning msg="cleaning up after shim disconnected" id=89d514227bacadfe6f43c76a6c89ec67d0ac2609f901d50afdeb5c52e90d1b51 namespace=k8s.io
Apr 30 03:39:09.278456 containerd[1507]: time="2025-04-30T03:39:09.277195276Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:39:09.378377 containerd[1507]: time="2025-04-30T03:39:09.378232159Z" level=info msg="CreateContainer within sandbox \"8b4369482ce740221eeefaaffcf3bf7bac54b7e6ce2eacb574c95eb9eb1a1be8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 30 03:39:09.405328 containerd[1507]: time="2025-04-30T03:39:09.405182793Z" level=info msg="CreateContainer within sandbox \"8b4369482ce740221eeefaaffcf3bf7bac54b7e6ce2eacb574c95eb9eb1a1be8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5ab1cdda810fead7ed2fd46b02c26f542fe89868f9bf0b188ab09fa0114fc6d0\""
Apr 30 03:39:09.408874 containerd[1507]: time="2025-04-30T03:39:09.408835730Z" level=info msg="StartContainer for \"5ab1cdda810fead7ed2fd46b02c26f542fe89868f9bf0b188ab09fa0114fc6d0\""
Apr 30 03:39:09.453102 systemd[1]: Started cri-containerd-5ab1cdda810fead7ed2fd46b02c26f542fe89868f9bf0b188ab09fa0114fc6d0.scope - libcontainer container 5ab1cdda810fead7ed2fd46b02c26f542fe89868f9bf0b188ab09fa0114fc6d0.
Apr 30 03:39:09.509483 containerd[1507]: time="2025-04-30T03:39:09.508742870Z" level=info msg="StartContainer for \"5ab1cdda810fead7ed2fd46b02c26f542fe89868f9bf0b188ab09fa0114fc6d0\" returns successfully"
Apr 30 03:39:09.658222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2110823669.mount: Deactivated successfully.
Apr 30 03:39:11.178640 kernel: bpftool[4865]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Apr 30 03:39:11.452201 systemd-networkd[1402]: vxlan.calico: Link UP
Apr 30 03:39:11.452209 systemd-networkd[1402]: vxlan.calico: Gained carrier
Apr 30 03:39:11.934873 containerd[1507]: time="2025-04-30T03:39:11.933777296Z" level=info msg="StopPodSandbox for \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\""
Apr 30 03:39:12.060159 kubelet[2782]: I0430 03:39:12.053057 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2xvwn" podStartSLOduration=6.052963646 podStartE2EDuration="6.052963646s" podCreationTimestamp="2025-04-30 03:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:39:10.385924275 +0000 UTC m=+90.556455916" watchObservedRunningTime="2025-04-30 03:39:12.052963646 +0000 UTC m=+92.223495328"
Apr 30 03:39:12.101708 containerd[1507]: 2025-04-30 03:39:12.042 [INFO][4971] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc"
Apr 30 03:39:12.101708 containerd[1507]: 2025-04-30 03:39:12.042 [INFO][4971] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" iface="eth0" netns="/var/run/netns/cni-79878801-023d-f67b-757a-618711b84356"
Apr 30 03:39:12.101708 containerd[1507]: 2025-04-30 03:39:12.044 [INFO][4971] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" iface="eth0" netns="/var/run/netns/cni-79878801-023d-f67b-757a-618711b84356"
Apr 30 03:39:12.101708 containerd[1507]: 2025-04-30 03:39:12.044 [INFO][4971] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" iface="eth0" netns="/var/run/netns/cni-79878801-023d-f67b-757a-618711b84356"
Apr 30 03:39:12.101708 containerd[1507]: 2025-04-30 03:39:12.044 [INFO][4971] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc"
Apr 30 03:39:12.101708 containerd[1507]: 2025-04-30 03:39:12.044 [INFO][4971] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc"
Apr 30 03:39:12.101708 containerd[1507]: 2025-04-30 03:39:12.082 [INFO][4978] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" HandleID="k8s-pod-network.89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--qhm6w-eth0"
Apr 30 03:39:12.101708 containerd[1507]: 2025-04-30 03:39:12.082 [INFO][4978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Apr 30 03:39:12.101708 containerd[1507]: 2025-04-30 03:39:12.082 [INFO][4978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Apr 30 03:39:12.101708 containerd[1507]: 2025-04-30 03:39:12.091 [WARNING][4978] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" HandleID="k8s-pod-network.89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--qhm6w-eth0"
Apr 30 03:39:12.101708 containerd[1507]: 2025-04-30 03:39:12.091 [INFO][4978] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" HandleID="k8s-pod-network.89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--qhm6w-eth0"
Apr 30 03:39:12.101708 containerd[1507]: 2025-04-30 03:39:12.094 [INFO][4978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Apr 30 03:39:12.101708 containerd[1507]: 2025-04-30 03:39:12.098 [INFO][4971] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc"
Apr 30 03:39:12.106677 containerd[1507]: time="2025-04-30T03:39:12.103674830Z" level=info msg="TearDown network for sandbox \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\" successfully"
Apr 30 03:39:12.106677 containerd[1507]: time="2025-04-30T03:39:12.103707270Z" level=info msg="StopPodSandbox for \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\" returns successfully"
Apr 30 03:39:12.106677 containerd[1507]: time="2025-04-30T03:39:12.104440048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qhm6w,Uid:c9c80f55-2e27-4f48-8d62-0a63647aa84e,Namespace:kube-system,Attempt:1,}"
Apr 30 03:39:12.107250 systemd[1]: run-netns-cni\x2d79878801\x2d023d\x2df67b\x2d757a\x2d618711b84356.mount: Deactivated successfully.
Apr 30 03:39:12.242597 systemd-networkd[1402]: cali340863f89e8: Link UP
Apr 30 03:39:12.243807 systemd-networkd[1402]: cali340863f89e8: Gained carrier
Apr 30 03:39:12.255572 containerd[1507]: 2025-04-30 03:39:12.177 [INFO][4986] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--qhm6w-eth0 coredns-7db6d8ff4d- kube-system c9c80f55-2e27-4f48-8d62-0a63647aa84e 892 0 2025-04-30 03:37:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-9-5ae3ade3a2 coredns-7db6d8ff4d-qhm6w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali340863f89e8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="64ff1d40b4a6e853c7ac2f6f46f69a9c041fe8237d1ecbbadbe463270c002868" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qhm6w" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--qhm6w-"
Apr 30 03:39:12.255572 containerd[1507]: 2025-04-30 03:39:12.177 [INFO][4986] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="64ff1d40b4a6e853c7ac2f6f46f69a9c041fe8237d1ecbbadbe463270c002868" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qhm6w" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--qhm6w-eth0"
Apr 30 03:39:12.255572 containerd[1507]: 2025-04-30 03:39:12.206 [INFO][4997] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="64ff1d40b4a6e853c7ac2f6f46f69a9c041fe8237d1ecbbadbe463270c002868" HandleID="k8s-pod-network.64ff1d40b4a6e853c7ac2f6f46f69a9c041fe8237d1ecbbadbe463270c002868" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--qhm6w-eth0"
Apr 30 03:39:12.255572 containerd[1507]: 2025-04-30 03:39:12.214 [INFO][4997] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="64ff1d40b4a6e853c7ac2f6f46f69a9c041fe8237d1ecbbadbe463270c002868" HandleID="k8s-pod-network.64ff1d40b4a6e853c7ac2f6f46f69a9c041fe8237d1ecbbadbe463270c002868" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--qhm6w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031d980), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-9-5ae3ade3a2", "pod":"coredns-7db6d8ff4d-qhm6w", "timestamp":"2025-04-30 03:39:12.206394956 +0000 UTC"}, Hostname:"ci-4081-3-3-9-5ae3ade3a2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Apr 30 03:39:12.255572 containerd[1507]: 2025-04-30 03:39:12.214 [INFO][4997] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Apr 30 03:39:12.255572 containerd[1507]: 2025-04-30 03:39:12.214 [INFO][4997] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Apr 30 03:39:12.255572 containerd[1507]: 2025-04-30 03:39:12.214 [INFO][4997] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-9-5ae3ade3a2'
Apr 30 03:39:12.255572 containerd[1507]: 2025-04-30 03:39:12.216 [INFO][4997] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.64ff1d40b4a6e853c7ac2f6f46f69a9c041fe8237d1ecbbadbe463270c002868" host="ci-4081-3-3-9-5ae3ade3a2"
Apr 30 03:39:12.255572 containerd[1507]: 2025-04-30 03:39:12.220 [INFO][4997] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-9-5ae3ade3a2"
Apr 30 03:39:12.255572 containerd[1507]: 2025-04-30 03:39:12.224 [INFO][4997] ipam/ipam.go 489: Trying affinity for 192.168.61.0/26 host="ci-4081-3-3-9-5ae3ade3a2"
Apr 30 03:39:12.255572 containerd[1507]: 2025-04-30 03:39:12.225 [INFO][4997] ipam/ipam.go 155: Attempting to load block cidr=192.168.61.0/26 host="ci-4081-3-3-9-5ae3ade3a2"
Apr 30 03:39:12.255572 containerd[1507]: 2025-04-30 03:39:12.227 [INFO][4997] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.0/26 host="ci-4081-3-3-9-5ae3ade3a2"
Apr 30 03:39:12.255572 containerd[1507]: 2025-04-30 03:39:12.227 [INFO][4997] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.0/26 handle="k8s-pod-network.64ff1d40b4a6e853c7ac2f6f46f69a9c041fe8237d1ecbbadbe463270c002868" host="ci-4081-3-3-9-5ae3ade3a2"
Apr 30 03:39:12.255572 containerd[1507]: 2025-04-30 03:39:12.228 [INFO][4997] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.64ff1d40b4a6e853c7ac2f6f46f69a9c041fe8237d1ecbbadbe463270c002868
Apr 30 03:39:12.255572 containerd[1507]: 2025-04-30 03:39:12.232 [INFO][4997] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.61.0/26 handle="k8s-pod-network.64ff1d40b4a6e853c7ac2f6f46f69a9c041fe8237d1ecbbadbe463270c002868" host="ci-4081-3-3-9-5ae3ade3a2"
Apr 30 03:39:12.255572 containerd[1507]: 2025-04-30 03:39:12.237 [INFO][4997] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.61.1/26] block=192.168.61.0/26 handle="k8s-pod-network.64ff1d40b4a6e853c7ac2f6f46f69a9c041fe8237d1ecbbadbe463270c002868" host="ci-4081-3-3-9-5ae3ade3a2"
Apr 30 03:39:12.255572 containerd[1507]: 2025-04-30 03:39:12.237 [INFO][4997] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.1/26] handle="k8s-pod-network.64ff1d40b4a6e853c7ac2f6f46f69a9c041fe8237d1ecbbadbe463270c002868" host="ci-4081-3-3-9-5ae3ade3a2"
Apr 30 03:39:12.255572 containerd[1507]: 2025-04-30 03:39:12.237 [INFO][4997] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
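The IPAM sequence above is Calico's host-affine block allocation: the node ci-4081-3-3-9-5ae3ade3a2 confirms its affinity for the block 192.168.61.0/26, then claims the first free address, 192.168.61.1, for the coredns pod; the csi-node-driver and calico-kube-controllers sandboxes later in this log receive .2 and .3 from the same block. A short Go sketch of the block arithmetic, using only values that appear in these entries:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Block and addresses as they appear in the IPAM entries above.
        block := netip.MustParsePrefix("192.168.61.0/26")
        size := 1 << (32 - block.Bits()) // 2^(32-26) = 64 addresses per block

        fmt.Printf("block %s holds %d addresses\n", block, size)

        // The log shows .1 claimed here; .2 and .3 go to the next two pods.
        addr := block.Addr().Next() // skip the network address .0
        for i := 0; i < 3; i++ {
            fmt.Printf("candidate: %s\n", addr)
            addr = addr.Next()
        }
    }

A /26 gives this node 64 addresses before it would need to claim another block.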
Apr 30 03:39:12.255572 containerd[1507]: 2025-04-30 03:39:12.237 [INFO][4997] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.1/26] IPv6=[] ContainerID="64ff1d40b4a6e853c7ac2f6f46f69a9c041fe8237d1ecbbadbe463270c002868" HandleID="k8s-pod-network.64ff1d40b4a6e853c7ac2f6f46f69a9c041fe8237d1ecbbadbe463270c002868" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--qhm6w-eth0"
Apr 30 03:39:12.258785 containerd[1507]: 2025-04-30 03:39:12.239 [INFO][4986] cni-plugin/k8s.go 386: Populated endpoint ContainerID="64ff1d40b4a6e853c7ac2f6f46f69a9c041fe8237d1ecbbadbe463270c002868" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qhm6w" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--qhm6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--qhm6w-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c9c80f55-2e27-4f48-8d62-0a63647aa84e", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 37, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-5ae3ade3a2", ContainerID:"", Pod:"coredns-7db6d8ff4d-qhm6w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali340863f89e8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 03:39:12.258785 containerd[1507]: 2025-04-30 03:39:12.239 [INFO][4986] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.61.1/32] ContainerID="64ff1d40b4a6e853c7ac2f6f46f69a9c041fe8237d1ecbbadbe463270c002868" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qhm6w" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--qhm6w-eth0"
Apr 30 03:39:12.258785 containerd[1507]: 2025-04-30 03:39:12.239 [INFO][4986] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali340863f89e8 ContainerID="64ff1d40b4a6e853c7ac2f6f46f69a9c041fe8237d1ecbbadbe463270c002868" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qhm6w" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--qhm6w-eth0"
Apr 30 03:39:12.258785 containerd[1507]: 2025-04-30 03:39:12.241 [INFO][4986] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="64ff1d40b4a6e853c7ac2f6f46f69a9c041fe8237d1ecbbadbe463270c002868" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qhm6w" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--qhm6w-eth0"
Apr 30 03:39:12.258785 containerd[1507]: 2025-04-30 03:39:12.241 [INFO][4986] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="64ff1d40b4a6e853c7ac2f6f46f69a9c041fe8237d1ecbbadbe463270c002868" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qhm6w" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--qhm6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--qhm6w-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c9c80f55-2e27-4f48-8d62-0a63647aa84e", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 37, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-5ae3ade3a2", ContainerID:"64ff1d40b4a6e853c7ac2f6f46f69a9c041fe8237d1ecbbadbe463270c002868", Pod:"coredns-7db6d8ff4d-qhm6w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali340863f89e8", MAC:"1a:e1:6d:61:3a:9c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 03:39:12.258785 containerd[1507]: 2025-04-30 03:39:12.250 [INFO][4986] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="64ff1d40b4a6e853c7ac2f6f46f69a9c041fe8237d1ecbbadbe463270c002868" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qhm6w" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--qhm6w-eth0"
Apr 30 03:39:12.278434 containerd[1507]: time="2025-04-30T03:39:12.278318427Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:39:12.279270 containerd[1507]: time="2025-04-30T03:39:12.278416279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:39:12.279270 containerd[1507]: time="2025-04-30T03:39:12.278573580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:39:12.279270 containerd[1507]: time="2025-04-30T03:39:12.279086010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:39:12.296740 systemd[1]: Started cri-containerd-64ff1d40b4a6e853c7ac2f6f46f69a9c041fe8237d1ecbbadbe463270c002868.scope - libcontainer container 64ff1d40b4a6e853c7ac2f6f46f69a9c041fe8237d1ecbbadbe463270c002868.
Apr 30 03:39:12.332857 containerd[1507]: time="2025-04-30T03:39:12.332701638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qhm6w,Uid:c9c80f55-2e27-4f48-8d62-0a63647aa84e,Namespace:kube-system,Attempt:1,} returns sandbox id \"64ff1d40b4a6e853c7ac2f6f46f69a9c041fe8237d1ecbbadbe463270c002868\""
Apr 30 03:39:12.335893 containerd[1507]: time="2025-04-30T03:39:12.335740049Z" level=info msg="CreateContainer within sandbox \"64ff1d40b4a6e853c7ac2f6f46f69a9c041fe8237d1ecbbadbe463270c002868\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 30 03:39:12.353519 containerd[1507]: time="2025-04-30T03:39:12.353455876Z" level=info msg="CreateContainer within sandbox \"64ff1d40b4a6e853c7ac2f6f46f69a9c041fe8237d1ecbbadbe463270c002868\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"287725f7f8e7553fe4f3942370ae190153ff953ac82001367e93bdf00ac343b1\""
Apr 30 03:39:12.354437 containerd[1507]: time="2025-04-30T03:39:12.354321251Z" level=info msg="StartContainer for \"287725f7f8e7553fe4f3942370ae190153ff953ac82001367e93bdf00ac343b1\""
Apr 30 03:39:12.375721 systemd[1]: Started cri-containerd-287725f7f8e7553fe4f3942370ae190153ff953ac82001367e93bdf00ac343b1.scope - libcontainer container 287725f7f8e7553fe4f3942370ae190153ff953ac82001367e93bdf00ac343b1.
Apr 30 03:39:12.406183 containerd[1507]: time="2025-04-30T03:39:12.406146159Z" level=info msg="StartContainer for \"287725f7f8e7553fe4f3942370ae190153ff953ac82001367e93bdf00ac343b1\" returns successfully"
Apr 30 03:39:12.933497 containerd[1507]: time="2025-04-30T03:39:12.933018341Z" level=info msg="StopPodSandbox for \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\""
Apr 30 03:39:13.061285 systemd-networkd[1402]: vxlan.calico: Gained IPv6LL
Apr 30 03:39:13.077836 containerd[1507]: 2025-04-30 03:39:13.011 [INFO][5111] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0"
Apr 30 03:39:13.077836 containerd[1507]: 2025-04-30 03:39:13.013 [INFO][5111] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" iface="eth0" netns="/var/run/netns/cni-6b68d243-c777-1994-607b-7ef2d661dc00"
Apr 30 03:39:13.077836 containerd[1507]: 2025-04-30 03:39:13.014 [INFO][5111] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" iface="eth0" netns="/var/run/netns/cni-6b68d243-c777-1994-607b-7ef2d661dc00"
Apr 30 03:39:13.077836 containerd[1507]: 2025-04-30 03:39:13.014 [INFO][5111] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" iface="eth0" netns="/var/run/netns/cni-6b68d243-c777-1994-607b-7ef2d661dc00"
Apr 30 03:39:13.077836 containerd[1507]: 2025-04-30 03:39:13.015 [INFO][5111] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0"
Apr 30 03:39:13.077836 containerd[1507]: 2025-04-30 03:39:13.015 [INFO][5111] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0"
Apr 30 03:39:13.077836 containerd[1507]: 2025-04-30 03:39:13.055 [INFO][5119] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" HandleID="k8s-pod-network.eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-csi--node--driver--rjjf8-eth0"
Apr 30 03:39:13.077836 containerd[1507]: 2025-04-30 03:39:13.056 [INFO][5119] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Apr 30 03:39:13.077836 containerd[1507]: 2025-04-30 03:39:13.056 [INFO][5119] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Apr 30 03:39:13.077836 containerd[1507]: 2025-04-30 03:39:13.071 [WARNING][5119] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" HandleID="k8s-pod-network.eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-csi--node--driver--rjjf8-eth0"
Apr 30 03:39:13.077836 containerd[1507]: 2025-04-30 03:39:13.071 [INFO][5119] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" HandleID="k8s-pod-network.eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-csi--node--driver--rjjf8-eth0"
Apr 30 03:39:13.077836 containerd[1507]: 2025-04-30 03:39:13.074 [INFO][5119] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Apr 30 03:39:13.077836 containerd[1507]: 2025-04-30 03:39:13.076 [INFO][5111] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0"
Apr 30 03:39:13.081319 containerd[1507]: time="2025-04-30T03:39:13.077980470Z" level=info msg="TearDown network for sandbox \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\" successfully"
Apr 30 03:39:13.081319 containerd[1507]: time="2025-04-30T03:39:13.078024231Z" level=info msg="StopPodSandbox for \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\" returns successfully"
Apr 30 03:39:13.081319 containerd[1507]: time="2025-04-30T03:39:13.078745618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rjjf8,Uid:4d11b5f7-a801-4c03-8af8-692f5d9587bd,Namespace:calico-system,Attempt:1,}"
Apr 30 03:39:13.082523 systemd[1]: run-netns-cni\x2d6b68d243\x2dc777\x2d1994\x2d607b\x2d7ef2d661dc00.mount: Deactivated successfully.
Apr 30 03:39:13.229019 systemd-networkd[1402]: cali65010f53cf2: Link UP
Apr 30 03:39:13.230056 systemd-networkd[1402]: cali65010f53cf2: Gained carrier
Apr 30 03:39:13.249544 containerd[1507]: 2025-04-30 03:39:13.142 [INFO][5125] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--9--5ae3ade3a2-k8s-csi--node--driver--rjjf8-eth0 csi-node-driver- calico-system 4d11b5f7-a801-4c03-8af8-692f5d9587bd 902 0 2025-04-30 03:38:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-3-9-5ae3ade3a2 csi-node-driver-rjjf8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali65010f53cf2 [] []}} ContainerID="8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d" Namespace="calico-system" Pod="csi-node-driver-rjjf8" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-csi--node--driver--rjjf8-"
Apr 30 03:39:13.249544 containerd[1507]: 2025-04-30 03:39:13.143 [INFO][5125] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d" Namespace="calico-system" Pod="csi-node-driver-rjjf8" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-csi--node--driver--rjjf8-eth0"
Apr 30 03:39:13.249544 containerd[1507]: 2025-04-30 03:39:13.182 [INFO][5137] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d" HandleID="k8s-pod-network.8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-csi--node--driver--rjjf8-eth0"
Apr 30 03:39:13.249544 containerd[1507]: 2025-04-30 03:39:13.191 [INFO][5137] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d" HandleID="k8s-pod-network.8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-csi--node--driver--rjjf8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002655f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-9-5ae3ade3a2", "pod":"csi-node-driver-rjjf8", "timestamp":"2025-04-30 03:39:13.182190654 +0000 UTC"}, Hostname:"ci-4081-3-3-9-5ae3ade3a2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Apr 30 03:39:13.249544 containerd[1507]: 2025-04-30 03:39:13.191 [INFO][5137] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Apr 30 03:39:13.249544 containerd[1507]: 2025-04-30 03:39:13.191 [INFO][5137] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Apr 30 03:39:13.249544 containerd[1507]: 2025-04-30 03:39:13.191 [INFO][5137] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-9-5ae3ade3a2'
Apr 30 03:39:13.249544 containerd[1507]: 2025-04-30 03:39:13.193 [INFO][5137] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d" host="ci-4081-3-3-9-5ae3ade3a2"
Apr 30 03:39:13.249544 containerd[1507]: 2025-04-30 03:39:13.198 [INFO][5137] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-9-5ae3ade3a2"
Apr 30 03:39:13.249544 containerd[1507]: 2025-04-30 03:39:13.203 [INFO][5137] ipam/ipam.go 489: Trying affinity for 192.168.61.0/26 host="ci-4081-3-3-9-5ae3ade3a2"
Apr 30 03:39:13.249544 containerd[1507]: 2025-04-30 03:39:13.205 [INFO][5137] ipam/ipam.go 155: Attempting to load block cidr=192.168.61.0/26 host="ci-4081-3-3-9-5ae3ade3a2"
Apr 30 03:39:13.249544 containerd[1507]: 2025-04-30 03:39:13.207 [INFO][5137] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.0/26 host="ci-4081-3-3-9-5ae3ade3a2"
Apr 30 03:39:13.249544 containerd[1507]: 2025-04-30 03:39:13.207 [INFO][5137] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.0/26 handle="k8s-pod-network.8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d" host="ci-4081-3-3-9-5ae3ade3a2"
Apr 30 03:39:13.249544 containerd[1507]: 2025-04-30 03:39:13.209 [INFO][5137] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d
Apr 30 03:39:13.249544 containerd[1507]: 2025-04-30 03:39:13.213 [INFO][5137] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.61.0/26 handle="k8s-pod-network.8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d" host="ci-4081-3-3-9-5ae3ade3a2"
Apr 30 03:39:13.249544 containerd[1507]: 2025-04-30 03:39:13.220 [INFO][5137] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.61.2/26] block=192.168.61.0/26 handle="k8s-pod-network.8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d" host="ci-4081-3-3-9-5ae3ade3a2"
Apr 30 03:39:13.249544 containerd[1507]: 2025-04-30 03:39:13.220 [INFO][5137] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.2/26] handle="k8s-pod-network.8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d" host="ci-4081-3-3-9-5ae3ade3a2"
Apr 30 03:39:13.249544 containerd[1507]: 2025-04-30 03:39:13.220 [INFO][5137] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Apr 30 03:39:13.249544 containerd[1507]: 2025-04-30 03:39:13.220 [INFO][5137] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.2/26] IPv6=[] ContainerID="8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d" HandleID="k8s-pod-network.8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-csi--node--driver--rjjf8-eth0"
Apr 30 03:39:13.251352 containerd[1507]: 2025-04-30 03:39:13.223 [INFO][5125] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d" Namespace="calico-system" Pod="csi-node-driver-rjjf8" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-csi--node--driver--rjjf8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--5ae3ade3a2-k8s-csi--node--driver--rjjf8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4d11b5f7-a801-4c03-8af8-692f5d9587bd", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-5ae3ade3a2", ContainerID:"", Pod:"csi-node-driver-rjjf8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali65010f53cf2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 03:39:13.251352 containerd[1507]: 2025-04-30 03:39:13.224 [INFO][5125] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.61.2/32] ContainerID="8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d" Namespace="calico-system" Pod="csi-node-driver-rjjf8" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-csi--node--driver--rjjf8-eth0"
Apr 30 03:39:13.251352 containerd[1507]: 2025-04-30 03:39:13.224 [INFO][5125] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65010f53cf2 ContainerID="8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d" Namespace="calico-system" Pod="csi-node-driver-rjjf8" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-csi--node--driver--rjjf8-eth0"
Apr 30 03:39:13.251352 containerd[1507]: 2025-04-30 03:39:13.230 [INFO][5125] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d" Namespace="calico-system" Pod="csi-node-driver-rjjf8" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-csi--node--driver--rjjf8-eth0"
Apr 30 03:39:13.251352 containerd[1507]: 2025-04-30 03:39:13.231 [INFO][5125] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d" Namespace="calico-system" Pod="csi-node-driver-rjjf8" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-csi--node--driver--rjjf8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--5ae3ade3a2-k8s-csi--node--driver--rjjf8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4d11b5f7-a801-4c03-8af8-692f5d9587bd", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-5ae3ade3a2", ContainerID:"8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d", Pod:"csi-node-driver-rjjf8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali65010f53cf2", MAC:"c2:e6:1b:9b:98:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 03:39:13.251352 containerd[1507]: 2025-04-30 03:39:13.245 [INFO][5125] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d" Namespace="calico-system" Pod="csi-node-driver-rjjf8" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-csi--node--driver--rjjf8-eth0"
Apr 30 03:39:13.267471 containerd[1507]: time="2025-04-30T03:39:13.267350120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:39:13.267937 containerd[1507]: time="2025-04-30T03:39:13.267756743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:39:13.267937 containerd[1507]: time="2025-04-30T03:39:13.267772031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:39:13.267937 containerd[1507]: time="2025-04-30T03:39:13.267830821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:39:13.286750 systemd[1]: Started cri-containerd-8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d.scope - libcontainer container 8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d.
Apr 30 03:39:13.303089 containerd[1507]: time="2025-04-30T03:39:13.302746260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rjjf8,Uid:4d11b5f7-a801-4c03-8af8-692f5d9587bd,Namespace:calico-system,Attempt:1,} returns sandbox id \"8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d\""
Apr 30 03:39:13.304825 containerd[1507]: time="2025-04-30T03:39:13.304805918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\""
Apr 30 03:39:13.380179 systemd-networkd[1402]: cali340863f89e8: Gained IPv6LL
Apr 30 03:39:13.398114 kubelet[2782]: I0430 03:39:13.397992 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qhm6w" podStartSLOduration=78.397859168 podStartE2EDuration="1m18.397859168s" podCreationTimestamp="2025-04-30 03:37:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:39:13.394773508 +0000 UTC m=+93.565305160" watchObservedRunningTime="2025-04-30 03:39:13.397859168 +0000 UTC m=+93.568390830"
Apr 30 03:39:13.934138 containerd[1507]: time="2025-04-30T03:39:13.933778260Z" level=info msg="StopPodSandbox for \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\""
Apr 30 03:39:14.055519 containerd[1507]: 2025-04-30 03:39:14.000 [INFO][5216] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3"
Apr 30 03:39:14.055519 containerd[1507]: 2025-04-30 03:39:14.001 [INFO][5216] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" iface="eth0" netns="/var/run/netns/cni-45172753-3048-757b-2a7f-28b15cc5babb"
Apr 30 03:39:14.055519 containerd[1507]: 2025-04-30 03:39:14.001 [INFO][5216] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" iface="eth0" netns="/var/run/netns/cni-45172753-3048-757b-2a7f-28b15cc5babb"
Apr 30 03:39:14.055519 containerd[1507]: 2025-04-30 03:39:14.002 [INFO][5216] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" iface="eth0" netns="/var/run/netns/cni-45172753-3048-757b-2a7f-28b15cc5babb"
Apr 30 03:39:14.055519 containerd[1507]: 2025-04-30 03:39:14.002 [INFO][5216] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3"
Apr 30 03:39:14.055519 containerd[1507]: 2025-04-30 03:39:14.002 [INFO][5216] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3"
Apr 30 03:39:14.055519 containerd[1507]: 2025-04-30 03:39:14.037 [INFO][5224] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" HandleID="k8s-pod-network.ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--kube--controllers--6c8cd954d7--t67wf-eth0"
Apr 30 03:39:14.055519 containerd[1507]: 2025-04-30 03:39:14.037 [INFO][5224] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Apr 30 03:39:14.055519 containerd[1507]: 2025-04-30 03:39:14.037 [INFO][5224] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Apr 30 03:39:14.055519 containerd[1507]: 2025-04-30 03:39:14.046 [WARNING][5224] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" HandleID="k8s-pod-network.ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--kube--controllers--6c8cd954d7--t67wf-eth0"
Apr 30 03:39:14.055519 containerd[1507]: 2025-04-30 03:39:14.046 [INFO][5224] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" HandleID="k8s-pod-network.ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--kube--controllers--6c8cd954d7--t67wf-eth0"
Apr 30 03:39:14.055519 containerd[1507]: 2025-04-30 03:39:14.050 [INFO][5224] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Apr 30 03:39:14.055519 containerd[1507]: 2025-04-30 03:39:14.052 [INFO][5216] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3"
Apr 30 03:39:14.057874 containerd[1507]: time="2025-04-30T03:39:14.057778993Z" level=info msg="TearDown network for sandbox \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\" successfully"
Apr 30 03:39:14.057874 containerd[1507]: time="2025-04-30T03:39:14.057852328Z" level=info msg="StopPodSandbox for \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\" returns successfully"
Apr 30 03:39:14.060378 containerd[1507]: time="2025-04-30T03:39:14.060335633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c8cd954d7-t67wf,Uid:bba0c868-51a2-4e1d-8831-4113d7d9bdcd,Namespace:calico-system,Attempt:1,}"
Apr 30 03:39:14.062943 systemd[1]: run-netns-cni\x2d45172753\x2d3048\x2d757b\x2d2a7f\x2d28b15cc5babb.mount: Deactivated successfully.
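The transient mount units in these entries (run-netns-cni\x2d45172753..., var-lib-containerd-tmpmounts-containerd\x2dmount2110823669.mount) use systemd's unit-name escaping: "/" in a path maps to "-", so a literal "-" inside a path component must be escaped as \x2d. A simplified Go sketch of the reverse mapping (systemd-escape(1) defines the full rules; this handles only the escapes seen in this log, and the leading "/" is implied):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // unescapeUnit undoes the two escapes visible above: "\xHH" for escaped
    // bytes (e.g. \x2d for "-") and "-" for "/". Simplified sketch, not a
    // full reimplementation of systemd-escape.
    func unescapeUnit(s string) string {
        var b strings.Builder
        for i := 0; i < len(s); i++ {
            switch {
            case strings.HasPrefix(s[i:], `\x`) && i+3 < len(s):
                if v, err := strconv.ParseUint(s[i+2:i+4], 16, 8); err == nil {
                    b.WriteByte(byte(v))
                    i += 3
                    continue
                }
                b.WriteByte(s[i])
            case s[i] == '-':
                b.WriteByte('/')
            default:
                b.WriteByte(s[i])
            }
        }
        return b.String()
    }

    func main() {
        // Unit name taken from the log above, ".mount" suffix dropped.
        fmt.Println(unescapeUnit(`run-netns-cni\x2d45172753\x2d3048\x2d757b\x2d2a7f\x2d28b15cc5babb`))
        // -> run/netns/cni-45172753-3048-757b-2a7f-28b15cc5babb
    }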
Apr 30 03:39:14.188955 systemd-networkd[1402]: cali58a5c21bb7b: Link UP
Apr 30 03:39:14.189844 systemd-networkd[1402]: cali58a5c21bb7b: Gained carrier
Apr 30 03:39:14.207165 containerd[1507]: 2025-04-30 03:39:14.127 [INFO][5230] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--9--5ae3ade3a2-k8s-calico--kube--controllers--6c8cd954d7--t67wf-eth0 calico-kube-controllers-6c8cd954d7- calico-system bba0c868-51a2-4e1d-8831-4113d7d9bdcd 918 0 2025-04-30 03:38:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6c8cd954d7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-3-9-5ae3ade3a2 calico-kube-controllers-6c8cd954d7-t67wf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali58a5c21bb7b [] []}} ContainerID="7f9399392f5089da739c2ad63039b054192934ab5e7719ae46965cc4c021250f" Namespace="calico-system" Pod="calico-kube-controllers-6c8cd954d7-t67wf" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--kube--controllers--6c8cd954d7--t67wf-"
Apr 30 03:39:14.207165 containerd[1507]: 2025-04-30 03:39:14.127 [INFO][5230] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7f9399392f5089da739c2ad63039b054192934ab5e7719ae46965cc4c021250f" Namespace="calico-system" Pod="calico-kube-controllers-6c8cd954d7-t67wf" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--kube--controllers--6c8cd954d7--t67wf-eth0"
Apr 30 03:39:14.207165 containerd[1507]: 2025-04-30 03:39:14.151 [INFO][5243] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f9399392f5089da739c2ad63039b054192934ab5e7719ae46965cc4c021250f" HandleID="k8s-pod-network.7f9399392f5089da739c2ad63039b054192934ab5e7719ae46965cc4c021250f" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--kube--controllers--6c8cd954d7--t67wf-eth0"
Apr 30 03:39:14.207165 containerd[1507]: 2025-04-30 03:39:14.160 [INFO][5243] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7f9399392f5089da739c2ad63039b054192934ab5e7719ae46965cc4c021250f" HandleID="k8s-pod-network.7f9399392f5089da739c2ad63039b054192934ab5e7719ae46965cc4c021250f" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--kube--controllers--6c8cd954d7--t67wf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290850), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-9-5ae3ade3a2", "pod":"calico-kube-controllers-6c8cd954d7-t67wf", "timestamp":"2025-04-30 03:39:14.151252193 +0000 UTC"}, Hostname:"ci-4081-3-3-9-5ae3ade3a2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Apr 30 03:39:14.207165 containerd[1507]: 2025-04-30 03:39:14.160 [INFO][5243] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Apr 30 03:39:14.207165 containerd[1507]: 2025-04-30 03:39:14.160 [INFO][5243] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Apr 30 03:39:14.207165 containerd[1507]: 2025-04-30 03:39:14.160 [INFO][5243] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-9-5ae3ade3a2'
Apr 30 03:39:14.207165 containerd[1507]: 2025-04-30 03:39:14.161 [INFO][5243] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7f9399392f5089da739c2ad63039b054192934ab5e7719ae46965cc4c021250f" host="ci-4081-3-3-9-5ae3ade3a2"
Apr 30 03:39:14.207165 containerd[1507]: 2025-04-30 03:39:14.165 [INFO][5243] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-9-5ae3ade3a2"
Apr 30 03:39:14.207165 containerd[1507]: 2025-04-30 03:39:14.168 [INFO][5243] ipam/ipam.go 489: Trying affinity for 192.168.61.0/26 host="ci-4081-3-3-9-5ae3ade3a2"
Apr 30 03:39:14.207165 containerd[1507]: 2025-04-30 03:39:14.170 [INFO][5243] ipam/ipam.go 155: Attempting to load block cidr=192.168.61.0/26 host="ci-4081-3-3-9-5ae3ade3a2"
Apr 30 03:39:14.207165 containerd[1507]: 2025-04-30 03:39:14.172 [INFO][5243] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.0/26 host="ci-4081-3-3-9-5ae3ade3a2"
Apr 30 03:39:14.207165 containerd[1507]: 2025-04-30 03:39:14.172 [INFO][5243] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.0/26 handle="k8s-pod-network.7f9399392f5089da739c2ad63039b054192934ab5e7719ae46965cc4c021250f" host="ci-4081-3-3-9-5ae3ade3a2"
Apr 30 03:39:14.207165 containerd[1507]: 2025-04-30 03:39:14.173 [INFO][5243] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7f9399392f5089da739c2ad63039b054192934ab5e7719ae46965cc4c021250f
Apr 30 03:39:14.207165 containerd[1507]: 2025-04-30 03:39:14.178 [INFO][5243] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.61.0/26 handle="k8s-pod-network.7f9399392f5089da739c2ad63039b054192934ab5e7719ae46965cc4c021250f" host="ci-4081-3-3-9-5ae3ade3a2"
Apr 30 03:39:14.207165 containerd[1507]: 2025-04-30 03:39:14.183 [INFO][5243] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.61.3/26] block=192.168.61.0/26 handle="k8s-pod-network.7f9399392f5089da739c2ad63039b054192934ab5e7719ae46965cc4c021250f" host="ci-4081-3-3-9-5ae3ade3a2"
Apr 30 03:39:14.207165 containerd[1507]: 2025-04-30 03:39:14.183 [INFO][5243] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.3/26] handle="k8s-pod-network.7f9399392f5089da739c2ad63039b054192934ab5e7719ae46965cc4c021250f" host="ci-4081-3-3-9-5ae3ade3a2"
Apr 30 03:39:14.207165 containerd[1507]: 2025-04-30 03:39:14.183 [INFO][5243] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Apr 30 03:39:14.207165 containerd[1507]: 2025-04-30 03:39:14.183 [INFO][5243] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.3/26] IPv6=[] ContainerID="7f9399392f5089da739c2ad63039b054192934ab5e7719ae46965cc4c021250f" HandleID="k8s-pod-network.7f9399392f5089da739c2ad63039b054192934ab5e7719ae46965cc4c021250f" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--kube--controllers--6c8cd954d7--t67wf-eth0"
Apr 30 03:39:14.209087 containerd[1507]: 2025-04-30 03:39:14.186 [INFO][5230] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7f9399392f5089da739c2ad63039b054192934ab5e7719ae46965cc4c021250f" Namespace="calico-system" Pod="calico-kube-controllers-6c8cd954d7-t67wf" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--kube--controllers--6c8cd954d7--t67wf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--5ae3ade3a2-k8s-calico--kube--controllers--6c8cd954d7--t67wf-eth0", GenerateName:"calico-kube-controllers-6c8cd954d7-", Namespace:"calico-system", SelfLink:"", UID:"bba0c868-51a2-4e1d-8831-4113d7d9bdcd", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c8cd954d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-5ae3ade3a2", ContainerID:"", Pod:"calico-kube-controllers-6c8cd954d7-t67wf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali58a5c21bb7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 03:39:14.209087 containerd[1507]: 2025-04-30 03:39:14.186 [INFO][5230] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.61.3/32] ContainerID="7f9399392f5089da739c2ad63039b054192934ab5e7719ae46965cc4c021250f" Namespace="calico-system" Pod="calico-kube-controllers-6c8cd954d7-t67wf" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--kube--controllers--6c8cd954d7--t67wf-eth0"
Apr 30 03:39:14.209087 containerd[1507]: 2025-04-30 03:39:14.186 [INFO][5230] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali58a5c21bb7b ContainerID="7f9399392f5089da739c2ad63039b054192934ab5e7719ae46965cc4c021250f" Namespace="calico-system" Pod="calico-kube-controllers-6c8cd954d7-t67wf" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--kube--controllers--6c8cd954d7--t67wf-eth0"
Apr 30 03:39:14.209087 containerd[1507]: 2025-04-30 03:39:14.188 [INFO][5230] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f9399392f5089da739c2ad63039b054192934ab5e7719ae46965cc4c021250f" Namespace="calico-system" Pod="calico-kube-controllers-6c8cd954d7-t67wf" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--kube--controllers--6c8cd954d7--t67wf-eth0"
Apr 30 03:39:14.209087 containerd[1507]: 2025-04-30 03:39:14.188 [INFO][5230] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7f9399392f5089da739c2ad63039b054192934ab5e7719ae46965cc4c021250f" Namespace="calico-system" Pod="calico-kube-controllers-6c8cd954d7-t67wf" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--kube--controllers--6c8cd954d7--t67wf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--5ae3ade3a2-k8s-calico--kube--controllers--6c8cd954d7--t67wf-eth0", GenerateName:"calico-kube-controllers-6c8cd954d7-", Namespace:"calico-system", SelfLink:"", UID:"bba0c868-51a2-4e1d-8831-4113d7d9bdcd", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c8cd954d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-5ae3ade3a2", ContainerID:"7f9399392f5089da739c2ad63039b054192934ab5e7719ae46965cc4c021250f", Pod:"calico-kube-controllers-6c8cd954d7-t67wf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali58a5c21bb7b", MAC:"56:f7:aa:d0:ac:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 03:39:14.209087 containerd[1507]: 2025-04-30 03:39:14.203 [INFO][5230] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7f9399392f5089da739c2ad63039b054192934ab5e7719ae46965cc4c021250f" Namespace="calico-system" Pod="calico-kube-controllers-6c8cd954d7-t67wf" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--kube--controllers--6c8cd954d7--t67wf-eth0"
Apr 30 03:39:14.231790 containerd[1507]: time="2025-04-30T03:39:14.231719759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:39:14.232011 containerd[1507]: time="2025-04-30T03:39:14.231844601Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:39:14.232011 containerd[1507]: time="2025-04-30T03:39:14.231860099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:39:14.232320 containerd[1507]: time="2025-04-30T03:39:14.232149476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:39:14.255790 systemd[1]: Started cri-containerd-7f9399392f5089da739c2ad63039b054192934ab5e7719ae46965cc4c021250f.scope - libcontainer container 7f9399392f5089da739c2ad63039b054192934ab5e7719ae46965cc4c021250f.
Apr 30 03:39:14.299032 containerd[1507]: time="2025-04-30T03:39:14.298965980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c8cd954d7-t67wf,Uid:bba0c868-51a2-4e1d-8831-4113d7d9bdcd,Namespace:calico-system,Attempt:1,} returns sandbox id \"7f9399392f5089da739c2ad63039b054192934ab5e7719ae46965cc4c021250f\"" Apr 30 03:39:14.724208 systemd-networkd[1402]: cali65010f53cf2: Gained IPv6LL Apr 30 03:39:15.235997 systemd-networkd[1402]: cali58a5c21bb7b: Gained IPv6LL Apr 30 03:39:15.518789 containerd[1507]: time="2025-04-30T03:39:15.518733193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:15.520312 containerd[1507]: time="2025-04-30T03:39:15.520262830Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" Apr 30 03:39:15.521988 containerd[1507]: time="2025-04-30T03:39:15.521937677Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:15.524708 containerd[1507]: time="2025-04-30T03:39:15.524682326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:15.525670 containerd[1507]: time="2025-04-30T03:39:15.525441394Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 2.220607193s" Apr 30 03:39:15.525670 containerd[1507]: time="2025-04-30T03:39:15.525476068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" Apr 30 03:39:15.527006 containerd[1507]: time="2025-04-30T03:39:15.526915558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" Apr 30 03:39:15.528937 containerd[1507]: time="2025-04-30T03:39:15.528892494Z" level=info msg="CreateContainer within sandbox \"8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 03:39:15.547719 containerd[1507]: time="2025-04-30T03:39:15.547633741Z" level=info msg="CreateContainer within sandbox \"8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4de21e673fd52004a0dec105f2dd4c019bbe60fb1ef045056dea1f2d3317d742\"" Apr 30 03:39:15.548380 containerd[1507]: time="2025-04-30T03:39:15.548350170Z" level=info msg="StartContainer for \"4de21e673fd52004a0dec105f2dd4c019bbe60fb1ef045056dea1f2d3317d742\"" Apr 30 03:39:15.586737 systemd[1]: Started cri-containerd-4de21e673fd52004a0dec105f2dd4c019bbe60fb1ef045056dea1f2d3317d742.scope - libcontainer container 4de21e673fd52004a0dec105f2dd4c019bbe60fb1ef045056dea1f2d3317d742. 
Apr 30 03:39:15.613956 containerd[1507]: time="2025-04-30T03:39:15.613900308Z" level=info msg="StartContainer for \"4de21e673fd52004a0dec105f2dd4c019bbe60fb1ef045056dea1f2d3317d742\" returns successfully" Apr 30 03:39:15.934176 containerd[1507]: time="2025-04-30T03:39:15.933592112Z" level=info msg="StopPodSandbox for \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\"" Apr 30 03:39:16.061025 containerd[1507]: 2025-04-30 03:39:16.007 [INFO][5352] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" Apr 30 03:39:16.061025 containerd[1507]: 2025-04-30 03:39:16.008 [INFO][5352] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" iface="eth0" netns="/var/run/netns/cni-a8e83c53-3f6d-1f73-303e-cb1862c2c9f3" Apr 30 03:39:16.061025 containerd[1507]: 2025-04-30 03:39:16.008 [INFO][5352] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" iface="eth0" netns="/var/run/netns/cni-a8e83c53-3f6d-1f73-303e-cb1862c2c9f3" Apr 30 03:39:16.061025 containerd[1507]: 2025-04-30 03:39:16.008 [INFO][5352] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" iface="eth0" netns="/var/run/netns/cni-a8e83c53-3f6d-1f73-303e-cb1862c2c9f3" Apr 30 03:39:16.061025 containerd[1507]: 2025-04-30 03:39:16.009 [INFO][5352] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" Apr 30 03:39:16.061025 containerd[1507]: 2025-04-30 03:39:16.009 [INFO][5352] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" Apr 30 03:39:16.061025 containerd[1507]: 2025-04-30 03:39:16.046 [INFO][5359] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" HandleID="k8s-pod-network.604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--tbr7c-eth0" Apr 30 03:39:16.061025 containerd[1507]: 2025-04-30 03:39:16.046 [INFO][5359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:16.061025 containerd[1507]: 2025-04-30 03:39:16.046 [INFO][5359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:39:16.061025 containerd[1507]: 2025-04-30 03:39:16.054 [WARNING][5359] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" HandleID="k8s-pod-network.604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--tbr7c-eth0" Apr 30 03:39:16.061025 containerd[1507]: 2025-04-30 03:39:16.054 [INFO][5359] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" HandleID="k8s-pod-network.604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--tbr7c-eth0" Apr 30 03:39:16.061025 containerd[1507]: 2025-04-30 03:39:16.056 [INFO][5359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:39:16.061025 containerd[1507]: 2025-04-30 03:39:16.058 [INFO][5352] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" Apr 30 03:39:16.070492 containerd[1507]: time="2025-04-30T03:39:16.062764003Z" level=info msg="TearDown network for sandbox \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\" successfully" Apr 30 03:39:16.070492 containerd[1507]: time="2025-04-30T03:39:16.062806422Z" level=info msg="StopPodSandbox for \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\" returns successfully" Apr 30 03:39:16.070492 containerd[1507]: time="2025-04-30T03:39:16.065747357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d7977669f-tbr7c,Uid:19f198dd-9e67-40cc-982d-d60b573a979a,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:39:16.067338 systemd[1]: run-netns-cni\x2da8e83c53\x2d3f6d\x2d1f73\x2d303e\x2dcb1862c2c9f3.mount: Deactivated successfully. Apr 30 03:39:16.207596 systemd-networkd[1402]: cali12594af0742: Link UP Apr 30 03:39:16.208823 systemd-networkd[1402]: cali12594af0742: Gained carrier Apr 30 03:39:16.225314 containerd[1507]: 2025-04-30 03:39:16.131 [INFO][5366] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--tbr7c-eth0 calico-apiserver-6d7977669f- calico-apiserver 19f198dd-9e67-40cc-982d-d60b573a979a 931 0 2025-04-30 03:38:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d7977669f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-9-5ae3ade3a2 calico-apiserver-6d7977669f-tbr7c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali12594af0742 [] []}} ContainerID="b7d20cc5b86e34f0a70edb1531caa941ce591ef028d9ba24ddd5fbb0900c26b8" Namespace="calico-apiserver" Pod="calico-apiserver-6d7977669f-tbr7c" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--tbr7c-" Apr 30 03:39:16.225314 containerd[1507]: 2025-04-30 03:39:16.132 [INFO][5366] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b7d20cc5b86e34f0a70edb1531caa941ce591ef028d9ba24ddd5fbb0900c26b8" Namespace="calico-apiserver" Pod="calico-apiserver-6d7977669f-tbr7c" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--tbr7c-eth0" Apr 30 03:39:16.225314 containerd[1507]: 2025-04-30 03:39:16.165 [INFO][5377] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7d20cc5b86e34f0a70edb1531caa941ce591ef028d9ba24ddd5fbb0900c26b8" HandleID="k8s-pod-network.b7d20cc5b86e34f0a70edb1531caa941ce591ef028d9ba24ddd5fbb0900c26b8" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--tbr7c-eth0" Apr 30 03:39:16.225314 containerd[1507]: 2025-04-30 03:39:16.174 [INFO][5377] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b7d20cc5b86e34f0a70edb1531caa941ce591ef028d9ba24ddd5fbb0900c26b8" HandleID="k8s-pod-network.b7d20cc5b86e34f0a70edb1531caa941ce591ef028d9ba24ddd5fbb0900c26b8" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--tbr7c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bb280), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-9-5ae3ade3a2", 
"pod":"calico-apiserver-6d7977669f-tbr7c", "timestamp":"2025-04-30 03:39:16.165110087 +0000 UTC"}, Hostname:"ci-4081-3-3-9-5ae3ade3a2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:39:16.225314 containerd[1507]: 2025-04-30 03:39:16.174 [INFO][5377] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:16.225314 containerd[1507]: 2025-04-30 03:39:16.174 [INFO][5377] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:39:16.225314 containerd[1507]: 2025-04-30 03:39:16.174 [INFO][5377] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-9-5ae3ade3a2' Apr 30 03:39:16.225314 containerd[1507]: 2025-04-30 03:39:16.176 [INFO][5377] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b7d20cc5b86e34f0a70edb1531caa941ce591ef028d9ba24ddd5fbb0900c26b8" host="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:39:16.225314 containerd[1507]: 2025-04-30 03:39:16.180 [INFO][5377] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:39:16.225314 containerd[1507]: 2025-04-30 03:39:16.185 [INFO][5377] ipam/ipam.go 489: Trying affinity for 192.168.61.0/26 host="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:39:16.225314 containerd[1507]: 2025-04-30 03:39:16.187 [INFO][5377] ipam/ipam.go 155: Attempting to load block cidr=192.168.61.0/26 host="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:39:16.225314 containerd[1507]: 2025-04-30 03:39:16.189 [INFO][5377] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.0/26 host="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:39:16.225314 containerd[1507]: 2025-04-30 03:39:16.189 [INFO][5377] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.0/26 handle="k8s-pod-network.b7d20cc5b86e34f0a70edb1531caa941ce591ef028d9ba24ddd5fbb0900c26b8" host="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:39:16.225314 containerd[1507]: 2025-04-30 03:39:16.191 [INFO][5377] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b7d20cc5b86e34f0a70edb1531caa941ce591ef028d9ba24ddd5fbb0900c26b8 Apr 30 03:39:16.225314 containerd[1507]: 2025-04-30 03:39:16.196 [INFO][5377] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.61.0/26 handle="k8s-pod-network.b7d20cc5b86e34f0a70edb1531caa941ce591ef028d9ba24ddd5fbb0900c26b8" host="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:39:16.225314 containerd[1507]: 2025-04-30 03:39:16.202 [INFO][5377] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.61.4/26] block=192.168.61.0/26 handle="k8s-pod-network.b7d20cc5b86e34f0a70edb1531caa941ce591ef028d9ba24ddd5fbb0900c26b8" host="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:39:16.225314 containerd[1507]: 2025-04-30 03:39:16.202 [INFO][5377] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.4/26] handle="k8s-pod-network.b7d20cc5b86e34f0a70edb1531caa941ce591ef028d9ba24ddd5fbb0900c26b8" host="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:39:16.225314 containerd[1507]: 2025-04-30 03:39:16.202 [INFO][5377] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:39:16.225314 containerd[1507]: 2025-04-30 03:39:16.202 [INFO][5377] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.4/26] IPv6=[] ContainerID="b7d20cc5b86e34f0a70edb1531caa941ce591ef028d9ba24ddd5fbb0900c26b8" HandleID="k8s-pod-network.b7d20cc5b86e34f0a70edb1531caa941ce591ef028d9ba24ddd5fbb0900c26b8" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--tbr7c-eth0" Apr 30 03:39:16.226890 containerd[1507]: 2025-04-30 03:39:16.205 [INFO][5366] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b7d20cc5b86e34f0a70edb1531caa941ce591ef028d9ba24ddd5fbb0900c26b8" Namespace="calico-apiserver" Pod="calico-apiserver-6d7977669f-tbr7c" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--tbr7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--tbr7c-eth0", GenerateName:"calico-apiserver-6d7977669f-", Namespace:"calico-apiserver", SelfLink:"", UID:"19f198dd-9e67-40cc-982d-d60b573a979a", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d7977669f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-5ae3ade3a2", ContainerID:"", Pod:"calico-apiserver-6d7977669f-tbr7c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12594af0742", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:16.226890 containerd[1507]: 2025-04-30 03:39:16.205 [INFO][5366] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.61.4/32] ContainerID="b7d20cc5b86e34f0a70edb1531caa941ce591ef028d9ba24ddd5fbb0900c26b8" Namespace="calico-apiserver" Pod="calico-apiserver-6d7977669f-tbr7c" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--tbr7c-eth0" Apr 30 03:39:16.226890 containerd[1507]: 2025-04-30 03:39:16.205 [INFO][5366] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12594af0742 ContainerID="b7d20cc5b86e34f0a70edb1531caa941ce591ef028d9ba24ddd5fbb0900c26b8" Namespace="calico-apiserver" Pod="calico-apiserver-6d7977669f-tbr7c" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--tbr7c-eth0" Apr 30 03:39:16.226890 containerd[1507]: 2025-04-30 03:39:16.207 [INFO][5366] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b7d20cc5b86e34f0a70edb1531caa941ce591ef028d9ba24ddd5fbb0900c26b8" Namespace="calico-apiserver" Pod="calico-apiserver-6d7977669f-tbr7c" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--tbr7c-eth0" Apr 30 03:39:16.226890 containerd[1507]: 2025-04-30 03:39:16.209 [INFO][5366] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="b7d20cc5b86e34f0a70edb1531caa941ce591ef028d9ba24ddd5fbb0900c26b8" Namespace="calico-apiserver" Pod="calico-apiserver-6d7977669f-tbr7c" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--tbr7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--tbr7c-eth0", GenerateName:"calico-apiserver-6d7977669f-", Namespace:"calico-apiserver", SelfLink:"", UID:"19f198dd-9e67-40cc-982d-d60b573a979a", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d7977669f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-5ae3ade3a2", ContainerID:"b7d20cc5b86e34f0a70edb1531caa941ce591ef028d9ba24ddd5fbb0900c26b8", Pod:"calico-apiserver-6d7977669f-tbr7c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12594af0742", MAC:"52:bd:b6:51:26:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:16.226890 containerd[1507]: 2025-04-30 03:39:16.222 [INFO][5366] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b7d20cc5b86e34f0a70edb1531caa941ce591ef028d9ba24ddd5fbb0900c26b8" Namespace="calico-apiserver" Pod="calico-apiserver-6d7977669f-tbr7c" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--tbr7c-eth0" Apr 30 03:39:16.250345 containerd[1507]: time="2025-04-30T03:39:16.249654560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:39:16.250345 containerd[1507]: time="2025-04-30T03:39:16.249711055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:39:16.250345 containerd[1507]: time="2025-04-30T03:39:16.249723548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:16.250345 containerd[1507]: time="2025-04-30T03:39:16.249790142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:16.269757 systemd[1]: Started cri-containerd-b7d20cc5b86e34f0a70edb1531caa941ce591ef028d9ba24ddd5fbb0900c26b8.scope - libcontainer container b7d20cc5b86e34f0a70edb1531caa941ce591ef028d9ba24ddd5fbb0900c26b8. 
Apr 30 03:39:16.301988 containerd[1507]: time="2025-04-30T03:39:16.301878110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d7977669f-tbr7c,Uid:19f198dd-9e67-40cc-982d-d60b573a979a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b7d20cc5b86e34f0a70edb1531caa941ce591ef028d9ba24ddd5fbb0900c26b8\"" Apr 30 03:39:18.116388 systemd-networkd[1402]: cali12594af0742: Gained IPv6LL Apr 30 03:39:18.451237 containerd[1507]: time="2025-04-30T03:39:18.451142508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:18.452353 containerd[1507]: time="2025-04-30T03:39:18.452322027Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" Apr 30 03:39:18.453564 containerd[1507]: time="2025-04-30T03:39:18.453529086Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:18.455759 containerd[1507]: time="2025-04-30T03:39:18.455721936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:18.456591 containerd[1507]: time="2025-04-30T03:39:18.456163907Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 2.929220758s" Apr 30 03:39:18.456591 containerd[1507]: time="2025-04-30T03:39:18.456186237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" Apr 30 03:39:18.457118 containerd[1507]: time="2025-04-30T03:39:18.456895343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 03:39:18.472255 containerd[1507]: time="2025-04-30T03:39:18.472208017Z" level=info msg="CreateContainer within sandbox \"7f9399392f5089da739c2ad63039b054192934ab5e7719ae46965cc4c021250f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 03:39:18.486521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1641104215.mount: Deactivated successfully. Apr 30 03:39:18.488501 containerd[1507]: time="2025-04-30T03:39:18.488468430Z" level=info msg="CreateContainer within sandbox \"7f9399392f5089da739c2ad63039b054192934ab5e7719ae46965cc4c021250f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8cfcec5b51baa04d57e908769d6a7a2a7ba4d256f24e257b2d52b6fb0bff2a94\"" Apr 30 03:39:18.489090 containerd[1507]: time="2025-04-30T03:39:18.489075526Z" level=info msg="StartContainer for \"8cfcec5b51baa04d57e908769d6a7a2a7ba4d256f24e257b2d52b6fb0bff2a94\"" Apr 30 03:39:18.528750 systemd[1]: Started cri-containerd-8cfcec5b51baa04d57e908769d6a7a2a7ba4d256f24e257b2d52b6fb0bff2a94.scope - libcontainer container 8cfcec5b51baa04d57e908769d6a7a2a7ba4d256f24e257b2d52b6fb0bff2a94. 
Apr 30 03:39:18.563849 containerd[1507]: time="2025-04-30T03:39:18.563789094Z" level=info msg="StartContainer for \"8cfcec5b51baa04d57e908769d6a7a2a7ba4d256f24e257b2d52b6fb0bff2a94\" returns successfully" Apr 30 03:39:19.555314 kubelet[2782]: I0430 03:39:19.555228 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6c8cd954d7-t67wf" podStartSLOduration=69.398878027 podStartE2EDuration="1m13.555207594s" podCreationTimestamp="2025-04-30 03:38:06 +0000 UTC" firstStartedPulling="2025-04-30 03:39:14.300455792 +0000 UTC m=+94.470987434" lastFinishedPulling="2025-04-30 03:39:18.456785359 +0000 UTC m=+98.627317001" observedRunningTime="2025-04-30 03:39:19.452260263 +0000 UTC m=+99.622791956" watchObservedRunningTime="2025-04-30 03:39:19.555207594 +0000 UTC m=+99.725739236" Apr 30 03:39:20.800750 containerd[1507]: time="2025-04-30T03:39:20.800693108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:20.801898 containerd[1507]: time="2025-04-30T03:39:20.801857689Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" Apr 30 03:39:20.803063 containerd[1507]: time="2025-04-30T03:39:20.803018223Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:20.805252 containerd[1507]: time="2025-04-30T03:39:20.805217747Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:20.808810 containerd[1507]: time="2025-04-30T03:39:20.808788926Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.351874056s" Apr 30 03:39:20.809593 containerd[1507]: time="2025-04-30T03:39:20.808906623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" Apr 30 03:39:20.809904 containerd[1507]: time="2025-04-30T03:39:20.809750951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:39:20.810703 containerd[1507]: time="2025-04-30T03:39:20.810659387Z" level=info msg="CreateContainer within sandbox \"8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 03:39:20.883567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2515975135.mount: Deactivated successfully. 
Apr 30 03:39:20.884587 containerd[1507]: time="2025-04-30T03:39:20.884236434Z" level=info msg="CreateContainer within sandbox \"8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1a4a7d3db4e7dbb2005a58e21bd923e42ef4a83e309643f4473b44374be4b93d\"" Apr 30 03:39:20.886950 containerd[1507]: time="2025-04-30T03:39:20.884947142Z" level=info msg="StartContainer for \"1a4a7d3db4e7dbb2005a58e21bd923e42ef4a83e309643f4473b44374be4b93d\"" Apr 30 03:39:20.934307 containerd[1507]: time="2025-04-30T03:39:20.933509034Z" level=info msg="StopPodSandbox for \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\"" Apr 30 03:39:20.962787 systemd[1]: Started cri-containerd-1a4a7d3db4e7dbb2005a58e21bd923e42ef4a83e309643f4473b44374be4b93d.scope - libcontainer container 1a4a7d3db4e7dbb2005a58e21bd923e42ef4a83e309643f4473b44374be4b93d. Apr 30 03:39:21.000465 containerd[1507]: time="2025-04-30T03:39:21.000421329Z" level=info msg="StartContainer for \"1a4a7d3db4e7dbb2005a58e21bd923e42ef4a83e309643f4473b44374be4b93d\" returns successfully" Apr 30 03:39:21.039181 containerd[1507]: 2025-04-30 03:39:20.997 [INFO][5537] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" Apr 30 03:39:21.039181 containerd[1507]: 2025-04-30 03:39:20.997 [INFO][5537] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" iface="eth0" netns="/var/run/netns/cni-91a9cd29-a57d-335d-d22b-16c41aa07068" Apr 30 03:39:21.039181 containerd[1507]: 2025-04-30 03:39:20.998 [INFO][5537] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" iface="eth0" netns="/var/run/netns/cni-91a9cd29-a57d-335d-d22b-16c41aa07068" Apr 30 03:39:21.039181 containerd[1507]: 2025-04-30 03:39:20.998 [INFO][5537] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" iface="eth0" netns="/var/run/netns/cni-91a9cd29-a57d-335d-d22b-16c41aa07068" Apr 30 03:39:21.039181 containerd[1507]: 2025-04-30 03:39:20.999 [INFO][5537] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" Apr 30 03:39:21.039181 containerd[1507]: 2025-04-30 03:39:20.999 [INFO][5537] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" Apr 30 03:39:21.039181 containerd[1507]: 2025-04-30 03:39:21.026 [INFO][5563] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" HandleID="k8s-pod-network.c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--6z8bp-eth0" Apr 30 03:39:21.039181 containerd[1507]: 2025-04-30 03:39:21.026 [INFO][5563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:21.039181 containerd[1507]: 2025-04-30 03:39:21.026 [INFO][5563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:39:21.039181 containerd[1507]: 2025-04-30 03:39:21.032 [WARNING][5563] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" HandleID="k8s-pod-network.c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--6z8bp-eth0" Apr 30 03:39:21.039181 containerd[1507]: 2025-04-30 03:39:21.032 [INFO][5563] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" HandleID="k8s-pod-network.c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--6z8bp-eth0" Apr 30 03:39:21.039181 containerd[1507]: 2025-04-30 03:39:21.034 [INFO][5563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:39:21.039181 containerd[1507]: 2025-04-30 03:39:21.036 [INFO][5537] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" Apr 30 03:39:21.043740 containerd[1507]: time="2025-04-30T03:39:21.039317351Z" level=info msg="TearDown network for sandbox \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\" successfully" Apr 30 03:39:21.043740 containerd[1507]: time="2025-04-30T03:39:21.039343610Z" level=info msg="StopPodSandbox for \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\" returns successfully" Apr 30 03:39:21.043740 containerd[1507]: time="2025-04-30T03:39:21.042173483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6z8bp,Uid:9c996608-b929-44c2-8ad9-06eca9de0734,Namespace:kube-system,Attempt:1,}" Apr 30 03:39:21.042354 systemd[1]: run-netns-cni\x2d91a9cd29\x2da57d\x2d335d\x2dd22b\x2d16c41aa07068.mount: Deactivated successfully. Apr 30 03:39:21.184814 systemd-networkd[1402]: cali5272b817d3c: Link UP Apr 30 03:39:21.187033 systemd-networkd[1402]: cali5272b817d3c: Gained carrier Apr 30 03:39:21.211503 containerd[1507]: 2025-04-30 03:39:21.105 [INFO][5572] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--6z8bp-eth0 coredns-7db6d8ff4d- kube-system 9c996608-b929-44c2-8ad9-06eca9de0734 957 0 2025-04-30 03:37:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-9-5ae3ade3a2 coredns-7db6d8ff4d-6z8bp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5272b817d3c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="abf6afeae3ba54bdda20283b3eaa937795e6ebc4fcaeaba704f884a03699f70e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6z8bp" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--6z8bp-" Apr 30 03:39:21.211503 containerd[1507]: 2025-04-30 03:39:21.105 [INFO][5572] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="abf6afeae3ba54bdda20283b3eaa937795e6ebc4fcaeaba704f884a03699f70e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6z8bp" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--6z8bp-eth0" Apr 30 03:39:21.211503 containerd[1507]: 2025-04-30 03:39:21.131 [INFO][5585] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="abf6afeae3ba54bdda20283b3eaa937795e6ebc4fcaeaba704f884a03699f70e" HandleID="k8s-pod-network.abf6afeae3ba54bdda20283b3eaa937795e6ebc4fcaeaba704f884a03699f70e" 
Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--6z8bp-eth0" Apr 30 03:39:21.211503 containerd[1507]: 2025-04-30 03:39:21.139 [INFO][5585] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="abf6afeae3ba54bdda20283b3eaa937795e6ebc4fcaeaba704f884a03699f70e" HandleID="k8s-pod-network.abf6afeae3ba54bdda20283b3eaa937795e6ebc4fcaeaba704f884a03699f70e" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--6z8bp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b240), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-9-5ae3ade3a2", "pod":"coredns-7db6d8ff4d-6z8bp", "timestamp":"2025-04-30 03:39:21.131061768 +0000 UTC"}, Hostname:"ci-4081-3-3-9-5ae3ade3a2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:39:21.211503 containerd[1507]: 2025-04-30 03:39:21.139 [INFO][5585] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:21.211503 containerd[1507]: 2025-04-30 03:39:21.140 [INFO][5585] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:39:21.211503 containerd[1507]: 2025-04-30 03:39:21.140 [INFO][5585] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-9-5ae3ade3a2' Apr 30 03:39:21.211503 containerd[1507]: 2025-04-30 03:39:21.143 [INFO][5585] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.abf6afeae3ba54bdda20283b3eaa937795e6ebc4fcaeaba704f884a03699f70e" host="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:39:21.211503 containerd[1507]: 2025-04-30 03:39:21.147 [INFO][5585] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:39:21.211503 containerd[1507]: 2025-04-30 03:39:21.154 [INFO][5585] ipam/ipam.go 489: Trying affinity for 192.168.61.0/26 host="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:39:21.211503 containerd[1507]: 2025-04-30 03:39:21.156 [INFO][5585] ipam/ipam.go 155: Attempting to load block cidr=192.168.61.0/26 host="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:39:21.211503 containerd[1507]: 2025-04-30 03:39:21.158 [INFO][5585] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.0/26 host="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:39:21.211503 containerd[1507]: 2025-04-30 03:39:21.158 [INFO][5585] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.0/26 handle="k8s-pod-network.abf6afeae3ba54bdda20283b3eaa937795e6ebc4fcaeaba704f884a03699f70e" host="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:39:21.211503 containerd[1507]: 2025-04-30 03:39:21.160 [INFO][5585] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.abf6afeae3ba54bdda20283b3eaa937795e6ebc4fcaeaba704f884a03699f70e Apr 30 03:39:21.211503 containerd[1507]: 2025-04-30 03:39:21.166 [INFO][5585] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.61.0/26 handle="k8s-pod-network.abf6afeae3ba54bdda20283b3eaa937795e6ebc4fcaeaba704f884a03699f70e" host="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:39:21.211503 containerd[1507]: 2025-04-30 03:39:21.174 [INFO][5585] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.61.5/26] block=192.168.61.0/26 handle="k8s-pod-network.abf6afeae3ba54bdda20283b3eaa937795e6ebc4fcaeaba704f884a03699f70e" host="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:39:21.211503 containerd[1507]: 2025-04-30 03:39:21.174 [INFO][5585] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.5/26] 
handle="k8s-pod-network.abf6afeae3ba54bdda20283b3eaa937795e6ebc4fcaeaba704f884a03699f70e" host="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:39:21.211503 containerd[1507]: 2025-04-30 03:39:21.174 [INFO][5585] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:39:21.211503 containerd[1507]: 2025-04-30 03:39:21.174 [INFO][5585] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.5/26] IPv6=[] ContainerID="abf6afeae3ba54bdda20283b3eaa937795e6ebc4fcaeaba704f884a03699f70e" HandleID="k8s-pod-network.abf6afeae3ba54bdda20283b3eaa937795e6ebc4fcaeaba704f884a03699f70e" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--6z8bp-eth0" Apr 30 03:39:21.216738 containerd[1507]: 2025-04-30 03:39:21.177 [INFO][5572] cni-plugin/k8s.go 386: Populated endpoint ContainerID="abf6afeae3ba54bdda20283b3eaa937795e6ebc4fcaeaba704f884a03699f70e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6z8bp" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--6z8bp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--6z8bp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9c996608-b929-44c2-8ad9-06eca9de0734", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 37, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-5ae3ade3a2", ContainerID:"", Pod:"coredns-7db6d8ff4d-6z8bp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5272b817d3c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:21.216738 containerd[1507]: 2025-04-30 03:39:21.177 [INFO][5572] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.61.5/32] ContainerID="abf6afeae3ba54bdda20283b3eaa937795e6ebc4fcaeaba704f884a03699f70e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6z8bp" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--6z8bp-eth0" Apr 30 03:39:21.216738 containerd[1507]: 2025-04-30 03:39:21.177 [INFO][5572] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5272b817d3c ContainerID="abf6afeae3ba54bdda20283b3eaa937795e6ebc4fcaeaba704f884a03699f70e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6z8bp" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--6z8bp-eth0" Apr 30 03:39:21.216738 containerd[1507]: 
2025-04-30 03:39:21.188 [INFO][5572] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="abf6afeae3ba54bdda20283b3eaa937795e6ebc4fcaeaba704f884a03699f70e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6z8bp" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--6z8bp-eth0" Apr 30 03:39:21.216738 containerd[1507]: 2025-04-30 03:39:21.189 [INFO][5572] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="abf6afeae3ba54bdda20283b3eaa937795e6ebc4fcaeaba704f884a03699f70e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6z8bp" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--6z8bp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--6z8bp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9c996608-b929-44c2-8ad9-06eca9de0734", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 37, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-5ae3ade3a2", ContainerID:"abf6afeae3ba54bdda20283b3eaa937795e6ebc4fcaeaba704f884a03699f70e", Pod:"coredns-7db6d8ff4d-6z8bp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5272b817d3c", MAC:"92:e3:1c:3d:17:d4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:21.216738 containerd[1507]: 2025-04-30 03:39:21.202 [INFO][5572] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="abf6afeae3ba54bdda20283b3eaa937795e6ebc4fcaeaba704f884a03699f70e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6z8bp" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--6z8bp-eth0" Apr 30 03:39:21.236148 kubelet[2782]: I0430 03:39:21.236024 2782 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 03:39:21.244513 kubelet[2782]: I0430 03:39:21.243358 2782 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 03:39:21.248865 containerd[1507]: time="2025-04-30T03:39:21.247235701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:39:21.248865 containerd[1507]: time="2025-04-30T03:39:21.247305901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:39:21.248865 containerd[1507]: time="2025-04-30T03:39:21.247318263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:21.249481 containerd[1507]: time="2025-04-30T03:39:21.249021275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:21.268753 systemd[1]: Started cri-containerd-abf6afeae3ba54bdda20283b3eaa937795e6ebc4fcaeaba704f884a03699f70e.scope - libcontainer container abf6afeae3ba54bdda20283b3eaa937795e6ebc4fcaeaba704f884a03699f70e. Apr 30 03:39:21.323898 containerd[1507]: time="2025-04-30T03:39:21.323473870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6z8bp,Uid:9c996608-b929-44c2-8ad9-06eca9de0734,Namespace:kube-system,Attempt:1,} returns sandbox id \"abf6afeae3ba54bdda20283b3eaa937795e6ebc4fcaeaba704f884a03699f70e\"" Apr 30 03:39:21.327430 containerd[1507]: time="2025-04-30T03:39:21.327128695Z" level=info msg="CreateContainer within sandbox \"abf6afeae3ba54bdda20283b3eaa937795e6ebc4fcaeaba704f884a03699f70e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:39:21.339638 containerd[1507]: time="2025-04-30T03:39:21.339579761Z" level=info msg="CreateContainer within sandbox \"abf6afeae3ba54bdda20283b3eaa937795e6ebc4fcaeaba704f884a03699f70e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b8bcd3c0ef9142ca7de36466ca499a7535d83b224562acbac70fb11311556baa\"" Apr 30 03:39:21.342015 containerd[1507]: time="2025-04-30T03:39:21.340920821Z" level=info msg="StartContainer for \"b8bcd3c0ef9142ca7de36466ca499a7535d83b224562acbac70fb11311556baa\"" Apr 30 03:39:21.377728 systemd[1]: Started cri-containerd-b8bcd3c0ef9142ca7de36466ca499a7535d83b224562acbac70fb11311556baa.scope - libcontainer container b8bcd3c0ef9142ca7de36466ca499a7535d83b224562acbac70fb11311556baa. 
Apr 30 03:39:21.399012 containerd[1507]: time="2025-04-30T03:39:21.398947129Z" level=info msg="StartContainer for \"b8bcd3c0ef9142ca7de36466ca499a7535d83b224562acbac70fb11311556baa\" returns successfully" Apr 30 03:39:21.481706 kubelet[2782]: I0430 03:39:21.479220 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6z8bp" podStartSLOduration=86.479198238 podStartE2EDuration="1m26.479198238s" podCreationTimestamp="2025-04-30 03:37:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:39:21.451236046 +0000 UTC m=+101.621767738" watchObservedRunningTime="2025-04-30 03:39:21.479198238 +0000 UTC m=+101.649729881" Apr 30 03:39:22.458113 kubelet[2782]: I0430 03:39:22.457987 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rjjf8" podStartSLOduration=68.952825178 podStartE2EDuration="1m16.457960614s" podCreationTimestamp="2025-04-30 03:38:06 +0000 UTC" firstStartedPulling="2025-04-30 03:39:13.304522282 +0000 UTC m=+93.475053924" lastFinishedPulling="2025-04-30 03:39:20.809657718 +0000 UTC m=+100.980189360" observedRunningTime="2025-04-30 03:39:21.479960915 +0000 UTC m=+101.650492557" watchObservedRunningTime="2025-04-30 03:39:22.457960614 +0000 UTC m=+102.628492297" Apr 30 03:39:22.916078 systemd-networkd[1402]: cali5272b817d3c: Gained IPv6LL Apr 30 03:39:23.935099 containerd[1507]: time="2025-04-30T03:39:23.934872907Z" level=info msg="StopPodSandbox for \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\"" Apr 30 03:39:24.037918 containerd[1507]: 2025-04-30 03:39:23.990 [INFO][5715] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" Apr 30 03:39:24.037918 containerd[1507]: 2025-04-30 03:39:23.990 [INFO][5715] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" iface="eth0" netns="/var/run/netns/cni-8dca38bd-7a58-6095-e5f8-548ecd4a67a6" Apr 30 03:39:24.037918 containerd[1507]: 2025-04-30 03:39:23.990 [INFO][5715] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" iface="eth0" netns="/var/run/netns/cni-8dca38bd-7a58-6095-e5f8-548ecd4a67a6" Apr 30 03:39:24.037918 containerd[1507]: 2025-04-30 03:39:23.991 [INFO][5715] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" iface="eth0" netns="/var/run/netns/cni-8dca38bd-7a58-6095-e5f8-548ecd4a67a6" Apr 30 03:39:24.037918 containerd[1507]: 2025-04-30 03:39:23.991 [INFO][5715] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" Apr 30 03:39:24.037918 containerd[1507]: 2025-04-30 03:39:23.991 [INFO][5715] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" Apr 30 03:39:24.037918 containerd[1507]: 2025-04-30 03:39:24.023 [INFO][5723] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" HandleID="k8s-pod-network.ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--4lt2m-eth0" Apr 30 03:39:24.037918 containerd[1507]: 2025-04-30 03:39:24.024 [INFO][5723] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:24.037918 containerd[1507]: 2025-04-30 03:39:24.024 [INFO][5723] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:39:24.037918 containerd[1507]: 2025-04-30 03:39:24.029 [WARNING][5723] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" HandleID="k8s-pod-network.ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--4lt2m-eth0" Apr 30 03:39:24.037918 containerd[1507]: 2025-04-30 03:39:24.029 [INFO][5723] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" HandleID="k8s-pod-network.ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--4lt2m-eth0" Apr 30 03:39:24.037918 containerd[1507]: 2025-04-30 03:39:24.030 [INFO][5723] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:39:24.037918 containerd[1507]: 2025-04-30 03:39:24.035 [INFO][5715] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" Apr 30 03:39:24.037918 containerd[1507]: time="2025-04-30T03:39:24.037804759Z" level=info msg="TearDown network for sandbox \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\" successfully" Apr 30 03:39:24.037918 containerd[1507]: time="2025-04-30T03:39:24.037828674Z" level=info msg="StopPodSandbox for \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\" returns successfully" Apr 30 03:39:24.038762 containerd[1507]: time="2025-04-30T03:39:24.038662412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d7977669f-4lt2m,Uid:1a0a5d29-6bdf-4990-88c7-d46de9879e7c,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:39:24.041966 systemd[1]: run-netns-cni\x2d8dca38bd\x2d7a58\x2d6095\x2de5f8\x2d548ecd4a67a6.mount: Deactivated successfully. 
Apr 30 03:39:24.091427 containerd[1507]: time="2025-04-30T03:39:24.091389080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:24.092846 containerd[1507]: time="2025-04-30T03:39:24.092819456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" Apr 30 03:39:24.094065 containerd[1507]: time="2025-04-30T03:39:24.094047167Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:24.096572 containerd[1507]: time="2025-04-30T03:39:24.096551338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:24.097921 containerd[1507]: time="2025-04-30T03:39:24.097503647Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 3.287726648s" Apr 30 03:39:24.098012 containerd[1507]: time="2025-04-30T03:39:24.097930960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:39:24.101011 containerd[1507]: time="2025-04-30T03:39:24.100991606Z" level=info msg="CreateContainer within sandbox \"b7d20cc5b86e34f0a70edb1531caa941ce591ef028d9ba24ddd5fbb0900c26b8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:39:24.124900 containerd[1507]: time="2025-04-30T03:39:24.124860913Z" level=info msg="CreateContainer within sandbox \"b7d20cc5b86e34f0a70edb1531caa941ce591ef028d9ba24ddd5fbb0900c26b8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d064c80a83f77348afe5b440b8e740c985395f0639ed6d2b4abea7e0dd3c9c91\"" Apr 30 03:39:24.125707 containerd[1507]: time="2025-04-30T03:39:24.125694170Z" level=info msg="StartContainer for \"d064c80a83f77348afe5b440b8e740c985395f0639ed6d2b4abea7e0dd3c9c91\"" Apr 30 03:39:24.162772 systemd[1]: Started cri-containerd-d064c80a83f77348afe5b440b8e740c985395f0639ed6d2b4abea7e0dd3c9c91.scope - libcontainer container d064c80a83f77348afe5b440b8e740c985395f0639ed6d2b4abea7e0dd3c9c91. 
Apr 30 03:39:24.188761 systemd-networkd[1402]: cali777137177f8: Link UP Apr 30 03:39:24.188929 systemd-networkd[1402]: cali777137177f8: Gained carrier Apr 30 03:39:24.210979 containerd[1507]: 2025-04-30 03:39:24.095 [INFO][5729] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--4lt2m-eth0 calico-apiserver-6d7977669f- calico-apiserver 1a0a5d29-6bdf-4990-88c7-d46de9879e7c 985 0 2025-04-30 03:38:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d7977669f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-9-5ae3ade3a2 calico-apiserver-6d7977669f-4lt2m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali777137177f8 [] []}} ContainerID="a4167e04b0f2dba05140745b7790c51292bebab8fa3b0056559fc7930b0f8c52" Namespace="calico-apiserver" Pod="calico-apiserver-6d7977669f-4lt2m" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--4lt2m-" Apr 30 03:39:24.210979 containerd[1507]: 2025-04-30 03:39:24.095 [INFO][5729] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a4167e04b0f2dba05140745b7790c51292bebab8fa3b0056559fc7930b0f8c52" Namespace="calico-apiserver" Pod="calico-apiserver-6d7977669f-4lt2m" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--4lt2m-eth0" Apr 30 03:39:24.210979 containerd[1507]: 2025-04-30 03:39:24.129 [INFO][5747] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4167e04b0f2dba05140745b7790c51292bebab8fa3b0056559fc7930b0f8c52" HandleID="k8s-pod-network.a4167e04b0f2dba05140745b7790c51292bebab8fa3b0056559fc7930b0f8c52" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--4lt2m-eth0" Apr 30 03:39:24.210979 containerd[1507]: 2025-04-30 03:39:24.145 [INFO][5747] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a4167e04b0f2dba05140745b7790c51292bebab8fa3b0056559fc7930b0f8c52" HandleID="k8s-pod-network.a4167e04b0f2dba05140745b7790c51292bebab8fa3b0056559fc7930b0f8c52" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--4lt2m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334c50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-9-5ae3ade3a2", "pod":"calico-apiserver-6d7977669f-4lt2m", "timestamp":"2025-04-30 03:39:24.129162863 +0000 UTC"}, Hostname:"ci-4081-3-3-9-5ae3ade3a2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:39:24.210979 containerd[1507]: 2025-04-30 03:39:24.145 [INFO][5747] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:24.210979 containerd[1507]: 2025-04-30 03:39:24.145 [INFO][5747] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:39:24.210979 containerd[1507]: 2025-04-30 03:39:24.145 [INFO][5747] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-9-5ae3ade3a2' Apr 30 03:39:24.210979 containerd[1507]: 2025-04-30 03:39:24.151 [INFO][5747] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a4167e04b0f2dba05140745b7790c51292bebab8fa3b0056559fc7930b0f8c52" host="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:39:24.210979 containerd[1507]: 2025-04-30 03:39:24.157 [INFO][5747] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:39:24.210979 containerd[1507]: 2025-04-30 03:39:24.162 [INFO][5747] ipam/ipam.go 489: Trying affinity for 192.168.61.0/26 host="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:39:24.210979 containerd[1507]: 2025-04-30 03:39:24.165 [INFO][5747] ipam/ipam.go 155: Attempting to load block cidr=192.168.61.0/26 host="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:39:24.210979 containerd[1507]: 2025-04-30 03:39:24.169 [INFO][5747] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.0/26 host="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:39:24.210979 containerd[1507]: 2025-04-30 03:39:24.169 [INFO][5747] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.0/26 handle="k8s-pod-network.a4167e04b0f2dba05140745b7790c51292bebab8fa3b0056559fc7930b0f8c52" host="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:39:24.210979 containerd[1507]: 2025-04-30 03:39:24.170 [INFO][5747] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a4167e04b0f2dba05140745b7790c51292bebab8fa3b0056559fc7930b0f8c52 Apr 30 03:39:24.210979 containerd[1507]: 2025-04-30 03:39:24.175 [INFO][5747] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.61.0/26 handle="k8s-pod-network.a4167e04b0f2dba05140745b7790c51292bebab8fa3b0056559fc7930b0f8c52" host="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:39:24.210979 containerd[1507]: 2025-04-30 03:39:24.182 [INFO][5747] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.61.6/26] block=192.168.61.0/26 handle="k8s-pod-network.a4167e04b0f2dba05140745b7790c51292bebab8fa3b0056559fc7930b0f8c52" host="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:39:24.210979 containerd[1507]: 2025-04-30 03:39:24.183 [INFO][5747] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.6/26] handle="k8s-pod-network.a4167e04b0f2dba05140745b7790c51292bebab8fa3b0056559fc7930b0f8c52" host="ci-4081-3-3-9-5ae3ade3a2" Apr 30 03:39:24.210979 containerd[1507]: 2025-04-30 03:39:24.183 [INFO][5747] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:39:24.210979 containerd[1507]: 2025-04-30 03:39:24.183 [INFO][5747] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.6/26] IPv6=[] ContainerID="a4167e04b0f2dba05140745b7790c51292bebab8fa3b0056559fc7930b0f8c52" HandleID="k8s-pod-network.a4167e04b0f2dba05140745b7790c51292bebab8fa3b0056559fc7930b0f8c52" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--4lt2m-eth0" Apr 30 03:39:24.212090 containerd[1507]: 2025-04-30 03:39:24.185 [INFO][5729] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a4167e04b0f2dba05140745b7790c51292bebab8fa3b0056559fc7930b0f8c52" Namespace="calico-apiserver" Pod="calico-apiserver-6d7977669f-4lt2m" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--4lt2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--4lt2m-eth0", GenerateName:"calico-apiserver-6d7977669f-", Namespace:"calico-apiserver", SelfLink:"", UID:"1a0a5d29-6bdf-4990-88c7-d46de9879e7c", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d7977669f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-5ae3ade3a2", ContainerID:"", Pod:"calico-apiserver-6d7977669f-4lt2m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali777137177f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:24.212090 containerd[1507]: 2025-04-30 03:39:24.186 [INFO][5729] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.61.6/32] ContainerID="a4167e04b0f2dba05140745b7790c51292bebab8fa3b0056559fc7930b0f8c52" Namespace="calico-apiserver" Pod="calico-apiserver-6d7977669f-4lt2m" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--4lt2m-eth0" Apr 30 03:39:24.212090 containerd[1507]: 2025-04-30 03:39:24.186 [INFO][5729] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali777137177f8 ContainerID="a4167e04b0f2dba05140745b7790c51292bebab8fa3b0056559fc7930b0f8c52" Namespace="calico-apiserver" Pod="calico-apiserver-6d7977669f-4lt2m" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--4lt2m-eth0" Apr 30 03:39:24.212090 containerd[1507]: 2025-04-30 03:39:24.189 [INFO][5729] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4167e04b0f2dba05140745b7790c51292bebab8fa3b0056559fc7930b0f8c52" Namespace="calico-apiserver" Pod="calico-apiserver-6d7977669f-4lt2m" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--4lt2m-eth0" Apr 30 03:39:24.212090 containerd[1507]: 2025-04-30 03:39:24.189 [INFO][5729] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="a4167e04b0f2dba05140745b7790c51292bebab8fa3b0056559fc7930b0f8c52" Namespace="calico-apiserver" Pod="calico-apiserver-6d7977669f-4lt2m" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--4lt2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--4lt2m-eth0", GenerateName:"calico-apiserver-6d7977669f-", Namespace:"calico-apiserver", SelfLink:"", UID:"1a0a5d29-6bdf-4990-88c7-d46de9879e7c", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d7977669f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-5ae3ade3a2", ContainerID:"a4167e04b0f2dba05140745b7790c51292bebab8fa3b0056559fc7930b0f8c52", Pod:"calico-apiserver-6d7977669f-4lt2m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali777137177f8", MAC:"ae:54:b6:1b:00:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:24.212090 containerd[1507]: 2025-04-30 03:39:24.206 [INFO][5729] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a4167e04b0f2dba05140745b7790c51292bebab8fa3b0056559fc7930b0f8c52" Namespace="calico-apiserver" Pod="calico-apiserver-6d7977669f-4lt2m" WorkloadEndpoint="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--4lt2m-eth0" Apr 30 03:39:24.236943 containerd[1507]: time="2025-04-30T03:39:24.236746615Z" level=info msg="StartContainer for \"d064c80a83f77348afe5b440b8e740c985395f0639ed6d2b4abea7e0dd3c9c91\" returns successfully" Apr 30 03:39:24.246528 containerd[1507]: time="2025-04-30T03:39:24.245356137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:39:24.246528 containerd[1507]: time="2025-04-30T03:39:24.245501827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:39:24.246528 containerd[1507]: time="2025-04-30T03:39:24.245575464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:24.246528 containerd[1507]: time="2025-04-30T03:39:24.245844444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:24.271339 systemd[1]: Started cri-containerd-a4167e04b0f2dba05140745b7790c51292bebab8fa3b0056559fc7930b0f8c52.scope - libcontainer container a4167e04b0f2dba05140745b7790c51292bebab8fa3b0056559fc7930b0f8c52. 
Apr 30 03:39:24.316889 containerd[1507]: time="2025-04-30T03:39:24.316855474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d7977669f-4lt2m,Uid:1a0a5d29-6bdf-4990-88c7-d46de9879e7c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a4167e04b0f2dba05140745b7790c51292bebab8fa3b0056559fc7930b0f8c52\"" Apr 30 03:39:24.320878 containerd[1507]: time="2025-04-30T03:39:24.320840075Z" level=info msg="CreateContainer within sandbox \"a4167e04b0f2dba05140745b7790c51292bebab8fa3b0056559fc7930b0f8c52\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:39:24.332402 containerd[1507]: time="2025-04-30T03:39:24.332352809Z" level=info msg="CreateContainer within sandbox \"a4167e04b0f2dba05140745b7790c51292bebab8fa3b0056559fc7930b0f8c52\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9a8d46d02b8ee117d9a4e78ea757305a450dc679168006a158aa45a4b0cf09bd\"" Apr 30 03:39:24.333018 containerd[1507]: time="2025-04-30T03:39:24.332953786Z" level=info msg="StartContainer for \"9a8d46d02b8ee117d9a4e78ea757305a450dc679168006a158aa45a4b0cf09bd\"" Apr 30 03:39:24.361194 systemd[1]: Started cri-containerd-9a8d46d02b8ee117d9a4e78ea757305a450dc679168006a158aa45a4b0cf09bd.scope - libcontainer container 9a8d46d02b8ee117d9a4e78ea757305a450dc679168006a158aa45a4b0cf09bd. Apr 30 03:39:24.400253 containerd[1507]: time="2025-04-30T03:39:24.400217495Z" level=info msg="StartContainer for \"9a8d46d02b8ee117d9a4e78ea757305a450dc679168006a158aa45a4b0cf09bd\" returns successfully" Apr 30 03:39:24.521560 kubelet[2782]: I0430 03:39:24.521503 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d7977669f-tbr7c" podStartSLOduration=70.74486119 podStartE2EDuration="1m18.521481567s" podCreationTimestamp="2025-04-30 03:38:06 +0000 UTC" firstStartedPulling="2025-04-30 03:39:16.322287539 +0000 UTC m=+96.492819181" lastFinishedPulling="2025-04-30 03:39:24.098907915 +0000 UTC m=+104.269439558" observedRunningTime="2025-04-30 03:39:24.504538387 +0000 UTC m=+104.675070019" watchObservedRunningTime="2025-04-30 03:39:24.521481567 +0000 UTC m=+104.692013210" Apr 30 03:39:25.169320 kubelet[2782]: I0430 03:39:25.168354 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d7977669f-4lt2m" podStartSLOduration=79.168331744 podStartE2EDuration="1m19.168331744s" podCreationTimestamp="2025-04-30 03:38:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:39:24.521979323 +0000 UTC m=+104.692510965" watchObservedRunningTime="2025-04-30 03:39:25.168331744 +0000 UTC m=+105.338863385" Apr 30 03:39:25.731835 systemd-networkd[1402]: cali777137177f8: Gained IPv6LL Apr 30 03:39:29.236806 systemd[1]: run-containerd-runc-k8s.io-8cfcec5b51baa04d57e908769d6a7a2a7ba4d256f24e257b2d52b6fb0bff2a94-runc.AUhV67.mount: Deactivated successfully. Apr 30 03:39:39.971403 containerd[1507]: time="2025-04-30T03:39:39.971365532Z" level=info msg="StopPodSandbox for \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\"" Apr 30 03:39:40.066784 containerd[1507]: 2025-04-30 03:39:40.030 [WARNING][5961] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--5ae3ade3a2-k8s-calico--kube--controllers--6c8cd954d7--t67wf-eth0", GenerateName:"calico-kube-controllers-6c8cd954d7-", Namespace:"calico-system", SelfLink:"", UID:"bba0c868-51a2-4e1d-8831-4113d7d9bdcd", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c8cd954d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-5ae3ade3a2", ContainerID:"7f9399392f5089da739c2ad63039b054192934ab5e7719ae46965cc4c021250f", Pod:"calico-kube-controllers-6c8cd954d7-t67wf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali58a5c21bb7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:40.066784 containerd[1507]: 2025-04-30 03:39:40.031 [INFO][5961] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" Apr 30 03:39:40.066784 containerd[1507]: 2025-04-30 03:39:40.031 [INFO][5961] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" iface="eth0" netns="" Apr 30 03:39:40.066784 containerd[1507]: 2025-04-30 03:39:40.031 [INFO][5961] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" Apr 30 03:39:40.066784 containerd[1507]: 2025-04-30 03:39:40.031 [INFO][5961] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" Apr 30 03:39:40.066784 containerd[1507]: 2025-04-30 03:39:40.053 [INFO][5968] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" HandleID="k8s-pod-network.ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--kube--controllers--6c8cd954d7--t67wf-eth0" Apr 30 03:39:40.066784 containerd[1507]: 2025-04-30 03:39:40.053 [INFO][5968] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:40.066784 containerd[1507]: 2025-04-30 03:39:40.053 [INFO][5968] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:39:40.066784 containerd[1507]: 2025-04-30 03:39:40.060 [WARNING][5968] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" HandleID="k8s-pod-network.ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--kube--controllers--6c8cd954d7--t67wf-eth0" Apr 30 03:39:40.066784 containerd[1507]: 2025-04-30 03:39:40.060 [INFO][5968] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" HandleID="k8s-pod-network.ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--kube--controllers--6c8cd954d7--t67wf-eth0" Apr 30 03:39:40.066784 containerd[1507]: 2025-04-30 03:39:40.061 [INFO][5968] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:39:40.066784 containerd[1507]: 2025-04-30 03:39:40.064 [INFO][5961] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" Apr 30 03:39:40.067939 containerd[1507]: time="2025-04-30T03:39:40.066816794Z" level=info msg="TearDown network for sandbox \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\" successfully" Apr 30 03:39:40.067939 containerd[1507]: time="2025-04-30T03:39:40.066841280Z" level=info msg="StopPodSandbox for \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\" returns successfully" Apr 30 03:39:40.134690 containerd[1507]: time="2025-04-30T03:39:40.134632084Z" level=info msg="RemovePodSandbox for \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\"" Apr 30 03:39:40.137247 containerd[1507]: time="2025-04-30T03:39:40.137210153Z" level=info msg="Forcibly stopping sandbox \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\"" Apr 30 03:39:40.228590 containerd[1507]: 2025-04-30 03:39:40.179 [WARNING][5986] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--5ae3ade3a2-k8s-calico--kube--controllers--6c8cd954d7--t67wf-eth0", GenerateName:"calico-kube-controllers-6c8cd954d7-", Namespace:"calico-system", SelfLink:"", UID:"bba0c868-51a2-4e1d-8831-4113d7d9bdcd", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c8cd954d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-5ae3ade3a2", ContainerID:"7f9399392f5089da739c2ad63039b054192934ab5e7719ae46965cc4c021250f", Pod:"calico-kube-controllers-6c8cd954d7-t67wf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali58a5c21bb7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:40.228590 containerd[1507]: 2025-04-30 03:39:40.180 [INFO][5986] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" Apr 30 03:39:40.228590 containerd[1507]: 2025-04-30 03:39:40.180 [INFO][5986] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" iface="eth0" netns="" Apr 30 03:39:40.228590 containerd[1507]: 2025-04-30 03:39:40.180 [INFO][5986] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" Apr 30 03:39:40.228590 containerd[1507]: 2025-04-30 03:39:40.180 [INFO][5986] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" Apr 30 03:39:40.228590 containerd[1507]: 2025-04-30 03:39:40.210 [INFO][5993] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" HandleID="k8s-pod-network.ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--kube--controllers--6c8cd954d7--t67wf-eth0" Apr 30 03:39:40.228590 containerd[1507]: 2025-04-30 03:39:40.210 [INFO][5993] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:40.228590 containerd[1507]: 2025-04-30 03:39:40.210 [INFO][5993] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:39:40.228590 containerd[1507]: 2025-04-30 03:39:40.220 [WARNING][5993] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" HandleID="k8s-pod-network.ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--kube--controllers--6c8cd954d7--t67wf-eth0" Apr 30 03:39:40.228590 containerd[1507]: 2025-04-30 03:39:40.220 [INFO][5993] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" HandleID="k8s-pod-network.ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--kube--controllers--6c8cd954d7--t67wf-eth0" Apr 30 03:39:40.228590 containerd[1507]: 2025-04-30 03:39:40.222 [INFO][5993] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:39:40.228590 containerd[1507]: 2025-04-30 03:39:40.225 [INFO][5986] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3" Apr 30 03:39:40.229862 containerd[1507]: time="2025-04-30T03:39:40.229331042Z" level=info msg="TearDown network for sandbox \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\" successfully" Apr 30 03:39:40.258212 containerd[1507]: time="2025-04-30T03:39:40.258120908Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:39:40.258542 containerd[1507]: time="2025-04-30T03:39:40.258280325Z" level=info msg="RemovePodSandbox \"ecf6ecdd5e55367a82aaec87d79ca8a216774de24c0ed68399fc36b1d1ebbcb3\" returns successfully" Apr 30 03:39:40.274647 containerd[1507]: time="2025-04-30T03:39:40.274603915Z" level=info msg="StopPodSandbox for \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\"" Apr 30 03:39:40.346476 containerd[1507]: 2025-04-30 03:39:40.314 [WARNING][6013] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--qhm6w-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c9c80f55-2e27-4f48-8d62-0a63647aa84e", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 37, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-5ae3ade3a2", ContainerID:"64ff1d40b4a6e853c7ac2f6f46f69a9c041fe8237d1ecbbadbe463270c002868", Pod:"coredns-7db6d8ff4d-qhm6w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali340863f89e8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:40.346476 containerd[1507]: 2025-04-30 03:39:40.314 [INFO][6013] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" Apr 30 03:39:40.346476 containerd[1507]: 2025-04-30 03:39:40.314 [INFO][6013] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" iface="eth0" netns="" Apr 30 03:39:40.346476 containerd[1507]: 2025-04-30 03:39:40.314 [INFO][6013] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" Apr 30 03:39:40.346476 containerd[1507]: 2025-04-30 03:39:40.314 [INFO][6013] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" Apr 30 03:39:40.346476 containerd[1507]: 2025-04-30 03:39:40.334 [INFO][6021] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" HandleID="k8s-pod-network.89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--qhm6w-eth0" Apr 30 03:39:40.346476 containerd[1507]: 2025-04-30 03:39:40.334 [INFO][6021] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:40.346476 containerd[1507]: 2025-04-30 03:39:40.334 [INFO][6021] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:39:40.346476 containerd[1507]: 2025-04-30 03:39:40.340 [WARNING][6021] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" HandleID="k8s-pod-network.89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--qhm6w-eth0" Apr 30 03:39:40.346476 containerd[1507]: 2025-04-30 03:39:40.340 [INFO][6021] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" HandleID="k8s-pod-network.89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--qhm6w-eth0" Apr 30 03:39:40.346476 containerd[1507]: 2025-04-30 03:39:40.342 [INFO][6021] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:39:40.346476 containerd[1507]: 2025-04-30 03:39:40.344 [INFO][6013] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" Apr 30 03:39:40.347770 containerd[1507]: time="2025-04-30T03:39:40.346520759Z" level=info msg="TearDown network for sandbox \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\" successfully" Apr 30 03:39:40.347770 containerd[1507]: time="2025-04-30T03:39:40.346554553Z" level=info msg="StopPodSandbox for \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\" returns successfully" Apr 30 03:39:40.347770 containerd[1507]: time="2025-04-30T03:39:40.347068269Z" level=info msg="RemovePodSandbox for \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\"" Apr 30 03:39:40.347770 containerd[1507]: time="2025-04-30T03:39:40.347097073Z" level=info msg="Forcibly stopping sandbox \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\"" Apr 30 03:39:40.417688 containerd[1507]: 2025-04-30 03:39:40.386 [WARNING][6039] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--qhm6w-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c9c80f55-2e27-4f48-8d62-0a63647aa84e", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 37, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-5ae3ade3a2", ContainerID:"64ff1d40b4a6e853c7ac2f6f46f69a9c041fe8237d1ecbbadbe463270c002868", Pod:"coredns-7db6d8ff4d-qhm6w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali340863f89e8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:40.417688 containerd[1507]: 2025-04-30 03:39:40.386 [INFO][6039] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" Apr 30 03:39:40.417688 containerd[1507]: 2025-04-30 03:39:40.386 [INFO][6039] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" iface="eth0" netns="" Apr 30 03:39:40.417688 containerd[1507]: 2025-04-30 03:39:40.386 [INFO][6039] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" Apr 30 03:39:40.417688 containerd[1507]: 2025-04-30 03:39:40.386 [INFO][6039] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" Apr 30 03:39:40.417688 containerd[1507]: 2025-04-30 03:39:40.405 [INFO][6046] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" HandleID="k8s-pod-network.89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--qhm6w-eth0" Apr 30 03:39:40.417688 containerd[1507]: 2025-04-30 03:39:40.406 [INFO][6046] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:40.417688 containerd[1507]: 2025-04-30 03:39:40.406 [INFO][6046] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:39:40.417688 containerd[1507]: 2025-04-30 03:39:40.411 [WARNING][6046] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" HandleID="k8s-pod-network.89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--qhm6w-eth0" Apr 30 03:39:40.417688 containerd[1507]: 2025-04-30 03:39:40.412 [INFO][6046] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" HandleID="k8s-pod-network.89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--qhm6w-eth0" Apr 30 03:39:40.417688 containerd[1507]: 2025-04-30 03:39:40.413 [INFO][6046] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:39:40.417688 containerd[1507]: 2025-04-30 03:39:40.416 [INFO][6039] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc" Apr 30 03:39:40.418106 containerd[1507]: time="2025-04-30T03:39:40.417738082Z" level=info msg="TearDown network for sandbox \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\" successfully" Apr 30 03:39:40.422706 containerd[1507]: time="2025-04-30T03:39:40.422585667Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:39:40.422842 containerd[1507]: time="2025-04-30T03:39:40.422719065Z" level=info msg="RemovePodSandbox \"89bfeeb9abe44d2aa5e44b7eb962c6f0881574031a65b68bc5792fb9a74b2ecc\" returns successfully" Apr 30 03:39:40.423324 containerd[1507]: time="2025-04-30T03:39:40.423293695Z" level=info msg="StopPodSandbox for \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\"" Apr 30 03:39:40.501419 containerd[1507]: 2025-04-30 03:39:40.467 [WARNING][6064] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--6z8bp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9c996608-b929-44c2-8ad9-06eca9de0734", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 37, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-5ae3ade3a2", ContainerID:"abf6afeae3ba54bdda20283b3eaa937795e6ebc4fcaeaba704f884a03699f70e", Pod:"coredns-7db6d8ff4d-6z8bp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5272b817d3c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:40.501419 containerd[1507]: 2025-04-30 03:39:40.467 [INFO][6064] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" Apr 30 03:39:40.501419 containerd[1507]: 2025-04-30 03:39:40.467 [INFO][6064] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" iface="eth0" netns="" Apr 30 03:39:40.501419 containerd[1507]: 2025-04-30 03:39:40.467 [INFO][6064] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" Apr 30 03:39:40.501419 containerd[1507]: 2025-04-30 03:39:40.467 [INFO][6064] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" Apr 30 03:39:40.501419 containerd[1507]: 2025-04-30 03:39:40.488 [INFO][6072] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" HandleID="k8s-pod-network.c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--6z8bp-eth0" Apr 30 03:39:40.501419 containerd[1507]: 2025-04-30 03:39:40.488 [INFO][6072] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:40.501419 containerd[1507]: 2025-04-30 03:39:40.488 [INFO][6072] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:39:40.501419 containerd[1507]: 2025-04-30 03:39:40.495 [WARNING][6072] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" HandleID="k8s-pod-network.c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--6z8bp-eth0" Apr 30 03:39:40.501419 containerd[1507]: 2025-04-30 03:39:40.495 [INFO][6072] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" HandleID="k8s-pod-network.c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--6z8bp-eth0" Apr 30 03:39:40.501419 containerd[1507]: 2025-04-30 03:39:40.496 [INFO][6072] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:39:40.501419 containerd[1507]: 2025-04-30 03:39:40.499 [INFO][6064] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" Apr 30 03:39:40.502114 containerd[1507]: time="2025-04-30T03:39:40.501482637Z" level=info msg="TearDown network for sandbox \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\" successfully" Apr 30 03:39:40.502114 containerd[1507]: time="2025-04-30T03:39:40.501514857Z" level=info msg="StopPodSandbox for \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\" returns successfully" Apr 30 03:39:40.502224 containerd[1507]: time="2025-04-30T03:39:40.502190514Z" level=info msg="RemovePodSandbox for \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\"" Apr 30 03:39:40.502297 containerd[1507]: time="2025-04-30T03:39:40.502222444Z" level=info msg="Forcibly stopping sandbox \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\"" Apr 30 03:39:40.568750 containerd[1507]: 2025-04-30 03:39:40.538 [WARNING][6091] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--6z8bp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9c996608-b929-44c2-8ad9-06eca9de0734", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 37, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-5ae3ade3a2", ContainerID:"abf6afeae3ba54bdda20283b3eaa937795e6ebc4fcaeaba704f884a03699f70e", Pod:"coredns-7db6d8ff4d-6z8bp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5272b817d3c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:40.568750 containerd[1507]: 2025-04-30 03:39:40.538 [INFO][6091] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" Apr 30 03:39:40.568750 containerd[1507]: 2025-04-30 03:39:40.538 [INFO][6091] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" iface="eth0" netns="" Apr 30 03:39:40.568750 containerd[1507]: 2025-04-30 03:39:40.538 [INFO][6091] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" Apr 30 03:39:40.568750 containerd[1507]: 2025-04-30 03:39:40.538 [INFO][6091] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" Apr 30 03:39:40.568750 containerd[1507]: 2025-04-30 03:39:40.557 [INFO][6098] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" HandleID="k8s-pod-network.c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--6z8bp-eth0" Apr 30 03:39:40.568750 containerd[1507]: 2025-04-30 03:39:40.558 [INFO][6098] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:40.568750 containerd[1507]: 2025-04-30 03:39:40.558 [INFO][6098] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:39:40.568750 containerd[1507]: 2025-04-30 03:39:40.563 [WARNING][6098] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" HandleID="k8s-pod-network.c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--6z8bp-eth0" Apr 30 03:39:40.568750 containerd[1507]: 2025-04-30 03:39:40.563 [INFO][6098] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" HandleID="k8s-pod-network.c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-coredns--7db6d8ff4d--6z8bp-eth0" Apr 30 03:39:40.568750 containerd[1507]: 2025-04-30 03:39:40.564 [INFO][6098] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:39:40.568750 containerd[1507]: 2025-04-30 03:39:40.567 [INFO][6091] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098" Apr 30 03:39:40.570286 containerd[1507]: time="2025-04-30T03:39:40.568813124Z" level=info msg="TearDown network for sandbox \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\" successfully" Apr 30 03:39:40.578224 containerd[1507]: time="2025-04-30T03:39:40.578175182Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:39:40.578224 containerd[1507]: time="2025-04-30T03:39:40.578234062Z" level=info msg="RemovePodSandbox \"c6be4ec92e40f00153a658cec5b4512b23eaf6cfe76a4306a441601979eda098\" returns successfully" Apr 30 03:39:40.578770 containerd[1507]: time="2025-04-30T03:39:40.578725046Z" level=info msg="StopPodSandbox for \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\"" Apr 30 03:39:40.677225 containerd[1507]: 2025-04-30 03:39:40.631 [WARNING][6116] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--5ae3ade3a2-k8s-csi--node--driver--rjjf8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4d11b5f7-a801-4c03-8af8-692f5d9587bd", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-5ae3ade3a2", ContainerID:"8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d", Pod:"csi-node-driver-rjjf8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali65010f53cf2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:40.677225 containerd[1507]: 2025-04-30 03:39:40.631 [INFO][6116] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" Apr 30 03:39:40.677225 containerd[1507]: 2025-04-30 03:39:40.632 [INFO][6116] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" iface="eth0" netns="" Apr 30 03:39:40.677225 containerd[1507]: 2025-04-30 03:39:40.632 [INFO][6116] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" Apr 30 03:39:40.677225 containerd[1507]: 2025-04-30 03:39:40.632 [INFO][6116] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" Apr 30 03:39:40.677225 containerd[1507]: 2025-04-30 03:39:40.661 [INFO][6124] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" HandleID="k8s-pod-network.eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-csi--node--driver--rjjf8-eth0" Apr 30 03:39:40.677225 containerd[1507]: 2025-04-30 03:39:40.661 [INFO][6124] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:40.677225 containerd[1507]: 2025-04-30 03:39:40.661 [INFO][6124] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:39:40.677225 containerd[1507]: 2025-04-30 03:39:40.669 [WARNING][6124] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" HandleID="k8s-pod-network.eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-csi--node--driver--rjjf8-eth0" Apr 30 03:39:40.677225 containerd[1507]: 2025-04-30 03:39:40.669 [INFO][6124] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" HandleID="k8s-pod-network.eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-csi--node--driver--rjjf8-eth0" Apr 30 03:39:40.677225 containerd[1507]: 2025-04-30 03:39:40.672 [INFO][6124] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:39:40.677225 containerd[1507]: 2025-04-30 03:39:40.675 [INFO][6116] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" Apr 30 03:39:40.677225 containerd[1507]: time="2025-04-30T03:39:40.677199953Z" level=info msg="TearDown network for sandbox \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\" successfully" Apr 30 03:39:40.677225 containerd[1507]: time="2025-04-30T03:39:40.677228767Z" level=info msg="StopPodSandbox for \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\" returns successfully" Apr 30 03:39:40.679224 containerd[1507]: time="2025-04-30T03:39:40.677730531Z" level=info msg="RemovePodSandbox for \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\"" Apr 30 03:39:40.679224 containerd[1507]: time="2025-04-30T03:39:40.677751451Z" level=info msg="Forcibly stopping sandbox \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\"" Apr 30 03:39:40.745950 containerd[1507]: 2025-04-30 03:39:40.716 [WARNING][6142] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--5ae3ade3a2-k8s-csi--node--driver--rjjf8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4d11b5f7-a801-4c03-8af8-692f5d9587bd", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-5ae3ade3a2", ContainerID:"8366e1ae7c6fd5b310fd6a26354e152fcc53220a24c253a344e25eb0a2b5374d", Pod:"csi-node-driver-rjjf8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali65010f53cf2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:40.745950 containerd[1507]: 2025-04-30 03:39:40.716 [INFO][6142] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" Apr 30 03:39:40.745950 containerd[1507]: 2025-04-30 03:39:40.716 [INFO][6142] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" iface="eth0" netns="" Apr 30 03:39:40.745950 containerd[1507]: 2025-04-30 03:39:40.716 [INFO][6142] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" Apr 30 03:39:40.745950 containerd[1507]: 2025-04-30 03:39:40.716 [INFO][6142] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" Apr 30 03:39:40.745950 containerd[1507]: 2025-04-30 03:39:40.735 [INFO][6150] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" HandleID="k8s-pod-network.eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-csi--node--driver--rjjf8-eth0" Apr 30 03:39:40.745950 containerd[1507]: 2025-04-30 03:39:40.735 [INFO][6150] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:40.745950 containerd[1507]: 2025-04-30 03:39:40.735 [INFO][6150] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:39:40.745950 containerd[1507]: 2025-04-30 03:39:40.740 [WARNING][6150] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" HandleID="k8s-pod-network.eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-csi--node--driver--rjjf8-eth0" Apr 30 03:39:40.745950 containerd[1507]: 2025-04-30 03:39:40.740 [INFO][6150] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" HandleID="k8s-pod-network.eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-csi--node--driver--rjjf8-eth0" Apr 30 03:39:40.745950 containerd[1507]: 2025-04-30 03:39:40.741 [INFO][6150] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:39:40.745950 containerd[1507]: 2025-04-30 03:39:40.743 [INFO][6142] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0" Apr 30 03:39:40.747226 containerd[1507]: time="2025-04-30T03:39:40.746019803Z" level=info msg="TearDown network for sandbox \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\" successfully" Apr 30 03:39:40.765175 containerd[1507]: time="2025-04-30T03:39:40.764208724Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:39:40.765175 containerd[1507]: time="2025-04-30T03:39:40.764296067Z" level=info msg="RemovePodSandbox \"eac7b4645f34881791becfe30dede38e97b0cc5e6c10c0ad8a6bf1d9035531a0\" returns successfully" Apr 30 03:39:40.765175 containerd[1507]: time="2025-04-30T03:39:40.764764830Z" level=info msg="StopPodSandbox for \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\"" Apr 30 03:39:40.840858 containerd[1507]: 2025-04-30 03:39:40.798 [WARNING][6169] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--4lt2m-eth0", GenerateName:"calico-apiserver-6d7977669f-", Namespace:"calico-apiserver", SelfLink:"", UID:"1a0a5d29-6bdf-4990-88c7-d46de9879e7c", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d7977669f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-5ae3ade3a2", ContainerID:"a4167e04b0f2dba05140745b7790c51292bebab8fa3b0056559fc7930b0f8c52", Pod:"calico-apiserver-6d7977669f-4lt2m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali777137177f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:40.840858 containerd[1507]: 2025-04-30 03:39:40.798 [INFO][6169] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" Apr 30 03:39:40.840858 containerd[1507]: 2025-04-30 03:39:40.798 [INFO][6169] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" iface="eth0" netns="" Apr 30 03:39:40.840858 containerd[1507]: 2025-04-30 03:39:40.798 [INFO][6169] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" Apr 30 03:39:40.840858 containerd[1507]: 2025-04-30 03:39:40.798 [INFO][6169] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" Apr 30 03:39:40.840858 containerd[1507]: 2025-04-30 03:39:40.826 [INFO][6176] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" HandleID="k8s-pod-network.ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--4lt2m-eth0" Apr 30 03:39:40.840858 containerd[1507]: 2025-04-30 03:39:40.827 [INFO][6176] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:40.840858 containerd[1507]: 2025-04-30 03:39:40.827 [INFO][6176] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:39:40.840858 containerd[1507]: 2025-04-30 03:39:40.832 [WARNING][6176] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" HandleID="k8s-pod-network.ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--4lt2m-eth0" Apr 30 03:39:40.840858 containerd[1507]: 2025-04-30 03:39:40.832 [INFO][6176] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" HandleID="k8s-pod-network.ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--4lt2m-eth0" Apr 30 03:39:40.840858 containerd[1507]: 2025-04-30 03:39:40.833 [INFO][6176] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:39:40.840858 containerd[1507]: 2025-04-30 03:39:40.836 [INFO][6169] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" Apr 30 03:39:40.840858 containerd[1507]: time="2025-04-30T03:39:40.840733117Z" level=info msg="TearDown network for sandbox \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\" successfully" Apr 30 03:39:40.840858 containerd[1507]: time="2025-04-30T03:39:40.840757432Z" level=info msg="StopPodSandbox for \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\" returns successfully" Apr 30 03:39:40.841706 containerd[1507]: time="2025-04-30T03:39:40.841664822Z" level=info msg="RemovePodSandbox for \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\"" Apr 30 03:39:40.841706 containerd[1507]: time="2025-04-30T03:39:40.841692354Z" level=info msg="Forcibly stopping sandbox \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\"" Apr 30 03:39:40.912811 containerd[1507]: 2025-04-30 03:39:40.877 [WARNING][6194] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--4lt2m-eth0", GenerateName:"calico-apiserver-6d7977669f-", Namespace:"calico-apiserver", SelfLink:"", UID:"1a0a5d29-6bdf-4990-88c7-d46de9879e7c", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d7977669f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-5ae3ade3a2", ContainerID:"a4167e04b0f2dba05140745b7790c51292bebab8fa3b0056559fc7930b0f8c52", Pod:"calico-apiserver-6d7977669f-4lt2m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali777137177f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:40.912811 containerd[1507]: 2025-04-30 03:39:40.878 [INFO][6194] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" Apr 30 03:39:40.912811 containerd[1507]: 2025-04-30 03:39:40.878 [INFO][6194] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" iface="eth0" netns="" Apr 30 03:39:40.912811 containerd[1507]: 2025-04-30 03:39:40.878 [INFO][6194] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" Apr 30 03:39:40.912811 containerd[1507]: 2025-04-30 03:39:40.878 [INFO][6194] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" Apr 30 03:39:40.912811 containerd[1507]: 2025-04-30 03:39:40.897 [INFO][6201] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" HandleID="k8s-pod-network.ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--4lt2m-eth0" Apr 30 03:39:40.912811 containerd[1507]: 2025-04-30 03:39:40.898 [INFO][6201] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:40.912811 containerd[1507]: 2025-04-30 03:39:40.898 [INFO][6201] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:39:40.912811 containerd[1507]: 2025-04-30 03:39:40.907 [WARNING][6201] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" HandleID="k8s-pod-network.ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--4lt2m-eth0" Apr 30 03:39:40.912811 containerd[1507]: 2025-04-30 03:39:40.907 [INFO][6201] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" HandleID="k8s-pod-network.ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--4lt2m-eth0" Apr 30 03:39:40.912811 containerd[1507]: 2025-04-30 03:39:40.909 [INFO][6201] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:39:40.912811 containerd[1507]: 2025-04-30 03:39:40.911 [INFO][6194] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49" Apr 30 03:39:40.914015 containerd[1507]: time="2025-04-30T03:39:40.912845106Z" level=info msg="TearDown network for sandbox \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\" successfully" Apr 30 03:39:40.918221 containerd[1507]: time="2025-04-30T03:39:40.918183454Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:39:40.918309 containerd[1507]: time="2025-04-30T03:39:40.918239839Z" level=info msg="RemovePodSandbox \"ab69e0fbffeea8ea775d7f3ec8e8390c137918586a645798fe624bc72b27bf49\" returns successfully" Apr 30 03:39:40.918817 containerd[1507]: time="2025-04-30T03:39:40.918691629Z" level=info msg="StopPodSandbox for \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\"" Apr 30 03:39:41.025390 systemd[1]: Started sshd@7-135.181.100.111:22-8.210.238.247:33784.service - OpenSSH per-connection server daemon (8.210.238.247:33784). Apr 30 03:39:41.028283 containerd[1507]: 2025-04-30 03:39:40.961 [WARNING][6219] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--tbr7c-eth0", GenerateName:"calico-apiserver-6d7977669f-", Namespace:"calico-apiserver", SelfLink:"", UID:"19f198dd-9e67-40cc-982d-d60b573a979a", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d7977669f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-5ae3ade3a2", ContainerID:"b7d20cc5b86e34f0a70edb1531caa941ce591ef028d9ba24ddd5fbb0900c26b8", Pod:"calico-apiserver-6d7977669f-tbr7c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12594af0742", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:41.028283 containerd[1507]: 2025-04-30 03:39:40.961 [INFO][6219] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" Apr 30 03:39:41.028283 containerd[1507]: 2025-04-30 03:39:40.961 [INFO][6219] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" iface="eth0" netns="" Apr 30 03:39:41.028283 containerd[1507]: 2025-04-30 03:39:40.961 [INFO][6219] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" Apr 30 03:39:41.028283 containerd[1507]: 2025-04-30 03:39:40.961 [INFO][6219] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" Apr 30 03:39:41.028283 containerd[1507]: 2025-04-30 03:39:40.996 [INFO][6226] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" HandleID="k8s-pod-network.604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--tbr7c-eth0" Apr 30 03:39:41.028283 containerd[1507]: 2025-04-30 03:39:40.996 [INFO][6226] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:41.028283 containerd[1507]: 2025-04-30 03:39:40.996 [INFO][6226] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:39:41.028283 containerd[1507]: 2025-04-30 03:39:41.008 [WARNING][6226] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" HandleID="k8s-pod-network.604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--tbr7c-eth0" Apr 30 03:39:41.028283 containerd[1507]: 2025-04-30 03:39:41.008 [INFO][6226] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" HandleID="k8s-pod-network.604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--tbr7c-eth0" Apr 30 03:39:41.028283 containerd[1507]: 2025-04-30 03:39:41.011 [INFO][6226] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:39:41.028283 containerd[1507]: 2025-04-30 03:39:41.021 [INFO][6219] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" Apr 30 03:39:41.028838 containerd[1507]: time="2025-04-30T03:39:41.028321069Z" level=info msg="TearDown network for sandbox \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\" successfully" Apr 30 03:39:41.028838 containerd[1507]: time="2025-04-30T03:39:41.028346126Z" level=info msg="StopPodSandbox for \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\" returns successfully" Apr 30 03:39:41.029435 containerd[1507]: time="2025-04-30T03:39:41.029408525Z" level=info msg="RemovePodSandbox for \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\"" Apr 30 03:39:41.029476 containerd[1507]: time="2025-04-30T03:39:41.029435263Z" level=info msg="Forcibly stopping sandbox \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\"" Apr 30 03:39:41.133809 containerd[1507]: 2025-04-30 03:39:41.097 [WARNING][6244] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--tbr7c-eth0", GenerateName:"calico-apiserver-6d7977669f-", Namespace:"calico-apiserver", SelfLink:"", UID:"19f198dd-9e67-40cc-982d-d60b573a979a", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d7977669f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-5ae3ade3a2", ContainerID:"b7d20cc5b86e34f0a70edb1531caa941ce591ef028d9ba24ddd5fbb0900c26b8", Pod:"calico-apiserver-6d7977669f-tbr7c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12594af0742", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:41.133809 containerd[1507]: 2025-04-30 03:39:41.097 [INFO][6244] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" Apr 30 03:39:41.133809 containerd[1507]: 2025-04-30 03:39:41.097 [INFO][6244] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" iface="eth0" netns="" Apr 30 03:39:41.133809 containerd[1507]: 2025-04-30 03:39:41.097 [INFO][6244] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" Apr 30 03:39:41.133809 containerd[1507]: 2025-04-30 03:39:41.097 [INFO][6244] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" Apr 30 03:39:41.133809 containerd[1507]: 2025-04-30 03:39:41.124 [INFO][6252] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" HandleID="k8s-pod-network.604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--tbr7c-eth0" Apr 30 03:39:41.133809 containerd[1507]: 2025-04-30 03:39:41.124 [INFO][6252] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:41.133809 containerd[1507]: 2025-04-30 03:39:41.124 [INFO][6252] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:39:41.133809 containerd[1507]: 2025-04-30 03:39:41.128 [WARNING][6252] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" HandleID="k8s-pod-network.604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--tbr7c-eth0" Apr 30 03:39:41.133809 containerd[1507]: 2025-04-30 03:39:41.128 [INFO][6252] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" HandleID="k8s-pod-network.604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" Workload="ci--4081--3--3--9--5ae3ade3a2-k8s-calico--apiserver--6d7977669f--tbr7c-eth0" Apr 30 03:39:41.133809 containerd[1507]: 2025-04-30 03:39:41.129 [INFO][6252] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:39:41.133809 containerd[1507]: 2025-04-30 03:39:41.131 [INFO][6244] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea" Apr 30 03:39:41.134334 containerd[1507]: time="2025-04-30T03:39:41.133850083Z" level=info msg="TearDown network for sandbox \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\" successfully" Apr 30 03:39:41.146380 containerd[1507]: time="2025-04-30T03:39:41.146307478Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:39:41.146522 containerd[1507]: time="2025-04-30T03:39:41.146417973Z" level=info msg="RemovePodSandbox \"604f6011a7522b5525a864245c4e4b5fe31e2ef75b5bc5149de0930832dde0ea\" returns successfully" Apr 30 03:39:41.147214 containerd[1507]: time="2025-04-30T03:39:41.147035373Z" level=info msg="StopPodSandbox for \"8f99c4e1e4c341b83a730c62d30a78879284d8399bee8d2e7c82f5f17fae2bf4\"" Apr 30 03:39:41.147214 containerd[1507]: time="2025-04-30T03:39:41.147128636Z" level=info msg="TearDown network for sandbox \"8f99c4e1e4c341b83a730c62d30a78879284d8399bee8d2e7c82f5f17fae2bf4\" successfully" Apr 30 03:39:41.147214 containerd[1507]: time="2025-04-30T03:39:41.147141490Z" level=info msg="StopPodSandbox for \"8f99c4e1e4c341b83a730c62d30a78879284d8399bee8d2e7c82f5f17fae2bf4\" returns successfully" Apr 30 03:39:41.147544 containerd[1507]: time="2025-04-30T03:39:41.147447630Z" level=info msg="RemovePodSandbox for \"8f99c4e1e4c341b83a730c62d30a78879284d8399bee8d2e7c82f5f17fae2bf4\"" Apr 30 03:39:41.147544 containerd[1507]: time="2025-04-30T03:39:41.147463099Z" level=info msg="Forcibly stopping sandbox \"8f99c4e1e4c341b83a730c62d30a78879284d8399bee8d2e7c82f5f17fae2bf4\"" Apr 30 03:39:41.147544 containerd[1507]: time="2025-04-30T03:39:41.147495740Z" level=info msg="TearDown network for sandbox \"8f99c4e1e4c341b83a730c62d30a78879284d8399bee8d2e7c82f5f17fae2bf4\" successfully" Apr 30 03:39:41.155003 containerd[1507]: time="2025-04-30T03:39:41.153860241Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8f99c4e1e4c341b83a730c62d30a78879284d8399bee8d2e7c82f5f17fae2bf4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:39:41.155003 containerd[1507]: time="2025-04-30T03:39:41.153909332Z" level=info msg="RemovePodSandbox \"8f99c4e1e4c341b83a730c62d30a78879284d8399bee8d2e7c82f5f17fae2bf4\" returns successfully" Apr 30 03:39:42.126510 sshd[6232]: Invalid user from 8.210.238.247 port 33784 Apr 30 03:39:48.982296 sshd[6232]: Connection closed by invalid user 8.210.238.247 port 33784 [preauth] Apr 30 03:39:48.984926 systemd[1]: sshd@7-135.181.100.111:22-8.210.238.247:33784.service: Deactivated successfully. Apr 30 03:41:37.160244 systemd[1]: run-containerd-runc-k8s.io-5ab1cdda810fead7ed2fd46b02c26f542fe89868f9bf0b188ab09fa0114fc6d0-runc.nrRGvd.mount: Deactivated successfully. Apr 30 03:41:49.298347 systemd[1]: run-containerd-runc-k8s.io-8cfcec5b51baa04d57e908769d6a7a2a7ba4d256f24e257b2d52b6fb0bff2a94-runc.NNoT4F.mount: Deactivated successfully. Apr 30 03:42:20.792718 systemd[1]: Started sshd@8-135.181.100.111:22-139.178.68.195:40614.service - OpenSSH per-connection server daemon (139.178.68.195:40614). Apr 30 03:42:21.824673 sshd[6600]: Accepted publickey for core from 139.178.68.195 port 40614 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:42:21.833280 sshd[6600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:42:21.846492 systemd-logind[1483]: New session 8 of user core. Apr 30 03:42:21.850866 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 03:42:23.301506 sshd[6600]: pam_unix(sshd:session): session closed for user core Apr 30 03:42:23.318968 systemd[1]: sshd@8-135.181.100.111:22-139.178.68.195:40614.service: Deactivated successfully. Apr 30 03:42:23.323195 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 03:42:23.325513 systemd-logind[1483]: Session 8 logged out. Waiting for processes to exit. Apr 30 03:42:23.328428 systemd-logind[1483]: Removed session 8. Apr 30 03:42:28.474121 systemd[1]: Started sshd@9-135.181.100.111:22-139.178.68.195:59542.service - OpenSSH per-connection server daemon (139.178.68.195:59542). Apr 30 03:42:29.187245 systemd[1]: run-containerd-runc-k8s.io-8cfcec5b51baa04d57e908769d6a7a2a7ba4d256f24e257b2d52b6fb0bff2a94-runc.RMnSF0.mount: Deactivated successfully. Apr 30 03:42:29.473402 sshd[6630]: Accepted publickey for core from 139.178.68.195 port 59542 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:42:29.475183 sshd[6630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:42:29.482005 systemd-logind[1483]: New session 9 of user core. Apr 30 03:42:29.489866 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 03:42:30.264265 sshd[6630]: pam_unix(sshd:session): session closed for user core Apr 30 03:42:30.270703 systemd[1]: sshd@9-135.181.100.111:22-139.178.68.195:59542.service: Deactivated successfully. Apr 30 03:42:30.273320 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 03:42:30.276103 systemd-logind[1483]: Session 9 logged out. Waiting for processes to exit. Apr 30 03:42:30.277986 systemd-logind[1483]: Removed session 9. Apr 30 03:42:35.438211 systemd[1]: Started sshd@10-135.181.100.111:22-139.178.68.195:53844.service - OpenSSH per-connection server daemon (139.178.68.195:53844). 
Apr 30 03:42:36.427939 sshd[6663]: Accepted publickey for core from 139.178.68.195 port 53844 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:42:36.430635 sshd[6663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:42:36.439004 systemd-logind[1483]: New session 10 of user core. Apr 30 03:42:36.444862 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 03:42:37.154517 systemd[1]: run-containerd-runc-k8s.io-5ab1cdda810fead7ed2fd46b02c26f542fe89868f9bf0b188ab09fa0114fc6d0-runc.G03q7K.mount: Deactivated successfully. Apr 30 03:42:37.210569 sshd[6663]: pam_unix(sshd:session): session closed for user core Apr 30 03:42:37.214184 systemd-logind[1483]: Session 10 logged out. Waiting for processes to exit. Apr 30 03:42:37.214693 systemd[1]: sshd@10-135.181.100.111:22-139.178.68.195:53844.service: Deactivated successfully. Apr 30 03:42:37.216985 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 03:42:37.218687 systemd-logind[1483]: Removed session 10. Apr 30 03:42:37.375250 systemd[1]: Started sshd@11-135.181.100.111:22-139.178.68.195:53848.service - OpenSSH per-connection server daemon (139.178.68.195:53848). Apr 30 03:42:38.380386 sshd[6700]: Accepted publickey for core from 139.178.68.195 port 53848 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:42:38.382188 sshd[6700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:42:38.388112 systemd-logind[1483]: New session 11 of user core. Apr 30 03:42:38.391825 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 03:42:39.242477 sshd[6700]: pam_unix(sshd:session): session closed for user core Apr 30 03:42:39.248310 systemd[1]: sshd@11-135.181.100.111:22-139.178.68.195:53848.service: Deactivated successfully. Apr 30 03:42:39.251288 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 03:42:39.252644 systemd-logind[1483]: Session 11 logged out. Waiting for processes to exit. Apr 30 03:42:39.254462 systemd-logind[1483]: Removed session 11. Apr 30 03:42:39.418278 systemd[1]: Started sshd@12-135.181.100.111:22-139.178.68.195:53858.service - OpenSSH per-connection server daemon (139.178.68.195:53858). Apr 30 03:42:40.403625 sshd[6711]: Accepted publickey for core from 139.178.68.195 port 53858 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:42:40.404344 sshd[6711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:42:40.411443 systemd-logind[1483]: New session 12 of user core. Apr 30 03:42:40.414073 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 03:42:41.159745 sshd[6711]: pam_unix(sshd:session): session closed for user core Apr 30 03:42:41.171182 systemd[1]: sshd@12-135.181.100.111:22-139.178.68.195:53858.service: Deactivated successfully. Apr 30 03:42:41.175360 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 03:42:41.176755 systemd-logind[1483]: Session 12 logged out. Waiting for processes to exit. Apr 30 03:42:41.178352 systemd-logind[1483]: Removed session 12. Apr 30 03:42:46.336202 systemd[1]: Started sshd@13-135.181.100.111:22-139.178.68.195:34554.service - OpenSSH per-connection server daemon (139.178.68.195:34554). 
Apr 30 03:42:47.337675 sshd[6733]: Accepted publickey for core from 139.178.68.195 port 34554 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:42:47.340445 sshd[6733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:42:47.351437 systemd-logind[1483]: New session 13 of user core. Apr 30 03:42:47.356077 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 03:42:47.401585 update_engine[1484]: I20250430 03:42:47.401419 1484 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 30 03:42:47.401585 update_engine[1484]: I20250430 03:42:47.401501 1484 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 30 03:42:47.404606 update_engine[1484]: I20250430 03:42:47.404557 1484 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 30 03:42:47.405440 update_engine[1484]: I20250430 03:42:47.405395 1484 omaha_request_params.cc:62] Current group set to lts Apr 30 03:42:47.405598 update_engine[1484]: I20250430 03:42:47.405568 1484 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 30 03:42:47.406497 update_engine[1484]: I20250430 03:42:47.405709 1484 update_attempter.cc:643] Scheduling an action processor start. Apr 30 03:42:47.406497 update_engine[1484]: I20250430 03:42:47.405812 1484 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 30 03:42:47.406497 update_engine[1484]: I20250430 03:42:47.405872 1484 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 30 03:42:47.406497 update_engine[1484]: I20250430 03:42:47.405963 1484 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 30 03:42:47.406497 update_engine[1484]: I20250430 03:42:47.405977 1484 omaha_request_action.cc:272] Request: Apr 30 03:42:47.406497 update_engine[1484]: Apr 30 03:42:47.406497 update_engine[1484]: Apr 30 03:42:47.406497 update_engine[1484]: Apr 30 03:42:47.406497 update_engine[1484]: Apr 30 03:42:47.406497 update_engine[1484]: Apr 30 03:42:47.406497 update_engine[1484]: Apr 30 03:42:47.406497 update_engine[1484]: Apr 30 03:42:47.406497 update_engine[1484]: Apr 30 03:42:47.406497 update_engine[1484]: I20250430 03:42:47.405987 1484 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 03:42:47.426996 locksmithd[1509]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 30 03:42:47.428865 update_engine[1484]: I20250430 03:42:47.428804 1484 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 03:42:47.429340 update_engine[1484]: I20250430 03:42:47.429294 1484 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 03:42:47.432605 update_engine[1484]: E20250430 03:42:47.432575 1484 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 03:42:47.432951 update_engine[1484]: I20250430 03:42:47.432740 1484 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 30 03:42:48.156337 sshd[6733]: pam_unix(sshd:session): session closed for user core Apr 30 03:42:48.169893 systemd[1]: sshd@13-135.181.100.111:22-139.178.68.195:34554.service: Deactivated successfully. Apr 30 03:42:48.175844 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 03:42:48.177973 systemd-logind[1483]: Session 13 logged out. Waiting for processes to exit. Apr 30 03:42:48.179513 systemd-logind[1483]: Removed session 13. 
Apr 30 03:42:48.329840 systemd[1]: Started sshd@14-135.181.100.111:22-139.178.68.195:34560.service - OpenSSH per-connection server daemon (139.178.68.195:34560). Apr 30 03:42:49.340796 sshd[6748]: Accepted publickey for core from 139.178.68.195 port 34560 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:42:49.344447 sshd[6748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:42:49.366665 systemd-logind[1483]: New session 14 of user core. Apr 30 03:42:49.371845 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 03:42:50.414721 sshd[6748]: pam_unix(sshd:session): session closed for user core Apr 30 03:42:50.421339 systemd[1]: sshd@14-135.181.100.111:22-139.178.68.195:34560.service: Deactivated successfully. Apr 30 03:42:50.425054 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 03:42:50.428259 systemd-logind[1483]: Session 14 logged out. Waiting for processes to exit. Apr 30 03:42:50.431090 systemd-logind[1483]: Removed session 14. Apr 30 03:42:50.580756 systemd[1]: Started sshd@15-135.181.100.111:22-139.178.68.195:34564.service - OpenSSH per-connection server daemon (139.178.68.195:34564). Apr 30 03:42:51.574294 sshd[6787]: Accepted publickey for core from 139.178.68.195 port 34564 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:42:51.577920 sshd[6787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:42:51.586376 systemd-logind[1483]: New session 15 of user core. Apr 30 03:42:51.591835 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 03:42:54.353048 sshd[6787]: pam_unix(sshd:session): session closed for user core Apr 30 03:42:54.370719 systemd[1]: sshd@15-135.181.100.111:22-139.178.68.195:34564.service: Deactivated successfully. Apr 30 03:42:54.374083 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 03:42:54.375754 systemd-logind[1483]: Session 15 logged out. Waiting for processes to exit. Apr 30 03:42:54.377710 systemd-logind[1483]: Removed session 15. Apr 30 03:42:54.518420 systemd[1]: Started sshd@16-135.181.100.111:22-139.178.68.195:34572.service - OpenSSH per-connection server daemon (139.178.68.195:34572). Apr 30 03:42:55.549974 sshd[6805]: Accepted publickey for core from 139.178.68.195 port 34572 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:42:55.552456 sshd[6805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:42:55.561163 systemd-logind[1483]: New session 16 of user core. Apr 30 03:42:55.566088 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 03:42:56.815159 sshd[6805]: pam_unix(sshd:session): session closed for user core Apr 30 03:42:56.821100 systemd[1]: sshd@16-135.181.100.111:22-139.178.68.195:34572.service: Deactivated successfully. Apr 30 03:42:56.825133 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 03:42:56.826389 systemd-logind[1483]: Session 16 logged out. Waiting for processes to exit. Apr 30 03:42:56.828579 systemd-logind[1483]: Removed session 16. Apr 30 03:42:56.990365 systemd[1]: Started sshd@17-135.181.100.111:22-139.178.68.195:36548.service - OpenSSH per-connection server daemon (139.178.68.195:36548). 
Apr 30 03:42:57.334412 update_engine[1484]: I20250430 03:42:57.334303 1484 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 03:42:57.335733 update_engine[1484]: I20250430 03:42:57.335279 1484 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 03:42:57.338787 update_engine[1484]: I20250430 03:42:57.338490 1484 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 03:42:57.339094 update_engine[1484]: E20250430 03:42:57.338965 1484 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 03:42:57.339094 update_engine[1484]: I20250430 03:42:57.339059 1484 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 30 03:42:57.977274 sshd[6818]: Accepted publickey for core from 139.178.68.195 port 36548 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:42:57.979721 sshd[6818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:42:57.987236 systemd-logind[1483]: New session 17 of user core. Apr 30 03:42:57.996928 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 03:42:58.766448 sshd[6818]: pam_unix(sshd:session): session closed for user core Apr 30 03:42:58.772542 systemd-logind[1483]: Session 17 logged out. Waiting for processes to exit. Apr 30 03:42:58.773135 systemd[1]: sshd@17-135.181.100.111:22-139.178.68.195:36548.service: Deactivated successfully. Apr 30 03:42:58.776851 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 03:42:58.782415 systemd-logind[1483]: Removed session 17. Apr 30 03:43:03.944141 systemd[1]: Started sshd@18-135.181.100.111:22-139.178.68.195:36560.service - OpenSSH per-connection server daemon (139.178.68.195:36560). Apr 30 03:43:04.953548 sshd[6835]: Accepted publickey for core from 139.178.68.195 port 36560 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:43:04.957684 sshd[6835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:43:04.968808 systemd-logind[1483]: New session 18 of user core. Apr 30 03:43:04.972974 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 03:43:05.781435 sshd[6835]: pam_unix(sshd:session): session closed for user core Apr 30 03:43:05.787322 systemd[1]: sshd@18-135.181.100.111:22-139.178.68.195:36560.service: Deactivated successfully. Apr 30 03:43:05.791070 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 03:43:05.792459 systemd-logind[1483]: Session 18 logged out. Waiting for processes to exit. Apr 30 03:43:05.794291 systemd-logind[1483]: Removed session 18. Apr 30 03:43:07.335093 update_engine[1484]: I20250430 03:43:07.334993 1484 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 03:43:07.335519 update_engine[1484]: I20250430 03:43:07.335355 1484 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 03:43:07.335743 update_engine[1484]: I20250430 03:43:07.335704 1484 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 03:43:07.336586 update_engine[1484]: E20250430 03:43:07.336529 1484 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 03:43:07.336697 update_engine[1484]: I20250430 03:43:07.336660 1484 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 30 03:43:10.955044 systemd[1]: Started sshd@19-135.181.100.111:22-139.178.68.195:44582.service - OpenSSH per-connection server daemon (139.178.68.195:44582). 
Apr 30 03:43:11.680027 systemd[1]: Started sshd@20-135.181.100.111:22-167.94.145.99:50848.service - OpenSSH per-connection server daemon (167.94.145.99:50848). Apr 30 03:43:11.972456 sshd[6869]: Accepted publickey for core from 139.178.68.195 port 44582 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:43:11.975097 sshd[6869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:43:11.982941 systemd-logind[1483]: New session 19 of user core. Apr 30 03:43:11.989857 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 03:43:12.806328 sshd[6869]: pam_unix(sshd:session): session closed for user core Apr 30 03:43:12.811469 systemd[1]: sshd@19-135.181.100.111:22-139.178.68.195:44582.service: Deactivated successfully. Apr 30 03:43:12.814374 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 03:43:12.816046 systemd-logind[1483]: Session 19 logged out. Waiting for processes to exit. Apr 30 03:43:12.817791 systemd-logind[1483]: Removed session 19. Apr 30 03:43:17.337861 update_engine[1484]: I20250430 03:43:17.337745 1484 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 03:43:17.338760 update_engine[1484]: I20250430 03:43:17.338713 1484 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 03:43:17.339098 update_engine[1484]: I20250430 03:43:17.339051 1484 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 03:43:17.339898 update_engine[1484]: E20250430 03:43:17.339847 1484 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 03:43:17.339986 update_engine[1484]: I20250430 03:43:17.339924 1484 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 30 03:43:17.343906 update_engine[1484]: I20250430 03:43:17.343842 1484 omaha_request_action.cc:617] Omaha request response: Apr 30 03:43:17.343998 update_engine[1484]: E20250430 03:43:17.343979 1484 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 30 03:43:17.344044 update_engine[1484]: I20250430 03:43:17.344017 1484 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 30 03:43:17.344044 update_engine[1484]: I20250430 03:43:17.344025 1484 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 03:43:17.344044 update_engine[1484]: I20250430 03:43:17.344034 1484 update_attempter.cc:306] Processing Done. Apr 30 03:43:17.344315 update_engine[1484]: E20250430 03:43:17.344059 1484 update_attempter.cc:619] Update failed. Apr 30 03:43:17.344315 update_engine[1484]: I20250430 03:43:17.344069 1484 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 30 03:43:17.344315 update_engine[1484]: I20250430 03:43:17.344079 1484 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 30 03:43:17.344315 update_engine[1484]: I20250430 03:43:17.344089 1484 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Apr 30 03:43:17.344315 update_engine[1484]: I20250430 03:43:17.344244 1484 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 30 03:43:17.344315 update_engine[1484]: I20250430 03:43:17.344290 1484 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 30 03:43:17.344315 update_engine[1484]: I20250430 03:43:17.344305 1484 omaha_request_action.cc:272] Request: Apr 30 03:43:17.344315 update_engine[1484]: Apr 30 03:43:17.344315 update_engine[1484]: Apr 30 03:43:17.344315 update_engine[1484]: Apr 30 03:43:17.344315 update_engine[1484]: Apr 30 03:43:17.344315 update_engine[1484]: Apr 30 03:43:17.344315 update_engine[1484]: Apr 30 03:43:17.345088 update_engine[1484]: I20250430 03:43:17.344317 1484 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 03:43:17.345088 update_engine[1484]: I20250430 03:43:17.344524 1484 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 03:43:17.345088 update_engine[1484]: I20250430 03:43:17.344795 1484 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 03:43:17.345563 locksmithd[1509]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 30 03:43:17.346113 update_engine[1484]: E20250430 03:43:17.345602 1484 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 03:43:17.346113 update_engine[1484]: I20250430 03:43:17.345692 1484 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 30 03:43:17.346113 update_engine[1484]: I20250430 03:43:17.345702 1484 omaha_request_action.cc:617] Omaha request response: Apr 30 03:43:17.346113 update_engine[1484]: I20250430 03:43:17.345713 1484 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 03:43:17.346113 update_engine[1484]: I20250430 03:43:17.345721 1484 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 03:43:17.346113 update_engine[1484]: I20250430 03:43:17.345729 1484 update_attempter.cc:306] Processing Done. Apr 30 03:43:17.346113 update_engine[1484]: I20250430 03:43:17.345783 1484 update_attempter.cc:310] Error event sent. Apr 30 03:43:17.346113 update_engine[1484]: I20250430 03:43:17.345804 1484 update_check_scheduler.cc:74] Next update check in 46m28s Apr 30 03:43:17.346631 locksmithd[1509]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 30 03:43:26.855315 sshd[6872]: Connection closed by 167.94.145.99 port 50848 [preauth] Apr 30 03:43:26.856889 systemd[1]: sshd@20-135.181.100.111:22-167.94.145.99:50848.service: Deactivated successfully. Apr 30 03:43:28.753362 systemd[1]: cri-containerd-bc39dc4cb85182d9764018df43f6882ca98de1ccb4c58f271c742bc057b48431.scope: Deactivated successfully. Apr 30 03:43:28.753662 systemd[1]: cri-containerd-bc39dc4cb85182d9764018df43f6882ca98de1ccb4c58f271c742bc057b48431.scope: Consumed 1.801s CPU time, 18.3M memory peak, 0B memory swap peak. Apr 30 03:43:28.800533 kubelet[2782]: E0430 03:43:28.800268 2782 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:53176->10.0.0.2:2379: read: connection timed out" Apr 30 03:43:28.880635 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc39dc4cb85182d9764018df43f6882ca98de1ccb4c58f271c742bc057b48431-rootfs.mount: Deactivated successfully. 
Apr 30 03:43:28.905123 systemd[1]: cri-containerd-4605fd658956c719840a86f282b3e5b8e1cf2cef06b8167b9c3c82ed63c697b4.scope: Deactivated successfully. Apr 30 03:43:28.905393 systemd[1]: cri-containerd-4605fd658956c719840a86f282b3e5b8e1cf2cef06b8167b9c3c82ed63c697b4.scope: Consumed 5.685s CPU time. Apr 30 03:43:28.929233 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4605fd658956c719840a86f282b3e5b8e1cf2cef06b8167b9c3c82ed63c697b4-rootfs.mount: Deactivated successfully. Apr 30 03:43:28.946644 containerd[1507]: time="2025-04-30T03:43:28.928344324Z" level=info msg="shim disconnected" id=4605fd658956c719840a86f282b3e5b8e1cf2cef06b8167b9c3c82ed63c697b4 namespace=k8s.io Apr 30 03:43:28.947150 containerd[1507]: time="2025-04-30T03:43:28.915002590Z" level=info msg="shim disconnected" id=bc39dc4cb85182d9764018df43f6882ca98de1ccb4c58f271c742bc057b48431 namespace=k8s.io Apr 30 03:43:28.952948 containerd[1507]: time="2025-04-30T03:43:28.952796001Z" level=warning msg="cleaning up after shim disconnected" id=bc39dc4cb85182d9764018df43f6882ca98de1ccb4c58f271c742bc057b48431 namespace=k8s.io Apr 30 03:43:28.952948 containerd[1507]: time="2025-04-30T03:43:28.952818422Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:43:28.953742 containerd[1507]: time="2025-04-30T03:43:28.953702318Z" level=warning msg="cleaning up after shim disconnected" id=4605fd658956c719840a86f282b3e5b8e1cf2cef06b8167b9c3c82ed63c697b4 namespace=k8s.io Apr 30 03:43:28.953742 containerd[1507]: time="2025-04-30T03:43:28.953725831Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:43:29.262920 kubelet[2782]: I0430 03:43:29.262860 2782 scope.go:117] "RemoveContainer" containerID="bc39dc4cb85182d9764018df43f6882ca98de1ccb4c58f271c742bc057b48431" Apr 30 03:43:29.263473 kubelet[2782]: I0430 03:43:29.263431 2782 scope.go:117] "RemoveContainer" containerID="4605fd658956c719840a86f282b3e5b8e1cf2cef06b8167b9c3c82ed63c697b4" Apr 30 03:43:29.280671 systemd[1]: cri-containerd-e9565a607a67ba3a3d91f1b912fde52c6716df11c3675b58e94920f7f3aeb65e.scope: Deactivated successfully. Apr 30 03:43:29.283366 systemd[1]: cri-containerd-e9565a607a67ba3a3d91f1b912fde52c6716df11c3675b58e94920f7f3aeb65e.scope: Consumed 6.395s CPU time, 24.7M memory peak, 0B memory swap peak. Apr 30 03:43:29.312595 containerd[1507]: time="2025-04-30T03:43:29.312549201Z" level=info msg="CreateContainer within sandbox \"a0b9c1a32aaf0ddabec60cac7e90c34361577771f36b00a22388eeea11c157fe\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 30 03:43:29.312979 containerd[1507]: time="2025-04-30T03:43:29.312679092Z" level=info msg="CreateContainer within sandbox \"096f471966e2374861ae0ce4d90ab4fdf014ab4960a09db38ead98dd780ec26e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Apr 30 03:43:29.317013 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9565a607a67ba3a3d91f1b912fde52c6716df11c3675b58e94920f7f3aeb65e-rootfs.mount: Deactivated successfully. 
Apr 30 03:43:29.343982 containerd[1507]: time="2025-04-30T03:43:29.342286325Z" level=info msg="shim disconnected" id=e9565a607a67ba3a3d91f1b912fde52c6716df11c3675b58e94920f7f3aeb65e namespace=k8s.io Apr 30 03:43:29.343982 containerd[1507]: time="2025-04-30T03:43:29.342338012Z" level=warning msg="cleaning up after shim disconnected" id=e9565a607a67ba3a3d91f1b912fde52c6716df11c3675b58e94920f7f3aeb65e namespace=k8s.io Apr 30 03:43:29.343982 containerd[1507]: time="2025-04-30T03:43:29.342346718Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:43:29.471906 containerd[1507]: time="2025-04-30T03:43:29.471839104Z" level=info msg="CreateContainer within sandbox \"a0b9c1a32aaf0ddabec60cac7e90c34361577771f36b00a22388eeea11c157fe\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"2d2c7fa7f86a5ab00fe747b113056af53a0e820cab16ab2a7796e5891769aaa8\"" Apr 30 03:43:29.472541 containerd[1507]: time="2025-04-30T03:43:29.472413247Z" level=info msg="StartContainer for \"2d2c7fa7f86a5ab00fe747b113056af53a0e820cab16ab2a7796e5891769aaa8\"" Apr 30 03:43:29.486006 containerd[1507]: time="2025-04-30T03:43:29.485863002Z" level=info msg="CreateContainer within sandbox \"096f471966e2374861ae0ce4d90ab4fdf014ab4960a09db38ead98dd780ec26e\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"97d6639508972069893e4c046e539e9e9bfe72175457701712511a11e2f0d4ad\"" Apr 30 03:43:29.487521 containerd[1507]: time="2025-04-30T03:43:29.486769119Z" level=info msg="StartContainer for \"97d6639508972069893e4c046e539e9e9bfe72175457701712511a11e2f0d4ad\"" Apr 30 03:43:29.528393 systemd[1]: Started cri-containerd-97d6639508972069893e4c046e539e9e9bfe72175457701712511a11e2f0d4ad.scope - libcontainer container 97d6639508972069893e4c046e539e9e9bfe72175457701712511a11e2f0d4ad. Apr 30 03:43:29.550852 systemd[1]: Started cri-containerd-2d2c7fa7f86a5ab00fe747b113056af53a0e820cab16ab2a7796e5891769aaa8.scope - libcontainer container 2d2c7fa7f86a5ab00fe747b113056af53a0e820cab16ab2a7796e5891769aaa8. Apr 30 03:43:29.576833 containerd[1507]: time="2025-04-30T03:43:29.576791195Z" level=info msg="StartContainer for \"97d6639508972069893e4c046e539e9e9bfe72175457701712511a11e2f0d4ad\" returns successfully" Apr 30 03:43:29.593165 containerd[1507]: time="2025-04-30T03:43:29.593131070Z" level=info msg="StartContainer for \"2d2c7fa7f86a5ab00fe747b113056af53a0e820cab16ab2a7796e5891769aaa8\" returns successfully" Apr 30 03:43:29.887437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount641329461.mount: Deactivated successfully. 
Apr 30 03:43:30.253631 kubelet[2782]: I0430 03:43:30.252849 2782 scope.go:117] "RemoveContainer" containerID="e9565a607a67ba3a3d91f1b912fde52c6716df11c3675b58e94920f7f3aeb65e" Apr 30 03:43:30.265531 containerd[1507]: time="2025-04-30T03:43:30.265481958Z" level=info msg="CreateContainer within sandbox \"fb9d5bec967f23c0b2a5f214a3a0f00182218747014d7ac6bb936591bb1a3208\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 30 03:43:30.289783 containerd[1507]: time="2025-04-30T03:43:30.289732090Z" level=info msg="CreateContainer within sandbox \"fb9d5bec967f23c0b2a5f214a3a0f00182218747014d7ac6bb936591bb1a3208\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"c70a8febfa6021f8bece752b93d22d0cf1dfb0b16e3315f6a788b825368e00fd\"" Apr 30 03:43:30.292533 containerd[1507]: time="2025-04-30T03:43:30.292495987Z" level=info msg="StartContainer for \"c70a8febfa6021f8bece752b93d22d0cf1dfb0b16e3315f6a788b825368e00fd\"" Apr 30 03:43:30.350881 systemd[1]: Started cri-containerd-c70a8febfa6021f8bece752b93d22d0cf1dfb0b16e3315f6a788b825368e00fd.scope - libcontainer container c70a8febfa6021f8bece752b93d22d0cf1dfb0b16e3315f6a788b825368e00fd. Apr 30 03:43:30.399174 containerd[1507]: time="2025-04-30T03:43:30.399123517Z" level=info msg="StartContainer for \"c70a8febfa6021f8bece752b93d22d0cf1dfb0b16e3315f6a788b825368e00fd\" returns successfully" Apr 30 03:43:31.818925 systemd[1]: cri-containerd-97d6639508972069893e4c046e539e9e9bfe72175457701712511a11e2f0d4ad.scope: Deactivated successfully. Apr 30 03:43:31.851964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97d6639508972069893e4c046e539e9e9bfe72175457701712511a11e2f0d4ad-rootfs.mount: Deactivated successfully. Apr 30 03:43:31.863540 containerd[1507]: time="2025-04-30T03:43:31.863272004Z" level=info msg="shim disconnected" id=97d6639508972069893e4c046e539e9e9bfe72175457701712511a11e2f0d4ad namespace=k8s.io Apr 30 03:43:31.863540 containerd[1507]: time="2025-04-30T03:43:31.863352252Z" level=warning msg="cleaning up after shim disconnected" id=97d6639508972069893e4c046e539e9e9bfe72175457701712511a11e2f0d4ad namespace=k8s.io Apr 30 03:43:31.863540 containerd[1507]: time="2025-04-30T03:43:31.863367090Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:43:32.276234 kubelet[2782]: I0430 03:43:32.276009 2782 scope.go:117] "RemoveContainer" containerID="4605fd658956c719840a86f282b3e5b8e1cf2cef06b8167b9c3c82ed63c697b4" Apr 30 03:43:32.277199 kubelet[2782]: I0430 03:43:32.276343 2782 scope.go:117] "RemoveContainer" containerID="97d6639508972069893e4c046e539e9e9bfe72175457701712511a11e2f0d4ad" Apr 30 03:43:32.290100 kubelet[2782]: E0430 03:43:32.280719 2782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-797db67f8-p6drq_tigera-operator(75fc7912-3bab-4151-8543-46293ee86019)\"" pod="tigera-operator/tigera-operator-797db67f8-p6drq" podUID="75fc7912-3bab-4151-8543-46293ee86019" Apr 30 03:43:32.311394 containerd[1507]: time="2025-04-30T03:43:32.311313768Z" level=info msg="RemoveContainer for \"4605fd658956c719840a86f282b3e5b8e1cf2cef06b8167b9c3c82ed63c697b4\"" Apr 30 03:43:32.317027 containerd[1507]: time="2025-04-30T03:43:32.316973979Z" level=info msg="RemoveContainer for \"4605fd658956c719840a86f282b3e5b8e1cf2cef06b8167b9c3c82ed63c697b4\" returns successfully" Apr 30 03:43:32.659239 kubelet[2782]: E0430 03:43:32.647861 
2782 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:52992->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-3-9-5ae3ade3a2.183afbbeb998f297 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-3-9-5ae3ade3a2,UID:56114d580898b5678668a645dd851985,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-9-5ae3ade3a2,},FirstTimestamp:2025-04-30 03:43:22.135655063 +0000 UTC m=+342.306186745,LastTimestamp:2025-04-30 03:43:22.135655063 +0000 UTC m=+342.306186745,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-9-5ae3ade3a2,}"