Apr 30 03:44:44.012345 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025
Apr 30 03:44:44.012366 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:44:44.012374 kernel: BIOS-provided physical RAM map:
Apr 30 03:44:44.012380 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 30 03:44:44.012385 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 30 03:44:44.012391 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 30 03:44:44.012397 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Apr 30 03:44:44.012403 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Apr 30 03:44:44.012410 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 30 03:44:44.012416 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 30 03:44:44.012421 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 30 03:44:44.012427 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 30 03:44:44.012432 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 30 03:44:44.012438 kernel: NX (Execute Disable) protection: active
Apr 30 03:44:44.012447 kernel: APIC: Static calls initialized
Apr 30 03:44:44.012453 kernel: SMBIOS 3.0.0 present.
Apr 30 03:44:44.012459 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Apr 30 03:44:44.012465 kernel: Hypervisor detected: KVM
Apr 30 03:44:44.012471 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 30 03:44:44.012477 kernel: kvm-clock: using sched offset of 3274957024 cycles
Apr 30 03:44:44.012484 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 30 03:44:44.012490 kernel: tsc: Detected 2495.312 MHz processor
Apr 30 03:44:44.012497 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 03:44:44.012505 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 03:44:44.012511 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Apr 30 03:44:44.012517 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 30 03:44:44.012524 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 03:44:44.012530 kernel: Using GB pages for direct mapping
Apr 30 03:44:44.012536 kernel: ACPI: Early table checksum verification disabled
Apr 30 03:44:44.012542 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
Apr 30 03:44:44.012549 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:44:44.012555 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:44:44.012563 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:44:44.012569 kernel: ACPI: FACS 0x000000007CFE0000 000040
Apr 30 03:44:44.012575 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:44:44.012581 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:44:44.012588 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:44:44.012594 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:44:44.012600 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Apr 30 03:44:44.012606 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Apr 30 03:44:44.012616 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Apr 30 03:44:44.012623 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Apr 30 03:44:44.012629 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Apr 30 03:44:44.012636 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Apr 30 03:44:44.012642 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Apr 30 03:44:44.012649 kernel: No NUMA configuration found
Apr 30 03:44:44.012657 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Apr 30 03:44:44.012663 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Apr 30 03:44:44.012670 kernel: Zone ranges:
Apr 30 03:44:44.012676 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 03:44:44.012683 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Apr 30 03:44:44.012689 kernel: Normal empty
Apr 30 03:44:44.012696 kernel: Movable zone start for each node
Apr 30 03:44:44.012702 kernel: Early memory node ranges
Apr 30 03:44:44.012709 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 30 03:44:44.012715 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Apr 30 03:44:44.012723 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Apr 30 03:44:44.012729 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 03:44:44.012736 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 30 03:44:44.012742 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 30 03:44:44.012749 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 30 03:44:44.012755 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 30 03:44:44.012762 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 30 03:44:44.012768 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 30 03:44:44.012786 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 30 03:44:44.012794 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 03:44:44.012800 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 30 03:44:44.012807 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 30 03:44:44.012813 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 03:44:44.012820 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 30 03:44:44.012826 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 30 03:44:44.012833 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 30 03:44:44.012839 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 30 03:44:44.012846 kernel: Booting paravirtualized kernel on KVM
Apr 30 03:44:44.012854 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 03:44:44.012861 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 30 03:44:44.012868 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Apr 30 03:44:44.012874 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Apr 30 03:44:44.012881 kernel: pcpu-alloc: [0] 0 1
Apr 30 03:44:44.012887 kernel: kvm-guest: PV spinlocks disabled, no host support
Apr 30 03:44:44.012895 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:44:44.012902 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 03:44:44.012910 kernel: random: crng init done
Apr 30 03:44:44.012916 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 03:44:44.012923 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 30 03:44:44.012929 kernel: Fallback order for Node 0: 0
Apr 30 03:44:44.012936 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Apr 30 03:44:44.012942 kernel: Policy zone: DMA32
Apr 30 03:44:44.012949 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 03:44:44.012956 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 125152K reserved, 0K cma-reserved)
Apr 30 03:44:44.012962 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 03:44:44.012970 kernel: ftrace: allocating 37944 entries in 149 pages
Apr 30 03:44:44.012976 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 03:44:44.012983 kernel: Dynamic Preempt: voluntary
Apr 30 03:44:44.012990 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 03:44:44.012997 kernel: rcu: RCU event tracing is enabled.
Apr 30 03:44:44.013004 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 03:44:44.013010 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 03:44:44.013017 kernel: Rude variant of Tasks RCU enabled.
Apr 30 03:44:44.013023 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 03:44:44.013052 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 03:44:44.013061 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 03:44:44.013068 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 30 03:44:44.013074 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 03:44:44.013081 kernel: Console: colour VGA+ 80x25
Apr 30 03:44:44.013087 kernel: printk: console [tty0] enabled
Apr 30 03:44:44.013094 kernel: printk: console [ttyS0] enabled
Apr 30 03:44:44.013100 kernel: ACPI: Core revision 20230628
Apr 30 03:44:44.013107 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 30 03:44:44.013114 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 03:44:44.013122 kernel: x2apic enabled
Apr 30 03:44:44.013129 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 30 03:44:44.013136 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 30 03:44:44.013142 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 30 03:44:44.013149 kernel: Calibrating delay loop (skipped) preset value.. 4990.62 BogoMIPS (lpj=2495312)
Apr 30 03:44:44.013155 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 30 03:44:44.013162 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 30 03:44:44.013169 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 30 03:44:44.013181 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 03:44:44.013188 kernel: Spectre V2 : Mitigation: Retpolines
Apr 30 03:44:44.013195 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 03:44:44.013203 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 30 03:44:44.013210 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Apr 30 03:44:44.013217 kernel: RETBleed: Mitigation: untrained return thunk
Apr 30 03:44:44.013224 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 30 03:44:44.013231 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 30 03:44:44.013238 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 03:44:44.013246 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 03:44:44.013253 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 03:44:44.013260 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 03:44:44.013267 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Apr 30 03:44:44.013274 kernel: Freeing SMP alternatives memory: 32K
Apr 30 03:44:44.013281 kernel: pid_max: default: 32768 minimum: 301
Apr 30 03:44:44.013288 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 03:44:44.013295 kernel: landlock: Up and running.
Apr 30 03:44:44.013303 kernel: SELinux: Initializing.
Apr 30 03:44:44.013310 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 03:44:44.013317 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 03:44:44.013324 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Apr 30 03:44:44.013331 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:44:44.013338 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:44:44.013345 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:44:44.013352 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 30 03:44:44.013358 kernel: ... version: 0
Apr 30 03:44:44.013367 kernel: ... bit width: 48
Apr 30 03:44:44.013373 kernel: ... generic registers: 6
Apr 30 03:44:44.013380 kernel: ... value mask: 0000ffffffffffff
Apr 30 03:44:44.013387 kernel: ... max period: 00007fffffffffff
Apr 30 03:44:44.013394 kernel: ... fixed-purpose events: 0
Apr 30 03:44:44.013401 kernel: ... event mask: 000000000000003f
Apr 30 03:44:44.013408 kernel: signal: max sigframe size: 1776
Apr 30 03:44:44.013414 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 03:44:44.013421 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 03:44:44.013430 kernel: smp: Bringing up secondary CPUs ...
Apr 30 03:44:44.013436 kernel: smpboot: x86: Booting SMP configuration:
Apr 30 03:44:44.013443 kernel: .... node #0, CPUs: #1
Apr 30 03:44:44.013450 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 03:44:44.013457 kernel: smpboot: Max logical packages: 1
Apr 30 03:44:44.013464 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS)
Apr 30 03:44:44.013471 kernel: devtmpfs: initialized
Apr 30 03:44:44.013477 kernel: x86/mm: Memory block size: 128MB
Apr 30 03:44:44.013485 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 03:44:44.013493 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 03:44:44.013500 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 03:44:44.013507 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 03:44:44.013513 kernel: audit: initializing netlink subsys (disabled)
Apr 30 03:44:44.013520 kernel: audit: type=2000 audit(1745984682.691:1): state=initialized audit_enabled=0 res=1
Apr 30 03:44:44.013527 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 03:44:44.013534 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 30 03:44:44.013541 kernel: cpuidle: using governor menu
Apr 30 03:44:44.013548 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 03:44:44.013556 kernel: dca service started, version 1.12.1
Apr 30 03:44:44.013563 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 30 03:44:44.013570 kernel: PCI: Using configuration type 1 for base access
Apr 30 03:44:44.013577 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 30 03:44:44.013584 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 03:44:44.013591 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 03:44:44.013597 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 03:44:44.013604 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 03:44:44.013611 kernel: ACPI: Added _OSI(Module Device)
Apr 30 03:44:44.013619 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 03:44:44.013626 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 03:44:44.013633 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 03:44:44.013640 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 03:44:44.013647 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 30 03:44:44.013653 kernel: ACPI: Interpreter enabled
Apr 30 03:44:44.013660 kernel: ACPI: PM: (supports S0 S5)
Apr 30 03:44:44.013667 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 03:44:44.013674 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 03:44:44.013683 kernel: PCI: Using E820 reservations for host bridge windows
Apr 30 03:44:44.013690 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 30 03:44:44.013697 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 03:44:44.013828 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 03:44:44.013905 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 30 03:44:44.013974 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 30 03:44:44.013984 kernel: PCI host bridge to bus 0000:00
Apr 30 03:44:44.016075 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 30 03:44:44.016150 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 30 03:44:44.016214 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 30 03:44:44.016276 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Apr 30 03:44:44.016337 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 30 03:44:44.016398 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 30 03:44:44.016460 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 03:44:44.016547 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 30 03:44:44.016628 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Apr 30 03:44:44.016701 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Apr 30 03:44:44.016790 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Apr 30 03:44:44.016863 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Apr 30 03:44:44.016934 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Apr 30 03:44:44.017006 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 30 03:44:44.017115 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 30 03:44:44.017188 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Apr 30 03:44:44.017268 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 30 03:44:44.017339 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Apr 30 03:44:44.017415 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 30 03:44:44.017486 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Apr 30 03:44:44.017568 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 30 03:44:44.017639 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Apr 30 03:44:44.017719 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 30 03:44:44.017807 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Apr 30 03:44:44.017886 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 30 03:44:44.017957 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Apr 30 03:44:44.018523 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 30 03:44:44.018606 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Apr 30 03:44:44.018684 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 30 03:44:44.018754 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Apr 30 03:44:44.018846 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 30 03:44:44.018919 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Apr 30 03:44:44.019001 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 30 03:44:44.019196 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 30 03:44:44.019280 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 30 03:44:44.019351 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Apr 30 03:44:44.019422 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Apr 30 03:44:44.019498 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 30 03:44:44.019590 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 30 03:44:44.019703 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 30 03:44:44.019838 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Apr 30 03:44:44.019941 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Apr 30 03:44:44.020065 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Apr 30 03:44:44.020171 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 30 03:44:44.020267 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Apr 30 03:44:44.020374 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Apr 30 03:44:44.020499 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 30 03:44:44.020602 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Apr 30 03:44:44.020698 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 30 03:44:44.020812 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Apr 30 03:44:44.020903 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Apr 30 03:44:44.021010 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 30 03:44:44.021183 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Apr 30 03:44:44.021291 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Apr 30 03:44:44.021399 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 30 03:44:44.021485 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Apr 30 03:44:44.021569 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Apr 30 03:44:44.021666 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 30 03:44:44.021762 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Apr 30 03:44:44.021883 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 30 03:44:44.021971 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Apr 30 03:44:44.022089 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Apr 30 03:44:44.022197 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 30 03:44:44.022284 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
Apr 30 03:44:44.022369 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Apr 30 03:44:44.022455 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 30 03:44:44.025261 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Apr 30 03:44:44.025341 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Apr 30 03:44:44.025426 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 30 03:44:44.025502 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Apr 30 03:44:44.025574 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Apr 30 03:44:44.025649 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 30 03:44:44.025748 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Apr 30 03:44:44.025842 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Apr 30 03:44:44.025857 kernel: acpiphp: Slot [0] registered
Apr 30 03:44:44.025979 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 30 03:44:44.026155 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Apr 30 03:44:44.026234 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Apr 30 03:44:44.026307 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Apr 30 03:44:44.026380 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 30 03:44:44.026451 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Apr 30 03:44:44.026525 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Apr 30 03:44:44.026534 kernel: acpiphp: Slot [0-2] registered
Apr 30 03:44:44.026638 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 30 03:44:44.026733 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Apr 30 03:44:44.026845 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Apr 30 03:44:44.026859 kernel: acpiphp: Slot [0-3] registered
Apr 30 03:44:44.026948 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 30 03:44:44.027022 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Apr 30 03:44:44.027166 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Apr 30 03:44:44.027180 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 30 03:44:44.027187 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 30 03:44:44.027195 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 30 03:44:44.027202 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 30 03:44:44.027209 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 30 03:44:44.027217 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 30 03:44:44.027224 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 30 03:44:44.027231 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 30 03:44:44.027239 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 30 03:44:44.027247 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 30 03:44:44.027254 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 30 03:44:44.027261 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 30 03:44:44.027269 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 30 03:44:44.027276 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 30 03:44:44.027283 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 30 03:44:44.027290 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 30 03:44:44.027297 kernel: iommu: Default domain type: Translated
Apr 30 03:44:44.027306 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 30 03:44:44.027313 kernel: PCI: Using ACPI for IRQ routing
Apr 30 03:44:44.027320 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 30 03:44:44.027327 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 30 03:44:44.027335 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Apr 30 03:44:44.027410 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 30 03:44:44.027481 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 30 03:44:44.027549 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 30 03:44:44.027559 kernel: vgaarb: loaded
Apr 30 03:44:44.027568 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 30 03:44:44.027576 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 30 03:44:44.027583 kernel: clocksource: Switched to clocksource kvm-clock
Apr 30 03:44:44.027590 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 03:44:44.027598 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 03:44:44.027605 kernel: pnp: PnP ACPI init
Apr 30 03:44:44.027684 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 30 03:44:44.027696 kernel: pnp: PnP ACPI: found 5 devices
Apr 30 03:44:44.027705 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 30 03:44:44.027713 kernel: NET: Registered PF_INET protocol family
Apr 30 03:44:44.027720 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 03:44:44.027728 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 30 03:44:44.027735 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 03:44:44.027743 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 30 03:44:44.027750 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 30 03:44:44.027757 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 30 03:44:44.027764 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 03:44:44.027787 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 03:44:44.027795 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 03:44:44.027802 kernel: NET: Registered PF_XDP protocol family
Apr 30 03:44:44.027875 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 30 03:44:44.027946 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 30 03:44:44.028016 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 30 03:44:44.028127 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Apr 30 03:44:44.028203 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Apr 30 03:44:44.028274 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Apr 30 03:44:44.028348 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 30 03:44:44.028419 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Apr 30 03:44:44.028489 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Apr 30 03:44:44.028564 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 30 03:44:44.028659 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Apr 30 03:44:44.028748 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Apr 30 03:44:44.028867 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 30 03:44:44.028974 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Apr 30 03:44:44.029093 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Apr 30 03:44:44.029167 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 30 03:44:44.029236 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Apr 30 03:44:44.029304 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Apr 30 03:44:44.029372 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 30 03:44:44.029445 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Apr 30 03:44:44.029579 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Apr 30 03:44:44.029671 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 30 03:44:44.029741 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Apr 30 03:44:44.029833 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Apr 30 03:44:44.029904 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 30 03:44:44.029972 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Apr 30 03:44:44.030137 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Apr 30 03:44:44.032134 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Apr 30 03:44:44.032222 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 30 03:44:44.032296 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Apr 30 03:44:44.032372 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Apr 30 03:44:44.032443 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Apr 30 03:44:44.032553 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 30 03:44:44.032641 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Apr 30 03:44:44.032713 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Apr 30 03:44:44.032800 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Apr 30 03:44:44.032877 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 30 03:44:44.032983 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 30 03:44:44.033065 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 30 03:44:44.033130 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Apr 30 03:44:44.033202 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 30 03:44:44.033265 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 30 03:44:44.033343 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Apr 30 03:44:44.033411 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Apr 30 03:44:44.033493 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Apr 30 03:44:44.033596 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Apr 30 03:44:44.033683 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Apr 30 03:44:44.033751 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Apr 30 03:44:44.033841 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Apr 30 03:44:44.033908 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Apr 30 03:44:44.033982 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Apr 30 03:44:44.035535 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Apr 30 03:44:44.035647 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Apr 30 03:44:44.035720 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Apr 30 03:44:44.035807 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Apr 30 03:44:44.035886 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Apr 30 03:44:44.035962 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Apr 30 03:44:44.038610 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Apr 30 03:44:44.038693 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Apr 30 03:44:44.038763 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Apr 30 03:44:44.038850 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Apr 30 03:44:44.038916 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Apr 30 03:44:44.038980 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Apr 30 03:44:44.038991 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 30 03:44:44.038999 kernel: PCI: CLS 0 bytes, default 64
Apr 30 03:44:44.039007 kernel: Initialise system trusted keyrings
Apr 30 03:44:44.039015 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 30 03:44:44.039025 kernel: Key type asymmetric registered
Apr 30 03:44:44.039045 kernel: Asymmetric key parser 'x509' registered
Apr 30 03:44:44.039053 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 30 03:44:44.039061 kernel: io scheduler mq-deadline registered
Apr 30 03:44:44.039069 kernel: io scheduler kyber registered
Apr 30 03:44:44.039076 kernel: io scheduler bfq registered
Apr 30 03:44:44.039155 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Apr 30 03:44:44.039229 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Apr 30 03:44:44.039302 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Apr 30 03:44:44.039378 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Apr 30 03:44:44.039451 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Apr 30 03:44:44.039557 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Apr 30 03:44:44.039645 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Apr 30 03:44:44.039718 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Apr 30 03:44:44.039805 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Apr 30 03:44:44.039878 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Apr 30 03:44:44.039950 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Apr 30 03:44:44.040090 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Apr 30 03:44:44.040191 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Apr 30 03:44:44.040266 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Apr 30 03:44:44.040339 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Apr 30 03:44:44.040410 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Apr 30 03:44:44.040420 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 30 03:44:44.040508 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Apr 30 03:44:44.040609 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Apr 30 03:44:44.040625 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 30 03:44:44.040633 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Apr 30 03:44:44.040641 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 03:44:44.040648 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 30 03:44:44.040656 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 30 03:44:44.040664 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 30 03:44:44.040672 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 30 03:44:44.040752 kernel: rtc_cmos 00:03: RTC can wake from S4
Apr 30 03:44:44.040766 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 30 03:44:44.040851 kernel: rtc_cmos 00:03: registered as rtc0
Apr 30 03:44:44.040918 kernel: rtc_cmos 00:03: setting system clock to 2025-04-30T03:44:43 UTC (1745984683)
Apr 30 03:44:44.040983 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 30 03:44:44.040992 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Apr 30 03:44:44.041001 kernel: NET: Registered PF_INET6 protocol family
Apr 30 03:44:44.041008 kernel: Segment Routing with IPv6
Apr 30 03:44:44.041016 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 03:44:44.041023 kernel: NET: Registered PF_PACKET protocol family
Apr 30 03:44:44.041047 kernel: Key type dns_resolver registered
Apr 30 03:44:44.041054 kernel: IPI shorthand broadcast: enabled
Apr 30 03:44:44.041062 kernel: sched_clock: Marking stable (1354011936, 147047620)->(1589854796, -88795240)
Apr 30 03:44:44.041070 kernel: registered taskstats version 1
Apr 30 03:44:44.041077 kernel: Loading compiled-in X.509 certificates
Apr 30 03:44:44.041085 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b'
Apr 30 03:44:44.041092 kernel: Key type .fscrypt registered
Apr 30 03:44:44.041100 kernel: Key type fscrypt-provisioning registered
Apr 30 03:44:44.041109 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 03:44:44.041118 kernel: ima: Allocated hash algorithm: sha1
Apr 30 03:44:44.041129 kernel: ima: No architecture policies found
Apr 30 03:44:44.041137 kernel: clk: Disabling unused clocks
Apr 30 03:44:44.041144 kernel: Freeing unused kernel image (initmem) memory: 42864K
Apr 30 03:44:44.041152 kernel: Write protecting the kernel read-only data: 36864k
Apr 30 03:44:44.041159 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
Apr 30 03:44:44.041167 kernel: Run /init as init process
Apr 30 03:44:44.041174 kernel: with arguments:
Apr 30 03:44:44.041184 kernel: /init
Apr 30 03:44:44.041191 kernel: with environment:
Apr 30 03:44:44.041198 kernel: HOME=/
Apr 30 03:44:44.041205 kernel: TERM=linux
Apr 30 03:44:44.041212 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 03:44:44.041222 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:44:44.041232 systemd[1]: Detected virtualization kvm.
Apr 30 03:44:44.041241 systemd[1]: Detected architecture x86-64.
Apr 30 03:44:44.041250 systemd[1]: Running in initrd.
Apr 30 03:44:44.041258 systemd[1]: No hostname configured, using default hostname.
Apr 30 03:44:44.041265 systemd[1]: Hostname set to <localhost>.
Apr 30 03:44:44.041274 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 03:44:44.041281 systemd[1]: Queued start job for default target initrd.target.
Apr 30 03:44:44.041289 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:44:44.041297 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:44:44.041306 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 03:44:44.041315 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:44:44.041323 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 03:44:44.041331 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 03:44:44.041340 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 03:44:44.041348 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 03:44:44.041356 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:44:44.041364 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:44:44.041373 systemd[1]: Reached target paths.target - Path Units.
Apr 30 03:44:44.041381 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:44:44.041389 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:44:44.041397 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 03:44:44.041406 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:44:44.041413 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:44:44.041421 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 03:44:44.041429 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 03:44:44.041437 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:44:44.041447 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:44:44.041455 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:44:44.041466 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 03:44:44.041478 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 03:44:44.041489 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:44:44.041501 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 03:44:44.041513 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 03:44:44.041524 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:44:44.041539 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:44:44.041551 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:44:44.041563 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 03:44:44.041604 systemd-journald[186]: Collecting audit messages is disabled.
Apr 30 03:44:44.041627 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:44:44.041635 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 03:44:44.041644 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 03:44:44.041652 systemd-journald[186]: Journal started
Apr 30 03:44:44.041672 systemd-journald[186]: Runtime Journal (/run/log/journal/b2ffb074a4cd4d6e960a039f383bc01c) is 4.8M, max 38.4M, 33.6M free.
Apr 30 03:44:44.035695 systemd-modules-load[188]: Inserted module 'overlay'
Apr 30 03:44:44.048044 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:44:44.067049 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 03:44:44.068573 systemd-modules-load[188]: Inserted module 'br_netfilter'
Apr 30 03:44:44.091527 kernel: Bridge firewalling registered
Apr 30 03:44:44.091046 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:44:44.092866 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:44:44.093458 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:44:44.099191 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:44:44.100684 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:44:44.104160 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:44:44.110610 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 03:44:44.114324 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:44:44.117835 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:44:44.119150 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:44:44.120207 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:44:44.124169 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 03:44:44.126127 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 03:44:44.134112 dracut-cmdline[221]: dracut-dracut-053
Apr 30 03:44:44.136805 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:44:44.158526 systemd-resolved[223]: Positive Trust Anchors:
Apr 30 03:44:44.159158 systemd-resolved[223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 03:44:44.159189 systemd-resolved[223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 03:44:44.162242 systemd-resolved[223]: Defaulting to hostname 'linux'.
Apr 30 03:44:44.168519 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 03:44:44.169226 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:44:44.187066 kernel: SCSI subsystem initialized
Apr 30 03:44:44.197069 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 03:44:44.207071 kernel: iscsi: registered transport (tcp)
Apr 30 03:44:44.225228 kernel: iscsi: registered transport (qla4xxx)
Apr 30 03:44:44.225304 kernel: QLogic iSCSI HBA Driver
Apr 30 03:44:44.253359 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:44:44.258186 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 03:44:44.293256 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 03:44:44.293353 kernel: device-mapper: uevent: version 1.0.3
Apr 30 03:44:44.295132 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 03:44:44.336085 kernel: raid6: avx2x4 gen() 29671 MB/s
Apr 30 03:44:44.353071 kernel: raid6: avx2x2 gen() 30079 MB/s
Apr 30 03:44:44.370303 kernel: raid6: avx2x1 gen() 24723 MB/s
Apr 30 03:44:44.370383 kernel: raid6: using algorithm avx2x2 gen() 30079 MB/s
Apr 30 03:44:44.389115 kernel: raid6: .... xor() 19432 MB/s, rmw enabled
Apr 30 03:44:44.389220 kernel: raid6: using avx2x2 recovery algorithm
Apr 30 03:44:44.410326 kernel: xor: automatically using best checksumming function avx
Apr 30 03:44:44.558089 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 03:44:44.574897 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:44:44.581196 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:44:44.621002 systemd-udevd[406]: Using default interface naming scheme 'v255'.
Apr 30 03:44:44.628912 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:44:44.641260 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 03:44:44.667290 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
Apr 30 03:44:44.722213 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:44:44.727309 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:44:44.787192 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:44:44.797389 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 03:44:44.822196 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:44:44.825687 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:44:44.828397 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:44:44.830485 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:44:44.839214 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 03:44:44.864267 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:44:44.903992 kernel: scsi host0: Virtio SCSI HBA
Apr 30 03:44:44.913416 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 30 03:44:44.925092 kernel: cryptd: max_cpu_qlen set to 1000
Apr 30 03:44:44.930281 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:44:44.930474 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:44:44.932426 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:44:44.933531 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:44:44.933587 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:44:44.935092 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:44:44.947213 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:44:44.953014 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 30 03:44:44.955113 kernel: AES CTR mode by8 optimization enabled
Apr 30 03:44:44.955136 kernel: ACPI: bus type USB registered
Apr 30 03:44:44.973087 kernel: usbcore: registered new interface driver usbfs
Apr 30 03:44:44.976120 kernel: libata version 3.00 loaded.
Apr 30 03:44:44.984064 kernel: ahci 0000:00:1f.2: version 3.0
Apr 30 03:44:45.025101 kernel: usbcore: registered new interface driver hub
Apr 30 03:44:45.025143 kernel: usbcore: registered new device driver usb
Apr 30 03:44:45.025152 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 30 03:44:45.025163 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 30 03:44:45.025317 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 30 03:44:45.025440 kernel: scsi host1: ahci
Apr 30 03:44:45.025624 kernel: scsi host2: ahci
Apr 30 03:44:45.025812 kernel: scsi host3: ahci
Apr 30 03:44:45.025933 kernel: scsi host4: ahci
Apr 30 03:44:45.026393 kernel: scsi host5: ahci
Apr 30 03:44:45.026548 kernel: scsi host6: ahci
Apr 30 03:44:45.026637 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48
Apr 30 03:44:45.026647 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48
Apr 30 03:44:45.026656 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48
Apr 30 03:44:45.026665 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48
Apr 30 03:44:45.026674 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48
Apr 30 03:44:45.026686 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48
Apr 30 03:44:45.026695 kernel: sd 0:0:0:0: Power-on or device reset occurred
Apr 30 03:44:45.031259 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Apr 30 03:44:45.031388 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 30 03:44:45.031481 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Apr 30 03:44:45.031600 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 30 03:44:45.031698 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 03:44:45.031720 kernel: GPT:17805311 != 80003071
Apr 30 03:44:45.031729 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 03:44:45.031739 kernel: GPT:17805311 != 80003071
Apr 30 03:44:45.031748 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 03:44:45.031757 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 03:44:45.031767 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 30 03:44:45.068472 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:44:45.073181 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:44:45.089242 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:44:45.340869 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Apr 30 03:44:45.341016 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 30 03:44:45.341047 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 30 03:44:45.341087 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 30 03:44:45.345420 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 30 03:44:45.345462 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 30 03:44:45.347245 kernel: ata1.00: applying bridge limits
Apr 30 03:44:45.349354 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 30 03:44:45.350073 kernel: ata1.00: configured for UDMA/100
Apr 30 03:44:45.351371 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 30 03:44:45.376325 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 30 03:44:45.402336 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Apr 30 03:44:45.402516 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Apr 30 03:44:45.402647 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 30 03:44:45.403288 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Apr 30 03:44:45.403414 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Apr 30 03:44:45.403537 kernel: hub 1-0:1.0: USB hub found
Apr 30 03:44:45.403690 kernel: hub 1-0:1.0: 4 ports detected
Apr 30 03:44:45.403830 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Apr 30 03:44:45.403971 kernel: hub 2-0:1.0: USB hub found
Apr 30 03:44:45.404129 kernel: hub 2-0:1.0: 4 ports detected
Apr 30 03:44:45.420291 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 30 03:44:45.432267 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 30 03:44:45.432293 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Apr 30 03:44:45.442103 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (454)
Apr 30 03:44:45.448221 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (457)
Apr 30 03:44:45.459683 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 30 03:44:45.472643 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 30 03:44:45.482785 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 30 03:44:45.484690 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 30 03:44:45.492105 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 30 03:44:45.512407 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 03:44:45.521495 disk-uuid[577]: Primary Header is updated.
Apr 30 03:44:45.521495 disk-uuid[577]: Secondary Entries is updated.
Apr 30 03:44:45.521495 disk-uuid[577]: Secondary Header is updated.
Apr 30 03:44:45.537696 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 03:44:45.544153 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 03:44:45.559083 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 03:44:45.637088 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Apr 30 03:44:45.775094 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 30 03:44:45.779542 kernel: usbcore: registered new interface driver usbhid
Apr 30 03:44:45.779583 kernel: usbhid: USB HID core driver
Apr 30 03:44:45.786876 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Apr 30 03:44:45.786944 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Apr 30 03:44:46.570091 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 03:44:46.570220 disk-uuid[579]: The operation has completed successfully.
Apr 30 03:44:46.643571 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 03:44:46.643764 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 03:44:46.691667 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 03:44:46.698565 sh[599]: Success
Apr 30 03:44:46.723097 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 30 03:44:46.799874 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 03:44:46.808228 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 03:44:46.812504 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 03:44:46.846422 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26
Apr 30 03:44:46.846502 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:44:46.849253 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 03:44:46.849316 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 03:44:46.850539 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 03:44:46.863090 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 30 03:44:46.866011 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 03:44:46.868958 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 03:44:46.876335 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 03:44:46.881233 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 03:44:46.905330 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:44:46.905412 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:44:46.905435 kernel: BTRFS info (device sda6): using free space tree
Apr 30 03:44:46.914967 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 30 03:44:46.915088 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 03:44:46.932673 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 03:44:46.939100 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:44:46.947202 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 03:44:46.954315 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 03:44:46.969875 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:44:46.998284 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 03:44:47.024301 systemd-networkd[780]: lo: Link UP
Apr 30 03:44:47.024314 systemd-networkd[780]: lo: Gained carrier
Apr 30 03:44:47.026256 systemd-networkd[780]: Enumeration completed
Apr 30 03:44:47.026578 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 03:44:47.027101 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:44:47.027104 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:44:47.028336 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:44:47.028339 systemd-networkd[780]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:44:47.028930 systemd-networkd[780]: eth0: Link UP
Apr 30 03:44:47.028933 systemd-networkd[780]: eth0: Gained carrier
Apr 30 03:44:47.028939 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:44:47.037550 systemd[1]: Reached target network.target - Network.
Apr 30 03:44:47.038749 systemd-networkd[780]: eth1: Link UP
Apr 30 03:44:47.038754 systemd-networkd[780]: eth1: Gained carrier
Apr 30 03:44:47.038788 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:44:47.077103 ignition[749]: Ignition 2.19.0
Apr 30 03:44:47.077121 ignition[749]: Stage: fetch-offline
Apr 30 03:44:47.077172 ignition[749]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:44:47.077182 ignition[749]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 03:44:47.079588 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:44:47.077298 ignition[749]: parsed url from cmdline: ""
Apr 30 03:44:47.077302 ignition[749]: no config URL provided
Apr 30 03:44:47.077307 ignition[749]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:44:47.077315 ignition[749]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:44:47.077321 ignition[749]: failed to fetch config: resource requires networking
Apr 30 03:44:47.077734 ignition[749]: Ignition finished successfully
Apr 30 03:44:47.085185 systemd-networkd[780]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 30 03:44:47.086295 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 03:44:47.101461 ignition[788]: Ignition 2.19.0
Apr 30 03:44:47.101617 ignition[788]: Stage: fetch
Apr 30 03:44:47.101852 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:44:47.101863 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 03:44:47.101969 ignition[788]: parsed url from cmdline: ""
Apr 30 03:44:47.101973 ignition[788]: no config URL provided
Apr 30 03:44:47.101978 ignition[788]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:44:47.101985 ignition[788]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:44:47.102012 ignition[788]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 30 03:44:47.102241 ignition[788]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 30 03:44:47.107082 systemd-networkd[780]: eth0: DHCPv4 address 157.180.64.98/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 30 03:44:47.303174 ignition[788]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 30 03:44:47.313156 ignition[788]: GET result: OK
Apr 30 03:44:47.313318 ignition[788]: parsing config with SHA512: 365a7514379b99e25d6966d32f5800685387cbfa2961c75f89a2fec9f061c3110efdce0ecf3390d3661d554905d5712cdd7a6481c4352fd4004fd70efcbf31c4
Apr 30 03:44:47.321125 unknown[788]: fetched base config from "system"
Apr 30 03:44:47.321142 unknown[788]: fetched base config from "system"
Apr 30 03:44:47.323342 ignition[788]: fetch: fetch complete
Apr 30 03:44:47.321152 unknown[788]: fetched user config from "hetzner"
Apr 30 03:44:47.323353 ignition[788]: fetch: fetch passed
Apr 30 03:44:47.326310 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 03:44:47.323448 ignition[788]: Ignition finished successfully
Apr 30 03:44:47.335282 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 03:44:47.357269 ignition[796]: Ignition 2.19.0
Apr 30 03:44:47.357288 ignition[796]: Stage: kargs
Apr 30 03:44:47.357577 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:44:47.357594 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 03:44:47.359303 ignition[796]: kargs: kargs passed
Apr 30 03:44:47.360437 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 03:44:47.359364 ignition[796]: Ignition finished successfully
Apr 30 03:44:47.375408 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 03:44:47.392003 ignition[802]: Ignition 2.19.0
Apr 30 03:44:47.393463 ignition[802]: Stage: disks
Apr 30 03:44:47.393842 ignition[802]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:44:47.393860 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 03:44:47.396664 ignition[802]: disks: disks passed
Apr 30 03:44:47.396742 ignition[802]: Ignition finished successfully
Apr 30 03:44:47.398682 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 03:44:47.401483 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 03:44:47.402642 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 03:44:47.404589 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:44:47.406377 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 03:44:47.408131 systemd[1]: Reached target basic.target - Basic System.
Apr 30 03:44:47.416340 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 03:44:47.433695 systemd-fsck[810]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 30 03:44:47.437987 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 03:44:47.443212 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 03:44:47.541049 kernel: EXT4-fs (sda9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none.
Apr 30 03:44:47.541374 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 03:44:47.542744 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 03:44:47.554164 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:44:47.556726 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 03:44:47.559177 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 30 03:44:47.561872 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 03:44:47.563658 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:44:47.572065 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (818)
Apr 30 03:44:47.575261 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 03:44:47.579880 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:44:47.579905 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:44:47.579916 kernel: BTRFS info (device sda6): using free space tree
Apr 30 03:44:47.586041 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 30 03:44:47.586101 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 03:44:47.587229 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 03:44:47.592373 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:44:47.639061 coreos-metadata[820]: Apr 30 03:44:47.638 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 30 03:44:47.640676 coreos-metadata[820]: Apr 30 03:44:47.640 INFO Fetch successful
Apr 30 03:44:47.642868 initrd-setup-root[845]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 03:44:47.644491 coreos-metadata[820]: Apr 30 03:44:47.643 INFO wrote hostname ci-4081-3-3-b-745f04f342 to /sysroot/etc/hostname
Apr 30 03:44:47.645552 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 03:44:47.648685 initrd-setup-root[853]: cut: /sysroot/etc/group: No such file or directory
Apr 30 03:44:47.651888 initrd-setup-root[860]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 03:44:47.655248 initrd-setup-root[867]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 03:44:47.719582 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 03:44:47.723134 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 03:44:47.727901 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 03:44:47.735076 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:44:47.756323 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 03:44:47.758621 ignition[935]: INFO : Ignition 2.19.0
Apr 30 03:44:47.758621 ignition[935]: INFO : Stage: mount
Apr 30 03:44:47.759912 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:44:47.759912 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 03:44:47.761482 ignition[935]: INFO : mount: mount passed
Apr 30 03:44:47.761482 ignition[935]: INFO : Ignition finished successfully
Apr 30 03:44:47.761831 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 03:44:47.773210 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 03:44:47.845467 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 03:44:47.852538 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:44:47.878063 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (947)
Apr 30 03:44:47.883692 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:44:47.883747 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:44:47.888491 kernel: BTRFS info (device sda6): using free space tree
Apr 30 03:44:47.893877 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 30 03:44:47.893928 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 03:44:47.898371 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:44:47.926554 ignition[964]: INFO : Ignition 2.19.0
Apr 30 03:44:47.926554 ignition[964]: INFO : Stage: files
Apr 30 03:44:47.926554 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:44:47.926554 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 03:44:47.926554 ignition[964]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 03:44:47.930484 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 03:44:47.930484 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 03:44:47.932877 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 03:44:47.933647 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 03:44:47.933647 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 03:44:47.933602 unknown[964]: wrote ssh authorized keys file for user: core
Apr 30 03:44:47.936484 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Apr 30 03:44:47.936484 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Apr 30 03:44:48.137253 systemd-networkd[780]: eth0: Gained IPv6LL
Apr 30 03:44:48.174178 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 30 03:44:48.585262 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Apr 30 03:44:48.585262 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 03:44:48.590269 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 03:44:48.590269 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:44:48.590269 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:44:48.590269 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:44:48.590269 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:44:48.590269 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:44:48.590269 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:44:48.590269 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:44:48.590269 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:44:48.590269 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 03:44:48.590269 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 03:44:48.590269 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 03:44:48.590269 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Apr 30 03:44:48.713560 systemd-networkd[780]: eth1: Gained IPv6LL
Apr 30 03:44:49.236399 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 30 03:44:49.634491 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 03:44:49.634491 ignition[964]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 30 03:44:49.637979 ignition[964]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 03:44:49.637979 ignition[964]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 03:44:49.637979 ignition[964]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 30 03:44:49.637979 ignition[964]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 30 03:44:49.637979 ignition[964]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 30 03:44:49.637979 ignition[964]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 30 03:44:49.637979 ignition[964]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 30 03:44:49.637979 ignition[964]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 03:44:49.637979 ignition[964]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 03:44:49.637979 ignition[964]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:44:49.637979 ignition[964]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:44:49.637979 ignition[964]: INFO : files: files passed
Apr 30 03:44:49.637979 ignition[964]: INFO : Ignition finished successfully
Apr 30 03:44:49.638615 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 03:44:49.646221 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 03:44:49.651356 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 03:44:49.654492 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 03:44:49.654601 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 03:44:49.664586 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:44:49.665787 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:44:49.665787 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:44:49.666649 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:44:49.668801 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 03:44:49.675194 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 03:44:49.693693 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 03:44:49.693849 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 03:44:49.695273 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 03:44:49.696148 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 03:44:49.697688 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 03:44:49.705234 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 03:44:49.721564 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:44:49.728219 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 03:44:49.743572 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:44:49.745853 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:44:49.746918 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 03:44:49.748483 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 03:44:49.748655 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:44:49.750286 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 03:44:49.752167 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 03:44:49.753568 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 03:44:49.755015 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:44:49.756663 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 03:44:49.758387 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 03:44:49.759912 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:44:49.761523 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 03:44:49.763159 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 03:44:49.764618 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 03:44:49.765932 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 03:44:49.766253 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:44:49.767871 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:44:49.769048 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:44:49.770811 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 03:44:49.770976 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:44:49.772451 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 03:44:49.772610 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:44:49.774486 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 03:44:49.774650 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:44:49.776602 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 03:44:49.776740 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 03:44:49.778010 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 30 03:44:49.778181 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 03:44:49.785712 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 03:44:49.790426 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 03:44:49.791694 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 03:44:49.792062 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:44:49.795300 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 03:44:49.795513 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:44:49.807430 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 03:44:49.807551 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 03:44:49.811100 ignition[1017]: INFO : Ignition 2.19.0
Apr 30 03:44:49.811100 ignition[1017]: INFO : Stage: umount
Apr 30 03:44:49.811100 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:44:49.811100 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 03:44:49.821794 ignition[1017]: INFO : umount: umount passed
Apr 30 03:44:49.821794 ignition[1017]: INFO : Ignition finished successfully
Apr 30 03:44:49.813342 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 03:44:49.813446 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 03:44:49.814723 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 03:44:49.814813 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 03:44:49.822588 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 03:44:49.822637 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 03:44:49.824152 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 30 03:44:49.824189 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 30 03:44:49.824786 systemd[1]: Stopped target network.target - Network.
Apr 30 03:44:49.825968 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 03:44:49.826007 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:44:49.826524 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 03:44:49.829083 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 03:44:49.834113 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:44:49.834642 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 03:44:49.835874 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 03:44:49.836820 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 03:44:49.836852 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:44:49.837692 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 03:44:49.837721 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:44:49.838614 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 03:44:49.838649 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 03:44:49.839577 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 03:44:49.839610 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 03:44:49.840652 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 03:44:49.841908 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 03:44:49.844189 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 03:44:49.844663 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 03:44:49.844739 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 03:44:49.846340 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 03:44:49.846399 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 03:44:49.846466 systemd-networkd[780]: eth1: DHCPv6 lease lost
Apr 30 03:44:49.849823 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 03:44:49.849950 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 03:44:49.850125 systemd-networkd[780]: eth0: DHCPv6 lease lost
Apr 30 03:44:49.854224 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 03:44:49.854322 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 03:44:49.855651 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 03:44:49.855689 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:44:49.869271 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 03:44:49.869884 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 03:44:49.869957 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:44:49.870611 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 03:44:49.870657 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:44:49.871360 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 03:44:49.871408 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:44:49.872521 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 03:44:49.872566 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:44:49.873982 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:44:49.892822 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 03:44:49.893624 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 03:44:49.894807 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 03:44:49.894954 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:44:49.896528 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 03:44:49.896583 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:44:49.897966 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 03:44:49.897997 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:44:49.899222 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 03:44:49.899266 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:44:49.900970 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 03:44:49.901009 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:44:49.902274 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:44:49.902315 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:44:49.908571 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 03:44:49.909921 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 03:44:49.910002 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:44:49.911010 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 30 03:44:49.911071 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:44:49.913147 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 03:44:49.913194 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:44:49.913784 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:44:49.913827 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:44:49.914967 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 03:44:49.915104 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 03:44:49.916782 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 03:44:49.922758 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 03:44:49.933819 systemd[1]: Switching root.
Apr 30 03:44:49.987313 systemd-journald[186]: Journal stopped
Apr 30 03:44:51.100343 systemd-journald[186]: Received SIGTERM from PID 1 (systemd).
Apr 30 03:44:51.100409 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 03:44:51.100421 kernel: SELinux: policy capability open_perms=1
Apr 30 03:44:51.100430 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 03:44:51.100439 kernel: SELinux: policy capability always_check_network=0
Apr 30 03:44:51.100452 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 03:44:51.100467 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 03:44:51.100481 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 03:44:51.100493 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 03:44:51.100506 kernel: audit: type=1403 audit(1745984690.166:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 03:44:51.100516 systemd[1]: Successfully loaded SELinux policy in 46.265ms.
Apr 30 03:44:51.100536 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.775ms.
Apr 30 03:44:51.100547 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:44:51.100557 systemd[1]: Detected virtualization kvm.
Apr 30 03:44:51.100579 systemd[1]: Detected architecture x86-64.
Apr 30 03:44:51.100588 systemd[1]: Detected first boot.
Apr 30 03:44:51.100599 systemd[1]: Hostname set to <ci-4081-3-3-b-745f04f342>.
Apr 30 03:44:51.100609 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 03:44:51.100619 zram_generator::config[1060]: No configuration found.
Apr 30 03:44:51.100630 systemd[1]: Populated /etc with preset unit settings.
Apr 30 03:44:51.100639 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 30 03:44:51.100649 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 30 03:44:51.100658 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 30 03:44:51.100668 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 03:44:51.100684 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 03:44:51.100693 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 03:44:51.100703 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 03:44:51.100713 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 03:44:51.100723 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 03:44:51.100733 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 03:44:51.100742 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 03:44:51.100751 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:44:51.100761 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:44:51.100784 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 03:44:51.100794 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 03:44:51.100810 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 03:44:51.100820 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:44:51.100830 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 30 03:44:51.100839 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:44:51.100848 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 30 03:44:51.100860 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 30 03:44:51.100869 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 30 03:44:51.100879 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 03:44:51.100889 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:44:51.100898 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:44:51.100908 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:44:51.100917 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:44:51.100927 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 03:44:51.100938 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 03:44:51.100947 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:44:51.100957 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:44:51.100967 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:44:51.100976 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 03:44:51.100985 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 03:44:51.100995 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 03:44:51.101004 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 03:44:51.101025 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:44:51.101054 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 03:44:51.101064 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 03:44:51.101074 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 03:44:51.101085 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 03:44:51.101095 systemd[1]: Reached target machines.target - Containers.
Apr 30 03:44:51.101106 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 03:44:51.101116 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:44:51.101126 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:44:51.101136 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 03:44:51.101146 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:44:51.101155 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 03:44:51.101165 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:44:51.101175 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 03:44:51.101185 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:44:51.101196 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 03:44:51.101206 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 30 03:44:51.101215 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 30 03:44:51.101225 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 30 03:44:51.101235 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 30 03:44:51.101245 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:44:51.101261 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:44:51.101271 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 03:44:51.101281 kernel: ACPI: bus type drm_connector registered
Apr 30 03:44:51.101293 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 03:44:51.101302 kernel: loop: module loaded
Apr 30 03:44:51.101311 kernel: fuse: init (API version 7.39)
Apr 30 03:44:51.101335 systemd-journald[1150]: Collecting audit messages is disabled.
Apr 30 03:44:51.101356 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:44:51.101368 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 30 03:44:51.101378 systemd-journald[1150]: Journal started
Apr 30 03:44:51.101398 systemd-journald[1150]: Runtime Journal (/run/log/journal/b2ffb074a4cd4d6e960a039f383bc01c) is 4.8M, max 38.4M, 33.6M free.
Apr 30 03:44:50.769142 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 03:44:50.796351 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 30 03:44:50.797339 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 30 03:44:51.110052 systemd[1]: Stopped verity-setup.service.
Apr 30 03:44:51.110125 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:44:51.115081 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:44:51.115612 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 03:44:51.116377 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 03:44:51.117164 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 03:44:51.117817 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 03:44:51.118488 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 03:44:51.119247 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 03:44:51.119964 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 03:44:51.120711 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:44:51.121480 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 03:44:51.121653 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 03:44:51.122423 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:44:51.122532 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:44:51.123501 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 03:44:51.123658 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 03:44:51.124338 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:44:51.124489 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:44:51.125263 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 03:44:51.125368 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 03:44:51.126110 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 03:44:51.126267 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 03:44:51.126968 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:44:51.127778 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 03:44:51.128504 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 03:44:51.137480 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 03:44:51.143624 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 03:44:51.148064 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 03:44:51.149005 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 03:44:51.149111 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:44:51.150506 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 30 03:44:51.153152 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 30 03:44:51.159192 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 30 03:44:51.160213 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:44:51.169189 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 03:44:51.175193 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 03:44:51.176110 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 03:44:51.181010 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 03:44:51.181669 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 03:44:51.188274 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:44:51.190157 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 03:44:51.195362 systemd-journald[1150]: Time spent on flushing to /var/log/journal/b2ffb074a4cd4d6e960a039f383bc01c is 59.136ms for 1131 entries.
Apr 30 03:44:51.195362 systemd-journald[1150]: System Journal (/var/log/journal/b2ffb074a4cd4d6e960a039f383bc01c) is 8.0M, max 584.8M, 576.8M free.
Apr 30 03:44:51.276287 systemd-journald[1150]: Received client request to flush runtime journal.
Apr 30 03:44:51.276322 kernel: loop0: detected capacity change from 0 to 142488
Apr 30 03:44:51.276335 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 30 03:44:51.199126 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 03:44:51.203282 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 03:44:51.204799 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 03:44:51.206401 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 03:44:51.230107 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:44:51.239224 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 03:44:51.240895 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 03:44:51.243225 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 03:44:51.251326 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 30 03:44:51.263355 udevadm[1188]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 30 03:44:51.282624 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 03:44:51.286145 kernel: loop1: detected capacity change from 0 to 218376
Apr 30 03:44:51.295588 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:44:51.298005 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Apr 30 03:44:51.298018 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Apr 30 03:44:51.305650 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:44:51.309085 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 03:44:51.309588 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 30 03:44:51.317697 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 03:44:51.343061 kernel: loop2: detected capacity change from 0 to 8
Apr 30 03:44:51.357546 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 03:44:51.364137 kernel: loop3: detected capacity change from 0 to 140768
Apr 30 03:44:51.364346 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:44:51.382300 systemd-tmpfiles[1205]: ACLs are not supported, ignoring.
Apr 30 03:44:51.382671 systemd-tmpfiles[1205]: ACLs are not supported, ignoring.
Apr 30 03:44:51.388367 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:44:51.420096 kernel: loop4: detected capacity change from 0 to 142488
Apr 30 03:44:51.442091 kernel: loop5: detected capacity change from 0 to 218376
Apr 30 03:44:51.470087 kernel: loop6: detected capacity change from 0 to 8
Apr 30 03:44:51.474151 kernel: loop7: detected capacity change from 0 to 140768
Apr 30 03:44:51.493388 (sd-merge)[1209]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Apr 30 03:44:51.495954 (sd-merge)[1209]: Merged extensions into '/usr'.
Apr 30 03:44:51.502367 systemd[1]: Reloading requested from client PID 1180 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 30 03:44:51.502520 systemd[1]: Reloading...
Apr 30 03:44:51.593296 zram_generator::config[1238]: No configuration found.
Apr 30 03:44:51.697002 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:44:51.754194 systemd[1]: Reloading finished in 251 ms.
Apr 30 03:44:51.766592 ldconfig[1175]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 30 03:44:51.779532 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 30 03:44:51.780746 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 30 03:44:51.791256 systemd[1]: Starting ensure-sysext.service...
Apr 30 03:44:51.794891 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 03:44:51.806216 systemd[1]: Reloading requested from client PID 1278 ('systemctl') (unit ensure-sysext.service)...
Apr 30 03:44:51.806352 systemd[1]: Reloading...
Apr 30 03:44:51.821752 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 30 03:44:51.822446 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 30 03:44:51.823313 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 30 03:44:51.823633 systemd-tmpfiles[1279]: ACLs are not supported, ignoring.
Apr 30 03:44:51.823745 systemd-tmpfiles[1279]: ACLs are not supported, ignoring.
Apr 30 03:44:51.826692 systemd-tmpfiles[1279]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 03:44:51.826846 systemd-tmpfiles[1279]: Skipping /boot
Apr 30 03:44:51.837097 systemd-tmpfiles[1279]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 03:44:51.837203 systemd-tmpfiles[1279]: Skipping /boot
Apr 30 03:44:51.894055 zram_generator::config[1306]: No configuration found.
Apr 30 03:44:52.011177 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:44:52.064434 systemd[1]: Reloading finished in 257 ms.
Apr 30 03:44:52.078883 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 03:44:52.083453 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:44:52.095224 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 03:44:52.098174 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 30 03:44:52.103345 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 30 03:44:52.106103 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 03:44:52.109225 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:44:52.111183 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 30 03:44:52.117984 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:44:52.119246 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:44:52.126357 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:44:52.134156 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:44:52.138236 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:44:52.138806 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:44:52.138912 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:44:52.141529 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:44:52.141658 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:44:52.142540 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:44:52.142674 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:44:52.149853 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:44:52.150020 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:44:52.156312 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:44:52.167306 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:44:52.168258 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:44:52.172265 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 03:44:52.172922 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:44:52.174120 systemd-udevd[1357]: Using default interface naming scheme 'v255'.
Apr 30 03:44:52.182186 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:44:52.182403 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:44:52.189387 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 03:44:52.190715 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:44:52.190927 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:44:52.192258 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 03:44:52.192676 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 03:44:52.193957 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:44:52.194253 augenrules[1382]: No rules
Apr 30 03:44:52.194959 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:44:52.195915 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 30 03:44:52.196789 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:44:52.197306 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:44:52.205411 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 03:44:52.209795 systemd[1]: Finished ensure-sysext.service. Apr 30 03:44:52.212539 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 03:44:52.212672 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:44:52.215019 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:44:52.215287 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:44:52.222280 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 30 03:44:52.229092 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 03:44:52.235180 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 03:44:52.237085 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:44:52.242719 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:44:52.246851 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 03:44:52.261354 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 03:44:52.262955 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 03:44:52.265845 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 03:44:52.330108 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 30 03:44:52.368490 systemd-networkd[1397]: lo: Link UP Apr 30 03:44:52.368505 systemd-networkd[1397]: lo: Gained carrier Apr 30 03:44:52.377961 systemd-networkd[1397]: Enumeration completed Apr 30 03:44:52.378133 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:44:52.386201 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:44:52.386212 systemd-networkd[1397]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:44:52.387188 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 03:44:52.387897 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:44:52.387970 systemd-networkd[1397]: eth0: Link UP Apr 30 03:44:52.387974 systemd-networkd[1397]: eth0: Gained carrier Apr 30 03:44:52.387985 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:44:52.401061 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 03:44:52.402142 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 03:44:52.410638 systemd-resolved[1356]: Positive Trust Anchors: Apr 30 03:44:52.410654 systemd-resolved[1356]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:44:52.410684 systemd-resolved[1356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:44:52.420021 systemd-resolved[1356]: Using system hostname 'ci-4081-3-3-b-745f04f342'. Apr 30 03:44:52.425832 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:44:52.427093 systemd[1]: Reached target network.target - Network. Apr 30 03:44:52.427508 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:44:52.439061 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 30 03:44:52.449552 systemd-networkd[1397]: eth0: DHCPv4 address 157.180.64.98/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 30 03:44:52.450266 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection. Apr 30 03:44:52.454128 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 03:44:52.461061 kernel: ACPI: button: Power Button [PWRF] Apr 30 03:44:52.462976 systemd-networkd[1397]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:44:52.463071 systemd-networkd[1397]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:44:52.464108 systemd-networkd[1397]: eth1: Link UP Apr 30 03:44:52.464153 systemd-networkd[1397]: eth1: Gained carrier Apr 30 03:44:52.464201 systemd-networkd[1397]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:44:53.003473 systemd-resolved[1356]: Clock change detected. Flushing caches. Apr 30 03:44:53.003857 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1405) Apr 30 03:44:53.003971 systemd-timesyncd[1393]: Contacted time server 130.162.222.153:123 (0.flatcar.pool.ntp.org). Apr 30 03:44:53.004077 systemd-timesyncd[1393]: Initial clock synchronization to Wed 2025-04-30 03:44:53.003396 UTC. Apr 30 03:44:53.013398 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Apr 30 03:44:53.013458 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:44:53.013539 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:44:53.014316 systemd-networkd[1397]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 03:44:53.017834 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:44:53.019340 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:44:53.021895 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
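Both NICs above matched /usr/lib/systemd/network/zz-default.network only by interface name, which networkd flags as potentially unpredictable. Pinning the match to a stable attribute avoids the warning; a minimal sketch (the MAC address is a placeholder, not taken from this log):

    # /etc/systemd/network/10-eth0.network
    [Match]
    MACAddress=aa:bb:cc:dd:ee:ff
    [Network]
    DHCP=yes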
Apr 30 03:44:53.022556 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:44:53.022600 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 03:44:53.022614 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:44:53.047994 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:44:53.048216 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:44:53.049688 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Apr 30 03:44:53.054995 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Apr 30 03:44:53.057681 kernel: Console: switching to colour dummy device 80x25 Apr 30 03:44:53.060177 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Apr 30 03:44:53.060212 kernel: [drm] features: -context_init Apr 30 03:44:53.058835 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:44:53.059022 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:44:53.060610 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:44:53.061188 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:44:53.061426 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:44:53.061479 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:44:53.064717 kernel: [drm] number of scanouts: 1 Apr 30 03:44:53.065686 kernel: [drm] number of cap sets: 0 Apr 30 03:44:53.068682 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Apr 30 03:44:53.081242 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 30 03:44:53.081558 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 30 03:44:53.081747 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 30 03:44:53.088802 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Apr 30 03:44:53.088841 kernel: Console: switching to colour frame buffer device 160x50 Apr 30 03:44:53.099751 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Apr 30 03:44:53.113699 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Apr 30 03:44:53.116700 kernel: EDAC MC: Ver: 3.0.0 Apr 30 03:44:53.128820 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:44:53.143278 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 30 03:44:53.157058 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 03:44:53.165256 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:44:53.165473 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:44:53.174891 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:44:53.175593 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Apr 30 03:44:53.220096 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:44:53.310643 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 03:44:53.317036 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 03:44:53.346353 lvm[1466]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:44:53.388378 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 03:44:53.389854 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:44:53.390019 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:44:53.390293 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 03:44:53.390569 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 03:44:53.391006 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 03:44:53.391345 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 03:44:53.391529 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 03:44:53.391636 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 03:44:53.392211 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:44:53.393729 systemd[1]: Reached target timers.target - Timer Units. Apr 30 03:44:53.396364 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 03:44:53.399198 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 03:44:53.418930 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 03:44:53.423786 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 03:44:53.429293 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 03:44:53.430627 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:44:53.434134 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:44:53.436198 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:44:53.436468 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:44:53.440860 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 03:44:53.445330 lvm[1470]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:44:53.458016 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 03:44:53.467013 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 03:44:53.473358 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 03:44:53.479881 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 03:44:53.481598 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 03:44:53.486952 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 03:44:53.498899 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Apr 30 03:44:53.516992 coreos-metadata[1472]: Apr 30 03:44:53.514 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Apr 30 03:44:53.514859 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Apr 30 03:44:53.517719 jq[1474]: false Apr 30 03:44:53.521907 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 03:44:53.530911 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 03:44:53.533719 coreos-metadata[1472]: Apr 30 03:44:53.532 INFO Fetch successful Apr 30 03:44:53.533719 coreos-metadata[1472]: Apr 30 03:44:53.532 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Apr 30 03:44:53.539555 coreos-metadata[1472]: Apr 30 03:44:53.539 INFO Fetch successful Apr 30 03:44:53.540200 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 03:44:53.541204 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 03:44:53.543793 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 03:44:53.550379 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 03:44:53.556964 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 03:44:53.561754 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 03:44:53.579026 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 03:44:53.579197 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 03:44:53.584443 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 03:44:53.592890 jq[1492]: true Apr 30 03:44:53.584602 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 03:44:53.600275 dbus-daemon[1473]: [system] SELinux support is enabled Apr 30 03:44:53.600808 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 03:44:53.603700 extend-filesystems[1476]: Found loop4 Apr 30 03:44:53.603700 extend-filesystems[1476]: Found loop5 Apr 30 03:44:53.603700 extend-filesystems[1476]: Found loop6 Apr 30 03:44:53.603700 extend-filesystems[1476]: Found loop7 Apr 30 03:44:53.603700 extend-filesystems[1476]: Found sda Apr 30 03:44:53.603700 extend-filesystems[1476]: Found sda1 Apr 30 03:44:53.603700 extend-filesystems[1476]: Found sda2 Apr 30 03:44:53.603700 extend-filesystems[1476]: Found sda3 Apr 30 03:44:53.603700 extend-filesystems[1476]: Found usr Apr 30 03:44:53.603700 extend-filesystems[1476]: Found sda4 Apr 30 03:44:53.603700 extend-filesystems[1476]: Found sda6 Apr 30 03:44:53.603700 extend-filesystems[1476]: Found sda7 Apr 30 03:44:53.603700 extend-filesystems[1476]: Found sda9 Apr 30 03:44:53.603700 extend-filesystems[1476]: Checking size of /dev/sda9 Apr 30 03:44:53.642418 update_engine[1489]: I20250430 03:44:53.618700 1489 main.cc:92] Flatcar Update Engine starting Apr 30 03:44:53.642418 update_engine[1489]: I20250430 03:44:53.628014 1489 update_check_scheduler.cc:74] Next update check in 11m12s Apr 30 03:44:53.611788 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 03:44:53.618127 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
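The metadata agent above pulled its data from Hetzner's link-local endpoint; the same URLs it logged can be queried by hand from inside the guest, e.g.:

    # the endpoints coreos-metadata fetched above
    curl -s http://169.254.169.254/hetzner/v1/metadata
    curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks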
Apr 30 03:44:53.618167 (ntainerd)[1500]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 03:44:53.648945 extend-filesystems[1476]: Resized partition /dev/sda9 Apr 30 03:44:53.661910 extend-filesystems[1517]: resize2fs 1.47.1 (20-May-2024) Apr 30 03:44:53.653151 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 03:44:53.653181 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 03:44:53.657619 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 03:44:53.657637 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 03:44:53.659797 systemd[1]: Started update-engine.service - Update Engine. Apr 30 03:44:53.688009 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Apr 30 03:44:53.682257 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 03:44:53.690444 jq[1501]: true Apr 30 03:44:53.697539 tar[1499]: linux-amd64/LICENSE Apr 30 03:44:53.697539 tar[1499]: linux-amd64/helm Apr 30 03:44:53.709943 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1405) Apr 30 03:44:53.700818 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 03:44:53.706048 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 03:44:53.711174 systemd-logind[1485]: New seat seat0. Apr 30 03:44:53.713934 systemd-logind[1485]: Watching system buttons on /dev/input/event2 (Power Button) Apr 30 03:44:53.713948 systemd-logind[1485]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 30 03:44:53.714134 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 03:44:53.831546 locksmithd[1523]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 03:44:53.873872 bash[1545]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:44:53.875093 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 03:44:53.888275 systemd[1]: Starting sshkeys.service... Apr 30 03:44:53.922274 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 30 03:44:53.932596 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 30 03:44:53.949108 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Apr 30 03:44:53.972478 extend-filesystems[1517]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 30 03:44:53.972478 extend-filesystems[1517]: old_desc_blocks = 1, new_desc_blocks = 5 Apr 30 03:44:53.972478 extend-filesystems[1517]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. 
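extend-filesystems first resized the sda9 partition and then grew the ext4 filesystem on-line with resize2fs while root stayed mounted. Done manually, the equivalent is roughly this sketch (growpart comes from cloud-utils and may not be present on this image):

    growpart /dev/sda 9     # widen partition 9 to fill the disk
    resize2fs /dev/sda9     # ext4 grows on-line while mounted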
Apr 30 03:44:53.989481 extend-filesystems[1476]: Resized filesystem in /dev/sda9 Apr 30 03:44:53.989481 extend-filesystems[1476]: Found sr0 Apr 30 03:44:53.995880 containerd[1500]: time="2025-04-30T03:44:53.987862932Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 03:44:53.997445 coreos-metadata[1555]: Apr 30 03:44:53.980 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Apr 30 03:44:53.997445 coreos-metadata[1555]: Apr 30 03:44:53.988 INFO Fetch successful Apr 30 03:44:53.983155 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 03:44:53.983329 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 03:44:53.996444 unknown[1555]: wrote ssh authorized keys file for user: core Apr 30 03:44:54.026578 update-ssh-keys[1562]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:44:54.028796 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 30 03:44:54.033137 systemd[1]: Finished sshkeys.service. Apr 30 03:44:54.074995 containerd[1500]: time="2025-04-30T03:44:54.074693146Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:44:54.078294 containerd[1500]: time="2025-04-30T03:44:54.078256508Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:44:54.078294 containerd[1500]: time="2025-04-30T03:44:54.078288147Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 03:44:54.078361 containerd[1500]: time="2025-04-30T03:44:54.078303987Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 03:44:54.078563 containerd[1500]: time="2025-04-30T03:44:54.078480608Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 03:44:54.078563 containerd[1500]: time="2025-04-30T03:44:54.078501788Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 03:44:54.078610 containerd[1500]: time="2025-04-30T03:44:54.078566039Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:44:54.078610 containerd[1500]: time="2025-04-30T03:44:54.078578302Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:44:54.079196 containerd[1500]: time="2025-04-30T03:44:54.078755554Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:44:54.079196 containerd[1500]: time="2025-04-30T03:44:54.078776533Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 03:44:54.079196 containerd[1500]: time="2025-04-30T03:44:54.078788997Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:44:54.079196 containerd[1500]: time="2025-04-30T03:44:54.078797683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 03:44:54.079196 containerd[1500]: time="2025-04-30T03:44:54.078867013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:44:54.079196 containerd[1500]: time="2025-04-30T03:44:54.079023807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:44:54.079196 containerd[1500]: time="2025-04-30T03:44:54.079109938Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:44:54.079196 containerd[1500]: time="2025-04-30T03:44:54.079121220Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 03:44:54.079196 containerd[1500]: time="2025-04-30T03:44:54.079185971Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 03:44:54.079365 containerd[1500]: time="2025-04-30T03:44:54.079220846Z" level=info msg="metadata content store policy set" policy=shared Apr 30 03:44:54.085950 containerd[1500]: time="2025-04-30T03:44:54.085917544Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 03:44:54.086028 containerd[1500]: time="2025-04-30T03:44:54.085961837Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 03:44:54.086028 containerd[1500]: time="2025-04-30T03:44:54.085977937Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 03:44:54.086028 containerd[1500]: time="2025-04-30T03:44:54.085991162Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 03:44:54.086028 containerd[1500]: time="2025-04-30T03:44:54.086006731Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 03:44:54.086302 containerd[1500]: time="2025-04-30T03:44:54.086119893Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 03:44:54.086352 containerd[1500]: time="2025-04-30T03:44:54.086324948Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 03:44:54.086455 containerd[1500]: time="2025-04-30T03:44:54.086434273Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 03:44:54.086479 containerd[1500]: time="2025-04-30T03:44:54.086454251Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 03:44:54.086479 containerd[1500]: time="2025-04-30T03:44:54.086465712Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 03:44:54.086479 containerd[1500]: time="2025-04-30T03:44:54.086477203Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Apr 30 03:44:54.086534 containerd[1500]: time="2025-04-30T03:44:54.086489566Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 03:44:54.086534 containerd[1500]: time="2025-04-30T03:44:54.086501008Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 03:44:54.086534 containerd[1500]: time="2025-04-30T03:44:54.086513462Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 03:44:54.086534 containerd[1500]: time="2025-04-30T03:44:54.086530634Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 03:44:54.086596 containerd[1500]: time="2025-04-30T03:44:54.086542336Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 03:44:54.086596 containerd[1500]: time="2025-04-30T03:44:54.086558787Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 03:44:54.086596 containerd[1500]: time="2025-04-30T03:44:54.086571981Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 03:44:54.086596 containerd[1500]: time="2025-04-30T03:44:54.086590115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 03:44:54.086663 containerd[1500]: time="2025-04-30T03:44:54.086601426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 03:44:54.086663 containerd[1500]: time="2025-04-30T03:44:54.086612998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 03:44:54.086663 containerd[1500]: time="2025-04-30T03:44:54.086624700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 03:44:54.086663 containerd[1500]: time="2025-04-30T03:44:54.086635159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 03:44:54.086663 containerd[1500]: time="2025-04-30T03:44:54.086650228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 03:44:54.087004 containerd[1500]: time="2025-04-30T03:44:54.086660698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 03:44:54.087004 containerd[1500]: time="2025-04-30T03:44:54.086690022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 03:44:54.087004 containerd[1500]: time="2025-04-30T03:44:54.086702005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 03:44:54.087004 containerd[1500]: time="2025-04-30T03:44:54.086714779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 03:44:54.087004 containerd[1500]: time="2025-04-30T03:44:54.086724687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 03:44:54.087004 containerd[1500]: time="2025-04-30T03:44:54.086735518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Apr 30 03:44:54.087004 containerd[1500]: time="2025-04-30T03:44:54.086748812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 03:44:54.087004 containerd[1500]: time="2025-04-30T03:44:54.086762308Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 03:44:54.087004 containerd[1500]: time="2025-04-30T03:44:54.086781544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 03:44:54.087004 containerd[1500]: time="2025-04-30T03:44:54.086792645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 03:44:54.087004 containerd[1500]: time="2025-04-30T03:44:54.086802874Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 03:44:54.087004 containerd[1500]: time="2025-04-30T03:44:54.086838611Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 03:44:54.087004 containerd[1500]: time="2025-04-30T03:44:54.086852066Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 03:44:54.087004 containerd[1500]: time="2025-04-30T03:44:54.086861123Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 03:44:54.087232 containerd[1500]: time="2025-04-30T03:44:54.086871122Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 03:44:54.087232 containerd[1500]: time="2025-04-30T03:44:54.086879638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 03:44:54.087232 containerd[1500]: time="2025-04-30T03:44:54.086890338Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 03:44:54.087232 containerd[1500]: time="2025-04-30T03:44:54.086898884Z" level=info msg="NRI interface is disabled by configuration." Apr 30 03:44:54.087232 containerd[1500]: time="2025-04-30T03:44:54.086907520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 30 03:44:54.087321 containerd[1500]: time="2025-04-30T03:44:54.087153151Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 03:44:54.087321 containerd[1500]: time="2025-04-30T03:44:54.087200851Z" level=info msg="Connect containerd service" Apr 30 03:44:54.087321 containerd[1500]: time="2025-04-30T03:44:54.087232099Z" level=info msg="using legacy CRI server" Apr 30 03:44:54.087321 containerd[1500]: time="2025-04-30T03:44:54.087238130Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 03:44:54.087321 containerd[1500]: time="2025-04-30T03:44:54.087320444Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 03:44:54.099611 containerd[1500]: time="2025-04-30T03:44:54.098870133Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 03:44:54.101165 
containerd[1500]: time="2025-04-30T03:44:54.101120382Z" level=info msg="Start subscribing containerd event" Apr 30 03:44:54.101351 containerd[1500]: time="2025-04-30T03:44:54.101286193Z" level=info msg="Start recovering state" Apr 30 03:44:54.101412 containerd[1500]: time="2025-04-30T03:44:54.101377094Z" level=info msg="Start event monitor" Apr 30 03:44:54.101688 containerd[1500]: time="2025-04-30T03:44:54.101534078Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 03:44:54.101688 containerd[1500]: time="2025-04-30T03:44:54.101599551Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 03:44:54.101731 containerd[1500]: time="2025-04-30T03:44:54.101688237Z" level=info msg="Start snapshots syncer" Apr 30 03:44:54.101731 containerd[1500]: time="2025-04-30T03:44:54.101700430Z" level=info msg="Start cni network conf syncer for default" Apr 30 03:44:54.101731 containerd[1500]: time="2025-04-30T03:44:54.101707433Z" level=info msg="Start streaming server" Apr 30 03:44:54.103785 containerd[1500]: time="2025-04-30T03:44:54.101853036Z" level=info msg="containerd successfully booted in 0.126409s" Apr 30 03:44:54.101858 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 03:44:54.146584 sshd_keygen[1490]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 03:44:54.165999 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 03:44:54.186949 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 03:44:54.195466 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 03:44:54.195911 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 03:44:54.208510 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 03:44:54.221250 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 03:44:54.231097 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 03:44:54.242225 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 03:44:54.244790 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 03:44:54.353877 systemd-networkd[1397]: eth0: Gained IPv6LL Apr 30 03:44:54.358922 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 03:44:54.363635 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 03:44:54.377811 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:44:54.380092 tar[1499]: linux-amd64/README.md Apr 30 03:44:54.388801 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 03:44:54.412231 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 03:44:54.417171 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 03:44:54.418112 systemd-networkd[1397]: eth1: Gained IPv6LL Apr 30 03:44:55.565900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:44:55.569831 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 03:44:55.572802 systemd[1]: Startup finished in 1.538s (kernel) + 6.391s (initrd) + 4.931s (userspace) = 12.861s. 
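The CRI configuration dumped a few entries back runs runc via io.containerd.runc.v2 with SystemdCgroup:true. Expressed as the containerd config.toml fragment that would produce the same setting, a minimal sketch (version 2 schema assumed):

    version = 2
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true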
Apr 30 03:44:55.575533 (kubelet)[1604]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:44:56.491411 kubelet[1604]: E0430 03:44:56.491241 1604 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:44:56.495111 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:44:56.495334 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:44:56.495855 systemd[1]: kubelet.service: Consumed 1.472s CPU time. Apr 30 03:45:06.747551 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 03:45:06.756339 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:45:06.918212 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:45:06.921418 (kubelet)[1622]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:45:06.996596 kubelet[1622]: E0430 03:45:06.996481 1622 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:45:07.002896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:45:07.003211 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:45:17.254125 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 03:45:17.260968 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:45:17.412838 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:45:17.417411 (kubelet)[1637]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:45:17.484290 kubelet[1637]: E0430 03:45:17.484201 1637 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:45:17.488891 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:45:17.489160 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:45:27.740746 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 30 03:45:27.748095 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:45:27.920057 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
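Every kubelet attempt above fails identically: /var/lib/kubelet/config.yaml does not exist yet, because that file is normally written by kubeadm init or kubeadm join, which has not run on this node. For orientation only, a minimal sketch of what the generated KubeletConfiguration contains (values illustrative, not from this host):

    # /var/lib/kubelet/config.yaml -- normally produced by kubeadm, not written by hand
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd            # matches SystemdCgroup=true in containerd above
    staticPodPath: /etc/kubernetes/manifests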
Apr 30 03:45:27.924315 (kubelet)[1651]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:45:27.996997 kubelet[1651]: E0430 03:45:27.996766 1651 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:45:28.002218 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:45:28.002574 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:45:38.252953 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 30 03:45:38.257880 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:45:38.445017 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:45:38.445452 (kubelet)[1667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:45:38.510645 kubelet[1667]: E0430 03:45:38.510409 1667 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:45:38.515028 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:45:38.515240 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:45:38.757623 update_engine[1489]: I20250430 03:45:38.757367 1489 update_attempter.cc:509] Updating boot flags... Apr 30 03:45:38.815775 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1683) Apr 30 03:45:38.873712 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1685) Apr 30 03:45:38.917749 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1685) Apr 30 03:45:48.646295 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 30 03:45:48.654204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:45:48.782920 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:45:48.785633 (kubelet)[1703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:45:48.833132 kubelet[1703]: E0430 03:45:48.833027 1703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:45:48.835570 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:45:48.835961 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:45:58.895919 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 30 03:45:58.901831 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 30 03:45:59.016303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:45:59.027970 (kubelet)[1719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:45:59.066202 kubelet[1719]: E0430 03:45:59.066105 1719 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:45:59.067954 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:45:59.068129 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:46:09.146115 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Apr 30 03:46:09.157952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:46:09.281151 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:46:09.293139 (kubelet)[1735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:46:09.331364 kubelet[1735]: E0430 03:46:09.331288 1735 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:46:09.334463 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:46:09.334613 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:46:19.396634 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Apr 30 03:46:19.404122 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:46:19.540147 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:46:19.543730 (kubelet)[1751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:46:19.581046 kubelet[1751]: E0430 03:46:19.580963 1751 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:46:19.584080 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:46:19.584349 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:46:29.646452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Apr 30 03:46:29.658412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:46:29.832328 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 03:46:29.837338 (kubelet)[1766]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:46:29.891925 kubelet[1766]: E0430 03:46:29.891805 1766 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:46:29.896984 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:46:29.897148 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:46:37.821345 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 03:46:37.826970 systemd[1]: Started sshd@0-157.180.64.98:22-139.178.68.195:46564.service - OpenSSH per-connection server daemon (139.178.68.195:46564). Apr 30 03:46:38.800740 sshd[1774]: Accepted publickey for core from 139.178.68.195 port 46564 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:46:38.803370 sshd[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:46:38.819845 systemd-logind[1485]: New session 1 of user core. Apr 30 03:46:38.821838 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 03:46:38.840184 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 03:46:38.856188 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 03:46:38.862986 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 03:46:38.878993 (systemd)[1778]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 03:46:39.035599 systemd[1778]: Queued start job for default target default.target. Apr 30 03:46:39.042611 systemd[1778]: Created slice app.slice - User Application Slice. Apr 30 03:46:39.042636 systemd[1778]: Reached target paths.target - Paths. Apr 30 03:46:39.042648 systemd[1778]: Reached target timers.target - Timers. Apr 30 03:46:39.043882 systemd[1778]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 03:46:39.056519 systemd[1778]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 03:46:39.056630 systemd[1778]: Reached target sockets.target - Sockets. Apr 30 03:46:39.056643 systemd[1778]: Reached target basic.target - Basic System. Apr 30 03:46:39.056705 systemd[1778]: Reached target default.target - Main User Target. Apr 30 03:46:39.056735 systemd[1778]: Startup finished in 167ms. Apr 30 03:46:39.057174 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 03:46:39.065843 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 03:46:39.753283 systemd[1]: Started sshd@1-157.180.64.98:22-139.178.68.195:46568.service - OpenSSH per-connection server daemon (139.178.68.195:46568). Apr 30 03:46:40.145922 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Apr 30 03:46:40.152968 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:46:40.292750 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 03:46:40.295932 (kubelet)[1798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:46:40.345236 kubelet[1798]: E0430 03:46:40.345148 1798 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:46:40.349009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:46:40.349148 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:46:40.749481 sshd[1789]: Accepted publickey for core from 139.178.68.195 port 46568 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:46:40.752490 sshd[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:46:40.760356 systemd-logind[1485]: New session 2 of user core. Apr 30 03:46:40.771140 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 03:46:41.429384 sshd[1789]: pam_unix(sshd:session): session closed for user core Apr 30 03:46:41.436437 systemd-logind[1485]: Session 2 logged out. Waiting for processes to exit. Apr 30 03:46:41.437908 systemd[1]: sshd@1-157.180.64.98:22-139.178.68.195:46568.service: Deactivated successfully. Apr 30 03:46:41.441484 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 03:46:41.443858 systemd-logind[1485]: Removed session 2. Apr 30 03:46:41.604251 systemd[1]: Started sshd@2-157.180.64.98:22-139.178.68.195:46576.service - OpenSSH per-connection server daemon (139.178.68.195:46576). Apr 30 03:46:42.594142 sshd[1811]: Accepted publickey for core from 139.178.68.195 port 46576 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:46:42.595953 sshd[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:46:42.602052 systemd-logind[1485]: New session 3 of user core. Apr 30 03:46:42.612973 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 03:46:43.268115 sshd[1811]: pam_unix(sshd:session): session closed for user core Apr 30 03:46:43.272817 systemd[1]: sshd@2-157.180.64.98:22-139.178.68.195:46576.service: Deactivated successfully. Apr 30 03:46:43.275275 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 03:46:43.276000 systemd-logind[1485]: Session 3 logged out. Waiting for processes to exit. Apr 30 03:46:43.277114 systemd-logind[1485]: Removed session 3. Apr 30 03:46:43.437957 systemd[1]: Started sshd@3-157.180.64.98:22-139.178.68.195:46586.service - OpenSSH per-connection server daemon (139.178.68.195:46586). Apr 30 03:46:44.408514 sshd[1818]: Accepted publickey for core from 139.178.68.195 port 46586 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:46:44.410116 sshd[1818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:46:44.416744 systemd-logind[1485]: New session 4 of user core. Apr 30 03:46:44.427923 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 03:46:45.082415 sshd[1818]: pam_unix(sshd:session): session closed for user core Apr 30 03:46:45.087103 systemd-logind[1485]: Session 4 logged out. Waiting for processes to exit. Apr 30 03:46:45.088109 systemd[1]: sshd@3-157.180.64.98:22-139.178.68.195:46586.service: Deactivated successfully. 
Apr 30 03:46:45.090584 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 03:46:45.092027 systemd-logind[1485]: Removed session 4. Apr 30 03:46:45.253475 systemd[1]: Started sshd@4-157.180.64.98:22-139.178.68.195:55996.service - OpenSSH per-connection server daemon (139.178.68.195:55996). Apr 30 03:46:46.223613 sshd[1825]: Accepted publickey for core from 139.178.68.195 port 55996 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:46:46.226977 sshd[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:46:46.234083 systemd-logind[1485]: New session 5 of user core. Apr 30 03:46:46.242818 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 03:46:46.755668 sudo[1828]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 03:46:46.756253 sudo[1828]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:46:46.773706 sudo[1828]: pam_unix(sudo:session): session closed for user root Apr 30 03:46:46.932141 sshd[1825]: pam_unix(sshd:session): session closed for user core Apr 30 03:46:46.938145 systemd[1]: sshd@4-157.180.64.98:22-139.178.68.195:55996.service: Deactivated successfully. Apr 30 03:46:46.941364 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 03:46:46.944385 systemd-logind[1485]: Session 5 logged out. Waiting for processes to exit. Apr 30 03:46:46.946495 systemd-logind[1485]: Removed session 5. Apr 30 03:46:47.106594 systemd[1]: Started sshd@5-157.180.64.98:22-139.178.68.195:55998.service - OpenSSH per-connection server daemon (139.178.68.195:55998). Apr 30 03:46:48.090212 sshd[1833]: Accepted publickey for core from 139.178.68.195 port 55998 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:46:48.092213 sshd[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:46:48.098152 systemd-logind[1485]: New session 6 of user core. Apr 30 03:46:48.103975 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 03:46:48.612018 sudo[1837]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 03:46:48.612552 sudo[1837]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:46:48.616876 sudo[1837]: pam_unix(sudo:session): session closed for user root Apr 30 03:46:48.623009 sudo[1836]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 30 03:46:48.623300 sudo[1836]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:46:48.646130 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 30 03:46:48.651359 auditctl[1840]: No rules Apr 30 03:46:48.651997 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 03:46:48.652278 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 30 03:46:48.659182 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:46:48.706996 augenrules[1858]: No rules Apr 30 03:46:48.708142 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:46:48.710069 sudo[1836]: pam_unix(sudo:session): session closed for user root Apr 30 03:46:48.868956 sshd[1833]: pam_unix(sshd:session): session closed for user core Apr 30 03:46:48.873921 systemd[1]: sshd@5-157.180.64.98:22-139.178.68.195:55998.service: Deactivated successfully. 
Apr 30 03:46:48.876192 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 03:46:48.878171 systemd-logind[1485]: Session 6 logged out. Waiting for processes to exit. Apr 30 03:46:48.880042 systemd-logind[1485]: Removed session 6. Apr 30 03:46:49.036572 systemd[1]: Started sshd@6-157.180.64.98:22-139.178.68.195:56012.service - OpenSSH per-connection server daemon (139.178.68.195:56012). Apr 30 03:46:50.010908 sshd[1866]: Accepted publickey for core from 139.178.68.195 port 56012 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:46:50.012800 sshd[1866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:46:50.017772 systemd-logind[1485]: New session 7 of user core. Apr 30 03:46:50.026859 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 03:46:50.396295 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Apr 30 03:46:50.404182 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:46:50.526789 sudo[1872]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 03:46:50.527037 sudo[1872]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:46:50.562998 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:46:50.563403 (kubelet)[1881]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:46:50.608074 kubelet[1881]: E0430 03:46:50.608018 1881 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:46:50.610495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:46:50.610751 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:46:50.830295 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 03:46:50.831263 (dockerd)[1900]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 03:46:51.097362 systemd[1]: Started sshd@7-157.180.64.98:22-92.118.39.57:47724.service - OpenSSH per-connection server daemon (92.118.39.57:47724). Apr 30 03:46:51.154030 sshd[1906]: Connection closed by 92.118.39.57 port 47724 Apr 30 03:46:51.157389 systemd[1]: sshd@7-157.180.64.98:22-92.118.39.57:47724.service: Deactivated successfully. Apr 30 03:46:51.266209 dockerd[1900]: time="2025-04-30T03:46:51.265725635Z" level=info msg="Starting up" Apr 30 03:46:51.387767 systemd[1]: var-lib-docker-metacopy\x2dcheck1607034756-merged.mount: Deactivated successfully. Apr 30 03:46:51.426775 dockerd[1900]: time="2025-04-30T03:46:51.426320714Z" level=info msg="Loading containers: start." Apr 30 03:46:51.573712 kernel: Initializing XFRM netlink socket Apr 30 03:46:51.674251 systemd-networkd[1397]: docker0: Link UP Apr 30 03:46:51.693271 dockerd[1900]: time="2025-04-30T03:46:51.693195433Z" level=info msg="Loading containers: done." 
Apr 30 03:46:51.715492 dockerd[1900]: time="2025-04-30T03:46:51.715334938Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 03:46:51.715787 dockerd[1900]: time="2025-04-30T03:46:51.715571712Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 30 03:46:51.715851 dockerd[1900]: time="2025-04-30T03:46:51.715783079Z" level=info msg="Daemon has completed initialization" Apr 30 03:46:51.756595 dockerd[1900]: time="2025-04-30T03:46:51.756513220Z" level=info msg="API listen on /run/docker.sock" Apr 30 03:46:51.757083 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 03:46:53.052233 containerd[1500]: time="2025-04-30T03:46:53.052066307Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" Apr 30 03:46:53.643499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4287663665.mount: Deactivated successfully. Apr 30 03:46:54.597650 containerd[1500]: time="2025-04-30T03:46:54.597586517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:54.598981 containerd[1500]: time="2025-04-30T03:46:54.598941906Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682973" Apr 30 03:46:54.600345 containerd[1500]: time="2025-04-30T03:46:54.600271167Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:54.602983 containerd[1500]: time="2025-04-30T03:46:54.602936722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:54.604527 containerd[1500]: time="2025-04-30T03:46:54.603981831Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 1.55187587s" Apr 30 03:46:54.604527 containerd[1500]: time="2025-04-30T03:46:54.604017498Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" Apr 30 03:46:54.604908 containerd[1500]: time="2025-04-30T03:46:54.604758497Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" Apr 30 03:46:55.882193 containerd[1500]: time="2025-04-30T03:46:55.882120133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:55.883808 containerd[1500]: time="2025-04-30T03:46:55.883759385Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779611" Apr 30 03:46:55.885294 containerd[1500]: time="2025-04-30T03:46:55.885245610Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:55.888856 containerd[1500]: time="2025-04-30T03:46:55.888816652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:55.890622 containerd[1500]: time="2025-04-30T03:46:55.890314679Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 1.285509825s" Apr 30 03:46:55.890622 containerd[1500]: time="2025-04-30T03:46:55.890375914Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" Apr 30 03:46:55.891805 containerd[1500]: time="2025-04-30T03:46:55.891760588Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" Apr 30 03:46:57.020715 containerd[1500]: time="2025-04-30T03:46:57.020641081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:57.022287 containerd[1500]: time="2025-04-30T03:46:57.022239276Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169960" Apr 30 03:46:57.025552 containerd[1500]: time="2025-04-30T03:46:57.025266579Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:57.029225 containerd[1500]: time="2025-04-30T03:46:57.029189702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:57.030485 containerd[1500]: time="2025-04-30T03:46:57.030458590Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.138659699s" Apr 30 03:46:57.030539 containerd[1500]: time="2025-04-30T03:46:57.030488526Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" Apr 30 03:46:57.031570 containerd[1500]: time="2025-04-30T03:46:57.031550155Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" Apr 30 03:46:58.085223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1176972557.mount: Deactivated successfully. 
Apr 30 03:46:58.466471 containerd[1500]: time="2025-04-30T03:46:58.466356229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:58.467588 containerd[1500]: time="2025-04-30T03:46:58.467505443Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917884" Apr 30 03:46:58.468479 containerd[1500]: time="2025-04-30T03:46:58.468412102Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:58.471185 containerd[1500]: time="2025-04-30T03:46:58.471142939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:58.472218 containerd[1500]: time="2025-04-30T03:46:58.471868098Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 1.44028971s" Apr 30 03:46:58.472218 containerd[1500]: time="2025-04-30T03:46:58.471909455Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" Apr 30 03:46:58.472524 containerd[1500]: time="2025-04-30T03:46:58.472474083Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Apr 30 03:46:58.968564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount624201441.mount: Deactivated successfully. 
Apr 30 03:46:59.813625 containerd[1500]: time="2025-04-30T03:46:59.813550282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:59.814904 containerd[1500]: time="2025-04-30T03:46:59.814856600Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335" Apr 30 03:46:59.815724 containerd[1500]: time="2025-04-30T03:46:59.815640148Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:59.818823 containerd[1500]: time="2025-04-30T03:46:59.818775575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:59.820204 containerd[1500]: time="2025-04-30T03:46:59.820076793Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.347567734s" Apr 30 03:46:59.820204 containerd[1500]: time="2025-04-30T03:46:59.820137687Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Apr 30 03:46:59.823101 containerd[1500]: time="2025-04-30T03:46:59.823067097Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 30 03:47:00.307197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1503376240.mount: Deactivated successfully. 
Apr 30 03:47:00.317361 containerd[1500]: time="2025-04-30T03:47:00.317298865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:00.318487 containerd[1500]: time="2025-04-30T03:47:00.318443221Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Apr 30 03:47:00.320187 containerd[1500]: time="2025-04-30T03:47:00.320141332Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:00.323642 containerd[1500]: time="2025-04-30T03:47:00.323591609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:00.325384 containerd[1500]: time="2025-04-30T03:47:00.324705807Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 501.567727ms" Apr 30 03:47:00.325384 containerd[1500]: time="2025-04-30T03:47:00.324749429Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 30 03:47:00.325697 containerd[1500]: time="2025-04-30T03:47:00.325651550Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Apr 30 03:47:00.646033 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Apr 30 03:47:00.652525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:47:00.786180 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:47:00.790242 (kubelet)[2177]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:47:00.822526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3050358354.mount: Deactivated successfully. Apr 30 03:47:00.844431 kubelet[2177]: E0430 03:47:00.844359 2177 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:47:00.848139 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:47:00.848308 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 30 03:47:02.384774 containerd[1500]: time="2025-04-30T03:47:02.384684460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:02.386235 containerd[1500]: time="2025-04-30T03:47:02.386190082Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551430" Apr 30 03:47:02.386929 containerd[1500]: time="2025-04-30T03:47:02.386880336Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:02.390910 containerd[1500]: time="2025-04-30T03:47:02.390514155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:02.393080 containerd[1500]: time="2025-04-30T03:47:02.393048426Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.06725973s" Apr 30 03:47:02.393320 containerd[1500]: time="2025-04-30T03:47:02.393159112Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Apr 30 03:47:05.026256 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:47:05.033010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:47:05.076462 systemd[1]: Reloading requested from client PID 2266 ('systemctl') (unit session-7.scope)... Apr 30 03:47:05.076614 systemd[1]: Reloading... Apr 30 03:47:05.179710 zram_generator::config[2303]: No configuration found. Apr 30 03:47:05.308662 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:47:05.395875 systemd[1]: Reloading finished in 318 ms. Apr 30 03:47:05.441218 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 30 03:47:05.441310 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 30 03:47:05.441596 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:47:05.446984 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:47:05.584915 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:47:05.586449 (kubelet)[2360]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:47:05.648377 kubelet[2360]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:47:05.648377 kubelet[2360]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Apr 30 03:47:05.648377 kubelet[2360]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:47:05.648884 kubelet[2360]: I0430 03:47:05.648479 2360 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:47:05.826345 kubelet[2360]: I0430 03:47:05.826269 2360 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Apr 30 03:47:05.826345 kubelet[2360]: I0430 03:47:05.826312 2360 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:47:05.826846 kubelet[2360]: I0430 03:47:05.826796 2360 server.go:954] "Client rotation is on, will bootstrap in background" Apr 30 03:47:05.883528 kubelet[2360]: I0430 03:47:05.882760 2360 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:47:05.886174 kubelet[2360]: E0430 03:47:05.886096 2360 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://157.180.64.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 157.180.64.98:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:47:05.901691 kubelet[2360]: E0430 03:47:05.900858 2360 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 03:47:05.901691 kubelet[2360]: I0430 03:47:05.900897 2360 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 03:47:05.906842 kubelet[2360]: I0430 03:47:05.906824 2360 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 03:47:05.911265 kubelet[2360]: I0430 03:47:05.911220 2360 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:47:05.911519 kubelet[2360]: I0430 03:47:05.911263 2360 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-b-745f04f342","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 03:47:05.914016 kubelet[2360]: I0430 03:47:05.913984 2360 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 03:47:05.914016 kubelet[2360]: I0430 03:47:05.914011 2360 container_manager_linux.go:304] "Creating device plugin manager" Apr 30 03:47:05.914184 kubelet[2360]: I0430 03:47:05.914166 2360 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:47:05.920172 kubelet[2360]: I0430 03:47:05.920151 2360 kubelet.go:446] "Attempting to sync node with API server" Apr 30 03:47:05.920219 kubelet[2360]: I0430 03:47:05.920178 2360 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:47:05.920219 kubelet[2360]: I0430 03:47:05.920201 2360 kubelet.go:352] "Adding apiserver pod source" Apr 30 03:47:05.920219 kubelet[2360]: I0430 03:47:05.920215 2360 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:47:05.930484 kubelet[2360]: W0430 03:47:05.930413 2360 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://157.180.64.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-b-745f04f342&limit=500&resourceVersion=0": dial tcp 157.180.64.98:6443: connect: connection refused Apr 30 03:47:05.930991 kubelet[2360]: E0430 03:47:05.930656 2360 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://157.180.64.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-b-745f04f342&limit=500&resourceVersion=0\": dial tcp 157.180.64.98:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:47:05.930991 
kubelet[2360]: W0430 03:47:05.930732 2360 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://157.180.64.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 157.180.64.98:6443: connect: connection refused Apr 30 03:47:05.930991 kubelet[2360]: E0430 03:47:05.930767 2360 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://157.180.64.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 157.180.64.98:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:47:05.930991 kubelet[2360]: I0430 03:47:05.930904 2360 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:47:05.934708 kubelet[2360]: I0430 03:47:05.934609 2360 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:47:05.935545 kubelet[2360]: W0430 03:47:05.935460 2360 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 30 03:47:05.936190 kubelet[2360]: I0430 03:47:05.936171 2360 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 30 03:47:05.936254 kubelet[2360]: I0430 03:47:05.936202 2360 server.go:1287] "Started kubelet" Apr 30 03:47:05.937431 kubelet[2360]: I0430 03:47:05.936575 2360 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:47:05.938568 kubelet[2360]: I0430 03:47:05.938052 2360 server.go:490] "Adding debug handlers to kubelet server" Apr 30 03:47:05.939949 kubelet[2360]: I0430 03:47:05.939690 2360 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:47:05.939949 kubelet[2360]: I0430 03:47:05.939896 2360 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:47:05.943482 kubelet[2360]: I0430 03:47:05.943469 2360 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:47:05.944358 kubelet[2360]: E0430 03:47:05.941061 2360 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://157.180.64.98:6443/api/v1/namespaces/default/events\": dial tcp 157.180.64.98:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-b-745f04f342.183afbf2d525fe63 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-b-745f04f342,UID:ci-4081-3-3-b-745f04f342,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-b-745f04f342,},FirstTimestamp:2025-04-30 03:47:05.936182883 +0000 UTC m=+0.346129847,LastTimestamp:2025-04-30 03:47:05.936182883 +0000 UTC m=+0.346129847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-b-745f04f342,}" Apr 30 03:47:05.945130 kubelet[2360]: I0430 03:47:05.944636 2360 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 03:47:05.948044 kubelet[2360]: E0430 03:47:05.948018 2360 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-3-b-745f04f342\" 
not found" Apr 30 03:47:05.948380 kubelet[2360]: I0430 03:47:05.948370 2360 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 30 03:47:05.950430 kubelet[2360]: I0430 03:47:05.950126 2360 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:47:05.950430 kubelet[2360]: I0430 03:47:05.950184 2360 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:47:05.953163 kubelet[2360]: W0430 03:47:05.953111 2360 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://157.180.64.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.180.64.98:6443: connect: connection refused Apr 30 03:47:05.953225 kubelet[2360]: E0430 03:47:05.953175 2360 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://157.180.64.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 157.180.64.98:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:47:05.953593 kubelet[2360]: I0430 03:47:05.953323 2360 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:47:05.953593 kubelet[2360]: I0430 03:47:05.953417 2360 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:47:05.956756 kubelet[2360]: E0430 03:47:05.956730 2360 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.64.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-b-745f04f342?timeout=10s\": dial tcp 157.180.64.98:6443: connect: connection refused" interval="200ms" Apr 30 03:47:05.959005 kubelet[2360]: E0430 03:47:05.958979 2360 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:47:05.959309 kubelet[2360]: I0430 03:47:05.959106 2360 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:47:05.973013 kubelet[2360]: I0430 03:47:05.972952 2360 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:47:05.974692 kubelet[2360]: I0430 03:47:05.974478 2360 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 03:47:05.974692 kubelet[2360]: I0430 03:47:05.974498 2360 status_manager.go:227] "Starting to sync pod status with apiserver" Apr 30 03:47:05.974692 kubelet[2360]: I0430 03:47:05.974517 2360 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 30 03:47:05.974692 kubelet[2360]: I0430 03:47:05.974527 2360 kubelet.go:2388] "Starting kubelet main sync loop" Apr 30 03:47:05.974692 kubelet[2360]: E0430 03:47:05.974576 2360 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:47:05.982738 kubelet[2360]: W0430 03:47:05.982713 2360 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://157.180.64.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.180.64.98:6443: connect: connection refused Apr 30 03:47:05.982887 kubelet[2360]: E0430 03:47:05.982871 2360 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://157.180.64.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 157.180.64.98:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:47:05.984951 kubelet[2360]: I0430 03:47:05.984941 2360 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 30 03:47:05.985037 kubelet[2360]: I0430 03:47:05.985030 2360 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 30 03:47:05.985250 kubelet[2360]: I0430 03:47:05.985074 2360 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:47:05.988083 kubelet[2360]: I0430 03:47:05.988073 2360 policy_none.go:49] "None policy: Start" Apr 30 03:47:05.988139 kubelet[2360]: I0430 03:47:05.988133 2360 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 30 03:47:05.988178 kubelet[2360]: I0430 03:47:05.988173 2360 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:47:05.993404 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 03:47:06.013884 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 03:47:06.017549 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 30 03:47:06.028546 kubelet[2360]: I0430 03:47:06.028274 2360 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:47:06.028719 kubelet[2360]: I0430 03:47:06.028703 2360 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 03:47:06.028778 kubelet[2360]: I0430 03:47:06.028717 2360 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:47:06.029103 kubelet[2360]: I0430 03:47:06.028917 2360 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:47:06.030479 kubelet[2360]: E0430 03:47:06.030345 2360 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 30 03:47:06.030776 kubelet[2360]: E0430 03:47:06.030757 2360 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-b-745f04f342\" not found" Apr 30 03:47:06.089162 systemd[1]: Created slice kubepods-burstable-podbeadcbfe98d763a38c7be325fa9bca59.slice - libcontainer container kubepods-burstable-podbeadcbfe98d763a38c7be325fa9bca59.slice. 
Apr 30 03:47:06.109073 kubelet[2360]: E0430 03:47:06.109052 2360 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-b-745f04f342\" not found" node="ci-4081-3-3-b-745f04f342" Apr 30 03:47:06.112865 systemd[1]: Created slice kubepods-burstable-pod74621422a0db89f7c5052da44cce62cf.slice - libcontainer container kubepods-burstable-pod74621422a0db89f7c5052da44cce62cf.slice. Apr 30 03:47:06.121633 kubelet[2360]: E0430 03:47:06.121491 2360 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-b-745f04f342\" not found" node="ci-4081-3-3-b-745f04f342" Apr 30 03:47:06.126235 systemd[1]: Created slice kubepods-burstable-pod064849afdade165c4c94a943f6accc0b.slice - libcontainer container kubepods-burstable-pod064849afdade165c4c94a943f6accc0b.slice. Apr 30 03:47:06.128456 kubelet[2360]: E0430 03:47:06.128418 2360 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-b-745f04f342\" not found" node="ci-4081-3-3-b-745f04f342" Apr 30 03:47:06.132484 kubelet[2360]: I0430 03:47:06.132005 2360 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-3-b-745f04f342" Apr 30 03:47:06.132785 kubelet[2360]: E0430 03:47:06.132724 2360 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://157.180.64.98:6443/api/v1/nodes\": dial tcp 157.180.64.98:6443: connect: connection refused" node="ci-4081-3-3-b-745f04f342" Apr 30 03:47:06.157912 kubelet[2360]: E0430 03:47:06.157845 2360 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.64.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-b-745f04f342?timeout=10s\": dial tcp 157.180.64.98:6443: connect: connection refused" interval="400ms" Apr 30 03:47:06.251576 kubelet[2360]: I0430 03:47:06.251498 2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/beadcbfe98d763a38c7be325fa9bca59-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-b-745f04f342\" (UID: \"beadcbfe98d763a38c7be325fa9bca59\") " pod="kube-system/kube-apiserver-ci-4081-3-3-b-745f04f342" Apr 30 03:47:06.251757 kubelet[2360]: I0430 03:47:06.251591 2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/beadcbfe98d763a38c7be325fa9bca59-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-b-745f04f342\" (UID: \"beadcbfe98d763a38c7be325fa9bca59\") " pod="kube-system/kube-apiserver-ci-4081-3-3-b-745f04f342" Apr 30 03:47:06.251757 kubelet[2360]: I0430 03:47:06.251635 2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74621422a0db89f7c5052da44cce62cf-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-b-745f04f342\" (UID: \"74621422a0db89f7c5052da44cce62cf\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-b-745f04f342" Apr 30 03:47:06.251757 kubelet[2360]: I0430 03:47:06.251706 2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74621422a0db89f7c5052da44cce62cf-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-b-745f04f342\" (UID: \"74621422a0db89f7c5052da44cce62cf\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-3-b-745f04f342" Apr 30 03:47:06.251757 kubelet[2360]: I0430 03:47:06.251740 2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/064849afdade165c4c94a943f6accc0b-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-b-745f04f342\" (UID: \"064849afdade165c4c94a943f6accc0b\") " pod="kube-system/kube-scheduler-ci-4081-3-3-b-745f04f342" Apr 30 03:47:06.251858 kubelet[2360]: I0430 03:47:06.251780 2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/beadcbfe98d763a38c7be325fa9bca59-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-b-745f04f342\" (UID: \"beadcbfe98d763a38c7be325fa9bca59\") " pod="kube-system/kube-apiserver-ci-4081-3-3-b-745f04f342" Apr 30 03:47:06.251858 kubelet[2360]: I0430 03:47:06.251806 2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74621422a0db89f7c5052da44cce62cf-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-b-745f04f342\" (UID: \"74621422a0db89f7c5052da44cce62cf\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-b-745f04f342" Apr 30 03:47:06.251858 kubelet[2360]: I0430 03:47:06.251834 2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/74621422a0db89f7c5052da44cce62cf-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-b-745f04f342\" (UID: \"74621422a0db89f7c5052da44cce62cf\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-b-745f04f342" Apr 30 03:47:06.251918 kubelet[2360]: I0430 03:47:06.251863 2360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/74621422a0db89f7c5052da44cce62cf-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-b-745f04f342\" (UID: \"74621422a0db89f7c5052da44cce62cf\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-b-745f04f342" Apr 30 03:47:06.336417 kubelet[2360]: I0430 03:47:06.336320 2360 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-3-b-745f04f342" Apr 30 03:47:06.336876 kubelet[2360]: E0430 03:47:06.336832 2360 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://157.180.64.98:6443/api/v1/nodes\": dial tcp 157.180.64.98:6443: connect: connection refused" node="ci-4081-3-3-b-745f04f342" Apr 30 03:47:06.411495 containerd[1500]: time="2025-04-30T03:47:06.411328523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-b-745f04f342,Uid:beadcbfe98d763a38c7be325fa9bca59,Namespace:kube-system,Attempt:0,}" Apr 30 03:47:06.431756 containerd[1500]: time="2025-04-30T03:47:06.431652034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-b-745f04f342,Uid:064849afdade165c4c94a943f6accc0b,Namespace:kube-system,Attempt:0,}" Apr 30 03:47:06.432143 containerd[1500]: time="2025-04-30T03:47:06.431653807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-b-745f04f342,Uid:74621422a0db89f7c5052da44cce62cf,Namespace:kube-system,Attempt:0,}" Apr 30 03:47:06.559160 kubelet[2360]: E0430 03:47:06.559078 2360 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://157.180.64.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-b-745f04f342?timeout=10s\": dial tcp 157.180.64.98:6443: connect: connection refused" interval="800ms" Apr 30 03:47:06.740611 kubelet[2360]: I0430 03:47:06.740488 2360 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-3-b-745f04f342" Apr 30 03:47:06.741729 kubelet[2360]: E0430 03:47:06.740998 2360 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://157.180.64.98:6443/api/v1/nodes\": dial tcp 157.180.64.98:6443: connect: connection refused" node="ci-4081-3-3-b-745f04f342" Apr 30 03:47:06.912744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3540846559.mount: Deactivated successfully. Apr 30 03:47:06.920033 containerd[1500]: time="2025-04-30T03:47:06.919907135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:47:06.924198 containerd[1500]: time="2025-04-30T03:47:06.924114351Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Apr 30 03:47:06.925637 containerd[1500]: time="2025-04-30T03:47:06.925587080Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:47:06.927458 containerd[1500]: time="2025-04-30T03:47:06.927380353Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:47:06.929605 containerd[1500]: time="2025-04-30T03:47:06.929421347Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:47:06.929605 containerd[1500]: time="2025-04-30T03:47:06.929571479Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:47:06.930712 containerd[1500]: time="2025-04-30T03:47:06.930689754Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:47:06.938431 containerd[1500]: time="2025-04-30T03:47:06.937242606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:47:06.938431 containerd[1500]: time="2025-04-30T03:47:06.938034069Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 526.555946ms" Apr 30 03:47:06.940998 containerd[1500]: time="2025-04-30T03:47:06.940197975Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 
508.179063ms" Apr 30 03:47:06.942547 containerd[1500]: time="2025-04-30T03:47:06.942273125Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 510.395317ms" Apr 30 03:47:07.031646 kubelet[2360]: W0430 03:47:07.028470 2360 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://157.180.64.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 157.180.64.98:6443: connect: connection refused Apr 30 03:47:07.031646 kubelet[2360]: E0430 03:47:07.028560 2360 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://157.180.64.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 157.180.64.98:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:47:07.098250 kubelet[2360]: W0430 03:47:07.097005 2360 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://157.180.64.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-b-745f04f342&limit=500&resourceVersion=0": dial tcp 157.180.64.98:6443: connect: connection refused Apr 30 03:47:07.098250 kubelet[2360]: E0430 03:47:07.097109 2360 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://157.180.64.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-b-745f04f342&limit=500&resourceVersion=0\": dial tcp 157.180.64.98:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:47:07.111166 containerd[1500]: time="2025-04-30T03:47:07.111036081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:47:07.111472 containerd[1500]: time="2025-04-30T03:47:07.111178347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:47:07.111472 containerd[1500]: time="2025-04-30T03:47:07.111213754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:07.111472 containerd[1500]: time="2025-04-30T03:47:07.111315565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:07.112059 containerd[1500]: time="2025-04-30T03:47:07.111914727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:47:07.112059 containerd[1500]: time="2025-04-30T03:47:07.111979219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:47:07.112059 containerd[1500]: time="2025-04-30T03:47:07.112002993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:07.112890 containerd[1500]: time="2025-04-30T03:47:07.112663350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:07.113916 containerd[1500]: time="2025-04-30T03:47:07.113854854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:47:07.114056 containerd[1500]: time="2025-04-30T03:47:07.114015255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:47:07.114161 containerd[1500]: time="2025-04-30T03:47:07.114125842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:07.115723 containerd[1500]: time="2025-04-30T03:47:07.115369894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:07.144904 systemd[1]: Started cri-containerd-93a3255949a8c8e4494bfc6a9258af9385156f475e77330fe09233388a5c70f2.scope - libcontainer container 93a3255949a8c8e4494bfc6a9258af9385156f475e77330fe09233388a5c70f2. Apr 30 03:47:07.151234 systemd[1]: Started cri-containerd-2110d247bf1884384cf120ff6c4cf6d197683101621abc7cfb0d9d73d2f95511.scope - libcontainer container 2110d247bf1884384cf120ff6c4cf6d197683101621abc7cfb0d9d73d2f95511. Apr 30 03:47:07.152968 systemd[1]: Started cri-containerd-ec0f94c9b3cfcff5340cf56753ad0a9c33aee75ad4c690bebcd764cb49438f72.scope - libcontainer container ec0f94c9b3cfcff5340cf56753ad0a9c33aee75ad4c690bebcd764cb49438f72. Apr 30 03:47:07.162181 kubelet[2360]: W0430 03:47:07.161971 2360 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://157.180.64.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.180.64.98:6443: connect: connection refused Apr 30 03:47:07.162181 kubelet[2360]: E0430 03:47:07.162052 2360 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://157.180.64.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 157.180.64.98:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:47:07.207851 containerd[1500]: time="2025-04-30T03:47:07.206910155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-b-745f04f342,Uid:064849afdade165c4c94a943f6accc0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"93a3255949a8c8e4494bfc6a9258af9385156f475e77330fe09233388a5c70f2\"" Apr 30 03:47:07.213567 containerd[1500]: time="2025-04-30T03:47:07.213398185Z" level=info msg="CreateContainer within sandbox \"93a3255949a8c8e4494bfc6a9258af9385156f475e77330fe09233388a5c70f2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 03:47:07.238151 containerd[1500]: time="2025-04-30T03:47:07.238107177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-b-745f04f342,Uid:74621422a0db89f7c5052da44cce62cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec0f94c9b3cfcff5340cf56753ad0a9c33aee75ad4c690bebcd764cb49438f72\"" Apr 30 03:47:07.242448 containerd[1500]: time="2025-04-30T03:47:07.242282452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-b-745f04f342,Uid:beadcbfe98d763a38c7be325fa9bca59,Namespace:kube-system,Attempt:0,} returns sandbox id \"2110d247bf1884384cf120ff6c4cf6d197683101621abc7cfb0d9d73d2f95511\"" Apr 30 
03:47:07.244517 containerd[1500]: time="2025-04-30T03:47:07.244477486Z" level=info msg="CreateContainer within sandbox \"ec0f94c9b3cfcff5340cf56753ad0a9c33aee75ad4c690bebcd764cb49438f72\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 03:47:07.245972 containerd[1500]: time="2025-04-30T03:47:07.245858584Z" level=info msg="CreateContainer within sandbox \"2110d247bf1884384cf120ff6c4cf6d197683101621abc7cfb0d9d73d2f95511\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 03:47:07.254737 containerd[1500]: time="2025-04-30T03:47:07.254693203Z" level=info msg="CreateContainer within sandbox \"93a3255949a8c8e4494bfc6a9258af9385156f475e77330fe09233388a5c70f2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"07f469e936c31e45d3aeb6a09d935069fcd6b593af4054744cf5844d26ef9026\"" Apr 30 03:47:07.256901 containerd[1500]: time="2025-04-30T03:47:07.255551080Z" level=info msg="StartContainer for \"07f469e936c31e45d3aeb6a09d935069fcd6b593af4054744cf5844d26ef9026\"" Apr 30 03:47:07.264980 containerd[1500]: time="2025-04-30T03:47:07.264938686Z" level=info msg="CreateContainer within sandbox \"ec0f94c9b3cfcff5340cf56753ad0a9c33aee75ad4c690bebcd764cb49438f72\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6881c3738463103d8e093fcfcb5cf0c9317e5a1ddaaaf314aae89b116814efaf\"" Apr 30 03:47:07.265639 containerd[1500]: time="2025-04-30T03:47:07.265603212Z" level=info msg="StartContainer for \"6881c3738463103d8e093fcfcb5cf0c9317e5a1ddaaaf314aae89b116814efaf\"" Apr 30 03:47:07.269136 containerd[1500]: time="2025-04-30T03:47:07.269091710Z" level=info msg="CreateContainer within sandbox \"2110d247bf1884384cf120ff6c4cf6d197683101621abc7cfb0d9d73d2f95511\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"125913fe30bbb440fa39f6349b010497d8f59f76a88b737bb309e5644317ce0f\"" Apr 30 03:47:07.269842 containerd[1500]: time="2025-04-30T03:47:07.269823101Z" level=info msg="StartContainer for \"125913fe30bbb440fa39f6349b010497d8f59f76a88b737bb309e5644317ce0f\"" Apr 30 03:47:07.292283 systemd[1]: Started cri-containerd-07f469e936c31e45d3aeb6a09d935069fcd6b593af4054744cf5844d26ef9026.scope - libcontainer container 07f469e936c31e45d3aeb6a09d935069fcd6b593af4054744cf5844d26ef9026. Apr 30 03:47:07.302860 systemd[1]: Started cri-containerd-125913fe30bbb440fa39f6349b010497d8f59f76a88b737bb309e5644317ce0f.scope - libcontainer container 125913fe30bbb440fa39f6349b010497d8f59f76a88b737bb309e5644317ce0f. Apr 30 03:47:07.308043 systemd[1]: Started cri-containerd-6881c3738463103d8e093fcfcb5cf0c9317e5a1ddaaaf314aae89b116814efaf.scope - libcontainer container 6881c3738463103d8e093fcfcb5cf0c9317e5a1ddaaaf314aae89b116814efaf. 
Apr 30 03:47:07.341961 containerd[1500]: time="2025-04-30T03:47:07.341912977Z" level=info msg="StartContainer for \"07f469e936c31e45d3aeb6a09d935069fcd6b593af4054744cf5844d26ef9026\" returns successfully" Apr 30 03:47:07.360768 kubelet[2360]: E0430 03:47:07.360687 2360 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.64.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-b-745f04f342?timeout=10s\": dial tcp 157.180.64.98:6443: connect: connection refused" interval="1.6s" Apr 30 03:47:07.384347 containerd[1500]: time="2025-04-30T03:47:07.384307896Z" level=info msg="StartContainer for \"6881c3738463103d8e093fcfcb5cf0c9317e5a1ddaaaf314aae89b116814efaf\" returns successfully" Apr 30 03:47:07.390693 containerd[1500]: time="2025-04-30T03:47:07.388966768Z" level=info msg="StartContainer for \"125913fe30bbb440fa39f6349b010497d8f59f76a88b737bb309e5644317ce0f\" returns successfully" Apr 30 03:47:07.449425 kubelet[2360]: W0430 03:47:07.449333 2360 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://157.180.64.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.180.64.98:6443: connect: connection refused Apr 30 03:47:07.449635 kubelet[2360]: E0430 03:47:07.449438 2360 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://157.180.64.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 157.180.64.98:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:47:07.544570 kubelet[2360]: I0430 03:47:07.544466 2360 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-3-b-745f04f342" Apr 30 03:47:07.545270 kubelet[2360]: E0430 03:47:07.545245 2360 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://157.180.64.98:6443/api/v1/nodes\": dial tcp 157.180.64.98:6443: connect: connection refused" node="ci-4081-3-3-b-745f04f342" Apr 30 03:47:08.021702 kubelet[2360]: E0430 03:47:08.021662 2360 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-b-745f04f342\" not found" node="ci-4081-3-3-b-745f04f342" Apr 30 03:47:08.024134 kubelet[2360]: E0430 03:47:08.024118 2360 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-b-745f04f342\" not found" node="ci-4081-3-3-b-745f04f342" Apr 30 03:47:08.026495 kubelet[2360]: E0430 03:47:08.026481 2360 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-b-745f04f342\" not found" node="ci-4081-3-3-b-745f04f342" Apr 30 03:47:09.030388 kubelet[2360]: E0430 03:47:09.029028 2360 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-b-745f04f342\" not found" node="ci-4081-3-3-b-745f04f342" Apr 30 03:47:09.030388 kubelet[2360]: E0430 03:47:09.029233 2360 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-b-745f04f342\" not found" node="ci-4081-3-3-b-745f04f342" Apr 30 03:47:09.149882 kubelet[2360]: I0430 03:47:09.149534 2360 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-3-b-745f04f342" Apr 30 03:47:09.302018 kubelet[2360]: E0430 03:47:09.301531 2360 
nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-3-b-745f04f342\" not found" node="ci-4081-3-3-b-745f04f342" Apr 30 03:47:09.340905 kubelet[2360]: I0430 03:47:09.340715 2360 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081-3-3-b-745f04f342" Apr 30 03:47:09.340905 kubelet[2360]: E0430 03:47:09.340750 2360 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4081-3-3-b-745f04f342\": node \"ci-4081-3-3-b-745f04f342\" not found" Apr 30 03:47:09.357196 kubelet[2360]: E0430 03:47:09.357155 2360 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-3-b-745f04f342\" not found" Apr 30 03:47:09.456480 kubelet[2360]: I0430 03:47:09.456363 2360 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-3-b-745f04f342" Apr 30 03:47:09.463762 kubelet[2360]: E0430 03:47:09.463660 2360 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-3-b-745f04f342\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-3-b-745f04f342" Apr 30 03:47:09.463762 kubelet[2360]: I0430 03:47:09.463755 2360 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-3-b-745f04f342" Apr 30 03:47:09.465898 kubelet[2360]: E0430 03:47:09.465862 2360 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-3-b-745f04f342\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-3-b-745f04f342" Apr 30 03:47:09.465898 kubelet[2360]: I0430 03:47:09.465888 2360 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-3-b-745f04f342" Apr 30 03:47:09.467196 kubelet[2360]: E0430 03:47:09.467150 2360 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-3-b-745f04f342\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-3-b-745f04f342" Apr 30 03:47:09.502090 kubelet[2360]: I0430 03:47:09.502050 2360 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-3-b-745f04f342" Apr 30 03:47:09.504097 kubelet[2360]: E0430 03:47:09.504060 2360 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-3-b-745f04f342\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-3-b-745f04f342" Apr 30 03:47:09.926901 kubelet[2360]: I0430 03:47:09.926807 2360 apiserver.go:52] "Watching apiserver" Apr 30 03:47:09.951121 kubelet[2360]: I0430 03:47:09.951025 2360 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:47:11.561476 systemd[1]: Reloading requested from client PID 2632 ('systemctl') (unit session-7.scope)... Apr 30 03:47:11.561501 systemd[1]: Reloading... Apr 30 03:47:11.710704 zram_generator::config[2672]: No configuration found. Apr 30 03:47:11.861290 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:47:11.957830 systemd[1]: Reloading finished in 395 ms. 
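The "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found" errors above are transient: static pods request the built-in system-node-critical class, which the apiserver normally creates for itself shortly after startup, so the kubelet's first mirror-pod attempts race against that bootstrap. A hedged client-go sketch of what creating such a class looks like at the API level; the kubeconfig path and the value 2000001000 (the conventional value of this built-in class) are assumptions for illustration, and in a healthy cluster this object appears on its own:

package main

import (
	"context"
	"log"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	pc := &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "system-node-critical"},
		Value:      2000001000, // conventional value of the built-in class
	}
	if _, err := cs.SchedulingV1().PriorityClasses().Create(
		context.Background(), pc, metav1.CreateOptions{}); err != nil {
		log.Fatal(err) // expect "already exists" once apiserver bootstrap has run
	}
}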
Apr 30 03:47:11.994317 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:47:12.007313 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 03:47:12.007563 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:47:12.015146 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:47:12.152964 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:47:12.159809 (kubelet)[2723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:47:12.231081 kubelet[2723]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:47:12.231081 kubelet[2723]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 30 03:47:12.231081 kubelet[2723]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:47:12.231552 kubelet[2723]: I0430 03:47:12.231131 2723 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:47:12.238330 kubelet[2723]: I0430 03:47:12.238286 2723 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Apr 30 03:47:12.238330 kubelet[2723]: I0430 03:47:12.238315 2723 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:47:12.238617 kubelet[2723]: I0430 03:47:12.238592 2723 server.go:954] "Client rotation is on, will bootstrap in background" Apr 30 03:47:12.241212 kubelet[2723]: I0430 03:47:12.241197 2723 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 03:47:12.247464 kubelet[2723]: I0430 03:47:12.246128 2723 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:47:12.253409 kubelet[2723]: E0430 03:47:12.253275 2723 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 03:47:12.253409 kubelet[2723]: I0430 03:47:12.253310 2723 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 03:47:12.256974 kubelet[2723]: I0430 03:47:12.256727 2723 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 03:47:12.257943 kubelet[2723]: I0430 03:47:12.257918 2723 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:47:12.258150 kubelet[2723]: I0430 03:47:12.257995 2723 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-b-745f04f342","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 03:47:12.258259 kubelet[2723]: I0430 03:47:12.258252 2723 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 03:47:12.258299 kubelet[2723]: I0430 03:47:12.258295 2723 container_manager_linux.go:304] "Creating device plugin manager" Apr 30 03:47:12.261207 kubelet[2723]: I0430 03:47:12.261198 2723 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:47:12.261425 kubelet[2723]: I0430 03:47:12.261414 2723 kubelet.go:446] "Attempting to sync node with API server" Apr 30 03:47:12.261567 kubelet[2723]: I0430 03:47:12.261487 2723 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:47:12.261567 kubelet[2723]: I0430 03:47:12.261512 2723 kubelet.go:352] "Adding apiserver pod source" Apr 30 03:47:12.261567 kubelet[2723]: I0430 03:47:12.261522 2723 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:47:12.264691 kubelet[2723]: I0430 03:47:12.263753 2723 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:47:12.268829 kubelet[2723]: I0430 03:47:12.268396 2723 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:47:12.274818 kubelet[2723]: I0430 03:47:12.273797 2723 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 30 03:47:12.274818 kubelet[2723]: I0430 03:47:12.273833 2723 server.go:1287] "Started kubelet" Apr 30 03:47:12.277914 kubelet[2723]: I0430 03:47:12.277891 2723 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:47:12.287698 kubelet[2723]: I0430 03:47:12.286209 2723 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:47:12.287698 kubelet[2723]: I0430 03:47:12.287574 2723 server.go:490] "Adding debug handlers to kubelet server" Apr 30 03:47:12.289074 kubelet[2723]: I0430 03:47:12.288803 2723 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:47:12.289074 kubelet[2723]: I0430 03:47:12.289020 2723 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:47:12.289242 kubelet[2723]: I0430 03:47:12.289213 2723 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 03:47:12.291319 kubelet[2723]: I0430 03:47:12.291296 2723 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 30 03:47:12.291689 kubelet[2723]: E0430 03:47:12.291554 2723 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-3-b-745f04f342\" not found" Apr 30 03:47:12.297915 kubelet[2723]: I0430 03:47:12.292661 2723 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:47:12.297915 kubelet[2723]: I0430 03:47:12.293458 2723 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:47:12.297915 kubelet[2723]: I0430 03:47:12.295045 2723 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:47:12.299279 kubelet[2723]: I0430 03:47:12.295309 2723 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:47:12.299279 kubelet[2723]: I0430 03:47:12.298070 2723 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:47:12.299279 kubelet[2723]: E0430 03:47:12.296920 2723 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:47:12.301838 kubelet[2723]: I0430 03:47:12.301803 2723 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 03:47:12.301838 kubelet[2723]: I0430 03:47:12.301838 2723 status_manager.go:227] "Starting to sync pod status with apiserver" Apr 30 03:47:12.301966 kubelet[2723]: I0430 03:47:12.301863 2723 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 30 03:47:12.301966 kubelet[2723]: I0430 03:47:12.301871 2723 kubelet.go:2388] "Starting kubelet main sync loop" Apr 30 03:47:12.301966 kubelet[2723]: E0430 03:47:12.301921 2723 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:47:12.316699 kubelet[2723]: I0430 03:47:12.314629 2723 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:47:12.366286 kubelet[2723]: I0430 03:47:12.366256 2723 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 30 03:47:12.366286 kubelet[2723]: I0430 03:47:12.366275 2723 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 30 03:47:12.366286 kubelet[2723]: I0430 03:47:12.366293 2723 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:47:12.366539 kubelet[2723]: I0430 03:47:12.366495 2723 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 03:47:12.366539 kubelet[2723]: I0430 03:47:12.366510 2723 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 03:47:12.366539 kubelet[2723]: I0430 03:47:12.366532 2723 policy_none.go:49] "None policy: Start" Apr 30 03:47:12.366693 kubelet[2723]: I0430 03:47:12.366542 2723 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 30 03:47:12.366693 kubelet[2723]: I0430 03:47:12.366554 2723 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:47:12.366693 kubelet[2723]: I0430 03:47:12.366693 2723 state_mem.go:75] "Updated machine memory state" Apr 30 03:47:12.370578 kubelet[2723]: I0430 03:47:12.370554 2723 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:47:12.370908 kubelet[2723]: I0430 03:47:12.370773 2723 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 03:47:12.370908 kubelet[2723]: I0430 03:47:12.370787 2723 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:47:12.371218 kubelet[2723]: I0430 03:47:12.371197 2723 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:47:12.372694 kubelet[2723]: E0430 03:47:12.372661 2723 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 30 03:47:12.403862 kubelet[2723]: I0430 03:47:12.403349 2723 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-3-b-745f04f342" Apr 30 03:47:12.404842 kubelet[2723]: I0430 03:47:12.404829 2723 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-3-b-745f04f342" Apr 30 03:47:12.405186 kubelet[2723]: I0430 03:47:12.405048 2723 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-3-b-745f04f342" Apr 30 03:47:12.479247 kubelet[2723]: I0430 03:47:12.479193 2723 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-3-b-745f04f342" Apr 30 03:47:12.487614 kubelet[2723]: I0430 03:47:12.487088 2723 kubelet_node_status.go:125] "Node was previously registered" node="ci-4081-3-3-b-745f04f342" Apr 30 03:47:12.487614 kubelet[2723]: I0430 03:47:12.487193 2723 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081-3-3-b-745f04f342" Apr 30 03:47:12.499736 kubelet[2723]: I0430 03:47:12.499691 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/beadcbfe98d763a38c7be325fa9bca59-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-b-745f04f342\" (UID: \"beadcbfe98d763a38c7be325fa9bca59\") " pod="kube-system/kube-apiserver-ci-4081-3-3-b-745f04f342" Apr 30 03:47:12.499736 kubelet[2723]: I0430 03:47:12.499730 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/beadcbfe98d763a38c7be325fa9bca59-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-b-745f04f342\" (UID: \"beadcbfe98d763a38c7be325fa9bca59\") " pod="kube-system/kube-apiserver-ci-4081-3-3-b-745f04f342" Apr 30 03:47:12.499902 kubelet[2723]: I0430 03:47:12.499756 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74621422a0db89f7c5052da44cce62cf-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-b-745f04f342\" (UID: \"74621422a0db89f7c5052da44cce62cf\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-b-745f04f342" Apr 30 03:47:12.499902 kubelet[2723]: I0430 03:47:12.499775 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/74621422a0db89f7c5052da44cce62cf-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-b-745f04f342\" (UID: \"74621422a0db89f7c5052da44cce62cf\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-b-745f04f342" Apr 30 03:47:12.499902 kubelet[2723]: I0430 03:47:12.499795 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74621422a0db89f7c5052da44cce62cf-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-b-745f04f342\" (UID: \"74621422a0db89f7c5052da44cce62cf\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-b-745f04f342" Apr 30 03:47:12.499902 kubelet[2723]: I0430 03:47:12.499813 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/beadcbfe98d763a38c7be325fa9bca59-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-b-745f04f342\" (UID: 
\"beadcbfe98d763a38c7be325fa9bca59\") " pod="kube-system/kube-apiserver-ci-4081-3-3-b-745f04f342" Apr 30 03:47:12.499902 kubelet[2723]: I0430 03:47:12.499833 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/74621422a0db89f7c5052da44cce62cf-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-b-745f04f342\" (UID: \"74621422a0db89f7c5052da44cce62cf\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-b-745f04f342" Apr 30 03:47:12.500269 kubelet[2723]: I0430 03:47:12.499852 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74621422a0db89f7c5052da44cce62cf-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-b-745f04f342\" (UID: \"74621422a0db89f7c5052da44cce62cf\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-b-745f04f342" Apr 30 03:47:12.500269 kubelet[2723]: I0430 03:47:12.499874 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/064849afdade165c4c94a943f6accc0b-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-b-745f04f342\" (UID: \"064849afdade165c4c94a943f6accc0b\") " pod="kube-system/kube-scheduler-ci-4081-3-3-b-745f04f342" Apr 30 03:47:13.274102 kubelet[2723]: I0430 03:47:13.273805 2723 apiserver.go:52] "Watching apiserver" Apr 30 03:47:13.299200 kubelet[2723]: I0430 03:47:13.299108 2723 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:47:13.346707 kubelet[2723]: I0430 03:47:13.344401 2723 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-3-b-745f04f342" Apr 30 03:47:13.358681 kubelet[2723]: E0430 03:47:13.358619 2723 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-3-b-745f04f342\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-3-b-745f04f342" Apr 30 03:47:13.432526 kubelet[2723]: I0430 03:47:13.432417 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-b-745f04f342" podStartSLOduration=1.432344911 podStartE2EDuration="1.432344911s" podCreationTimestamp="2025-04-30 03:47:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:47:13.416400975 +0000 UTC m=+1.250637572" watchObservedRunningTime="2025-04-30 03:47:13.432344911 +0000 UTC m=+1.266581518" Apr 30 03:47:13.451114 kubelet[2723]: I0430 03:47:13.451044 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-b-745f04f342" podStartSLOduration=1.451024282 podStartE2EDuration="1.451024282s" podCreationTimestamp="2025-04-30 03:47:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:47:13.433488685 +0000 UTC m=+1.267725292" watchObservedRunningTime="2025-04-30 03:47:13.451024282 +0000 UTC m=+1.285260880" Apr 30 03:47:13.481301 kubelet[2723]: I0430 03:47:13.481147 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-b-745f04f342" podStartSLOduration=1.481126466 podStartE2EDuration="1.481126466s" podCreationTimestamp="2025-04-30 03:47:12 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:47:13.451557883 +0000 UTC m=+1.285794500" watchObservedRunningTime="2025-04-30 03:47:13.481126466 +0000 UTC m=+1.315363074" Apr 30 03:47:16.366568 kubelet[2723]: I0430 03:47:16.366488 2723 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 03:47:16.368219 kubelet[2723]: I0430 03:47:16.366985 2723 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 03:47:16.368281 containerd[1500]: time="2025-04-30T03:47:16.366840956Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 03:47:17.288045 systemd[1]: Created slice kubepods-besteffort-pod09e11abe_cb69_4e93_9f36_72481da17fe6.slice - libcontainer container kubepods-besteffort-pod09e11abe_cb69_4e93_9f36_72481da17fe6.slice. Apr 30 03:47:17.429349 kubelet[2723]: I0430 03:47:17.429305 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/09e11abe-cb69-4e93-9f36-72481da17fe6-kube-proxy\") pod \"kube-proxy-nrc98\" (UID: \"09e11abe-cb69-4e93-9f36-72481da17fe6\") " pod="kube-system/kube-proxy-nrc98" Apr 30 03:47:17.429349 kubelet[2723]: I0430 03:47:17.429358 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7g5k\" (UniqueName: \"kubernetes.io/projected/09e11abe-cb69-4e93-9f36-72481da17fe6-kube-api-access-n7g5k\") pod \"kube-proxy-nrc98\" (UID: \"09e11abe-cb69-4e93-9f36-72481da17fe6\") " pod="kube-system/kube-proxy-nrc98" Apr 30 03:47:17.429899 kubelet[2723]: I0430 03:47:17.429401 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09e11abe-cb69-4e93-9f36-72481da17fe6-xtables-lock\") pod \"kube-proxy-nrc98\" (UID: \"09e11abe-cb69-4e93-9f36-72481da17fe6\") " pod="kube-system/kube-proxy-nrc98" Apr 30 03:47:17.429899 kubelet[2723]: I0430 03:47:17.429423 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09e11abe-cb69-4e93-9f36-72481da17fe6-lib-modules\") pod \"kube-proxy-nrc98\" (UID: \"09e11abe-cb69-4e93-9f36-72481da17fe6\") " pod="kube-system/kube-proxy-nrc98" Apr 30 03:47:17.457707 kubelet[2723]: W0430 03:47:17.454262 2723 reflector.go:569] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4081-3-3-b-745f04f342" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081-3-3-b-745f04f342' and this object Apr 30 03:47:17.457707 kubelet[2723]: E0430 03:47:17.454314 2723 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:ci-4081-3-3-b-745f04f342\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4081-3-3-b-745f04f342' and this object" logger="UnhandledError" Apr 30 03:47:17.457707 kubelet[2723]: W0430 03:47:17.454420 2723 reflector.go:569] 
object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-3-b-745f04f342" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081-3-3-b-745f04f342' and this object Apr 30 03:47:17.457707 kubelet[2723]: E0430 03:47:17.454435 2723 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081-3-3-b-745f04f342\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4081-3-3-b-745f04f342' and this object" logger="UnhandledError" Apr 30 03:47:17.455338 systemd[1]: Created slice kubepods-besteffort-pod6163aa22_b72a_46b0_9f94_233938dec868.slice - libcontainer container kubepods-besteffort-pod6163aa22_b72a_46b0_9f94_233938dec868.slice. Apr 30 03:47:17.458015 kubelet[2723]: I0430 03:47:17.454500 2723 status_manager.go:890] "Failed to get status for pod" podUID="6163aa22-b72a-46b0-9f94-233938dec868" pod="tigera-operator/tigera-operator-789496d6f5-6tddm" err="pods \"tigera-operator-789496d6f5-6tddm\" is forbidden: User \"system:node:ci-4081-3-3-b-745f04f342\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4081-3-3-b-745f04f342' and this object" Apr 30 03:47:17.605135 containerd[1500]: time="2025-04-30T03:47:17.604961982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nrc98,Uid:09e11abe-cb69-4e93-9f36-72481da17fe6,Namespace:kube-system,Attempt:0,}" Apr 30 03:47:17.633734 kubelet[2723]: I0430 03:47:17.630773 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8cpx\" (UniqueName: \"kubernetes.io/projected/6163aa22-b72a-46b0-9f94-233938dec868-kube-api-access-m8cpx\") pod \"tigera-operator-789496d6f5-6tddm\" (UID: \"6163aa22-b72a-46b0-9f94-233938dec868\") " pod="tigera-operator/tigera-operator-789496d6f5-6tddm" Apr 30 03:47:17.633734 kubelet[2723]: I0430 03:47:17.630844 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6163aa22-b72a-46b0-9f94-233938dec868-var-lib-calico\") pod \"tigera-operator-789496d6f5-6tddm\" (UID: \"6163aa22-b72a-46b0-9f94-233938dec868\") " pod="tigera-operator/tigera-operator-789496d6f5-6tddm" Apr 30 03:47:17.654031 containerd[1500]: time="2025-04-30T03:47:17.653865861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:47:17.654466 containerd[1500]: time="2025-04-30T03:47:17.654378441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:47:17.655184 containerd[1500]: time="2025-04-30T03:47:17.655032046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:17.655500 containerd[1500]: time="2025-04-30T03:47:17.655415173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:17.700219 systemd[1]: Started cri-containerd-ffa0c54560dce561d7ad9fac8f68a14cfbe7d721e326f07ce783f0b3f5fc186a.scope - libcontainer container ffa0c54560dce561d7ad9fac8f68a14cfbe7d721e326f07ce783f0b3f5fc186a. Apr 30 03:47:17.760593 containerd[1500]: time="2025-04-30T03:47:17.760527783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nrc98,Uid:09e11abe-cb69-4e93-9f36-72481da17fe6,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffa0c54560dce561d7ad9fac8f68a14cfbe7d721e326f07ce783f0b3f5fc186a\"" Apr 30 03:47:17.769126 containerd[1500]: time="2025-04-30T03:47:17.769065737Z" level=info msg="CreateContainer within sandbox \"ffa0c54560dce561d7ad9fac8f68a14cfbe7d721e326f07ce783f0b3f5fc186a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 03:47:17.793388 containerd[1500]: time="2025-04-30T03:47:17.793333806Z" level=info msg="CreateContainer within sandbox \"ffa0c54560dce561d7ad9fac8f68a14cfbe7d721e326f07ce783f0b3f5fc186a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"395c7eaebf7ec8da6a28257a66f7d2358cdf038832e760a9ef1ca39c5fd54666\"" Apr 30 03:47:17.794879 containerd[1500]: time="2025-04-30T03:47:17.794856209Z" level=info msg="StartContainer for \"395c7eaebf7ec8da6a28257a66f7d2358cdf038832e760a9ef1ca39c5fd54666\"" Apr 30 03:47:17.825846 systemd[1]: Started cri-containerd-395c7eaebf7ec8da6a28257a66f7d2358cdf038832e760a9ef1ca39c5fd54666.scope - libcontainer container 395c7eaebf7ec8da6a28257a66f7d2358cdf038832e760a9ef1ca39c5fd54666. Apr 30 03:47:17.864609 containerd[1500]: time="2025-04-30T03:47:17.864427728Z" level=info msg="StartContainer for \"395c7eaebf7ec8da6a28257a66f7d2358cdf038832e760a9ef1ca39c5fd54666\" returns successfully" Apr 30 03:47:17.912131 sudo[1872]: pam_unix(sudo:session): session closed for user root Apr 30 03:47:18.071474 sshd[1866]: pam_unix(sshd:session): session closed for user core Apr 30 03:47:18.078453 systemd-logind[1485]: Session 7 logged out. Waiting for processes to exit. Apr 30 03:47:18.080296 systemd[1]: sshd@6-157.180.64.98:22-139.178.68.195:56012.service: Deactivated successfully. Apr 30 03:47:18.084633 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 03:47:18.085316 systemd[1]: session-7.scope: Consumed 4.772s CPU time, 141.0M memory peak, 0B memory swap peak. Apr 30 03:47:18.090212 systemd-logind[1485]: Removed session 7. Apr 30 03:47:18.390323 kubelet[2723]: I0430 03:47:18.390199 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nrc98" podStartSLOduration=1.39016362 podStartE2EDuration="1.39016362s" podCreationTimestamp="2025-04-30 03:47:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:47:18.376791334 +0000 UTC m=+6.211028011" watchObservedRunningTime="2025-04-30 03:47:18.39016362 +0000 UTC m=+6.224400267" Apr 30 03:47:18.547051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2383857910.mount: Deactivated successfully. Apr 30 03:47:18.962822 containerd[1500]: time="2025-04-30T03:47:18.962719455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-6tddm,Uid:6163aa22-b72a-46b0-9f94-233938dec868,Namespace:tigera-operator,Attempt:0,}" Apr 30 03:47:19.020729 containerd[1500]: time="2025-04-30T03:47:19.019935695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:47:19.020729 containerd[1500]: time="2025-04-30T03:47:19.020041553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:47:19.020729 containerd[1500]: time="2025-04-30T03:47:19.020131722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:19.020729 containerd[1500]: time="2025-04-30T03:47:19.020403503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:19.064124 systemd[1]: Started cri-containerd-aba9bdf83e8a30a5702d0215d49d8543f638b85e381b4efd51ed7f883d611611.scope - libcontainer container aba9bdf83e8a30a5702d0215d49d8543f638b85e381b4efd51ed7f883d611611. Apr 30 03:47:19.126835 containerd[1500]: time="2025-04-30T03:47:19.126645130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-6tddm,Uid:6163aa22-b72a-46b0-9f94-233938dec868,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"aba9bdf83e8a30a5702d0215d49d8543f638b85e381b4efd51ed7f883d611611\"" Apr 30 03:47:19.129060 containerd[1500]: time="2025-04-30T03:47:19.128972694Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" Apr 30 03:47:21.460946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1055008995.mount: Deactivated successfully. Apr 30 03:47:21.827132 containerd[1500]: time="2025-04-30T03:47:21.826104911Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:21.827505 containerd[1500]: time="2025-04-30T03:47:21.827251239Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" Apr 30 03:47:21.828178 containerd[1500]: time="2025-04-30T03:47:21.828147439Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:21.831044 containerd[1500]: time="2025-04-30T03:47:21.831016066Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:21.831701 containerd[1500]: time="2025-04-30T03:47:21.831662929Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.702656964s" Apr 30 03:47:21.831775 containerd[1500]: time="2025-04-30T03:47:21.831764349Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" Apr 30 03:47:21.838933 containerd[1500]: time="2025-04-30T03:47:21.838894144Z" level=info msg="CreateContainer within sandbox \"aba9bdf83e8a30a5702d0215d49d8543f638b85e381b4efd51ed7f883d611611\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 30 03:47:21.852464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2629313291.mount: Deactivated successfully. 
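The pull above resolves the tag quay.io/tigera/operator:v1.36.7 to a digest and reports both the bytes transferred (22002662) and the stored size (21998657) before handing the image to the sandbox. Reproducing the same pull directly against containerd with its Go client, assuming the default socket and the "k8s.io" namespace the CRI plugin stores images in, might look like:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	// WithPullUnpack unpacks the layers into a snapshot, matching the
	// ImageCreate/stop-pulling sequence recorded above.
	img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.36.7",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), "->", img.Target().Digest)
}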
Apr 30 03:47:21.853381 containerd[1500]: time="2025-04-30T03:47:21.853196524Z" level=info msg="CreateContainer within sandbox \"aba9bdf83e8a30a5702d0215d49d8543f638b85e381b4efd51ed7f883d611611\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9774597ae3c1baecabbb3c89373ca9766c5d038420e582be3480aed2fbf74f74\"" Apr 30 03:47:21.854210 containerd[1500]: time="2025-04-30T03:47:21.854136656Z" level=info msg="StartContainer for \"9774597ae3c1baecabbb3c89373ca9766c5d038420e582be3480aed2fbf74f74\"" Apr 30 03:47:21.889851 systemd[1]: Started cri-containerd-9774597ae3c1baecabbb3c89373ca9766c5d038420e582be3480aed2fbf74f74.scope - libcontainer container 9774597ae3c1baecabbb3c89373ca9766c5d038420e582be3480aed2fbf74f74. Apr 30 03:47:21.914501 containerd[1500]: time="2025-04-30T03:47:21.914445575Z" level=info msg="StartContainer for \"9774597ae3c1baecabbb3c89373ca9766c5d038420e582be3480aed2fbf74f74\" returns successfully" Apr 30 03:47:22.463028 kubelet[2723]: I0430 03:47:22.462929 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-6tddm" podStartSLOduration=2.753929334 podStartE2EDuration="5.462906005s" podCreationTimestamp="2025-04-30 03:47:17 +0000 UTC" firstStartedPulling="2025-04-30 03:47:19.128180297 +0000 UTC m=+6.962416914" lastFinishedPulling="2025-04-30 03:47:21.837156977 +0000 UTC m=+9.671393585" observedRunningTime="2025-04-30 03:47:22.455097228 +0000 UTC m=+10.289333875" watchObservedRunningTime="2025-04-30 03:47:22.462906005 +0000 UTC m=+10.297142632" Apr 30 03:47:25.171162 systemd[1]: Created slice kubepods-besteffort-pod351204da_93b2_4dc2_8f24_de361a69a92b.slice - libcontainer container kubepods-besteffort-pod351204da_93b2_4dc2_8f24_de361a69a92b.slice. Apr 30 03:47:25.178530 systemd[1]: Created slice kubepods-besteffort-pod64740bc6_ef82_440a_9986_14e9a7878d23.slice - libcontainer container kubepods-besteffort-pod64740bc6_ef82_440a_9986_14e9a7878d23.slice. 
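The kubepods-besteffort-pod<uid>.slice units created above follow a fixed convention under the systemd cgroup driver ("CgroupDriver":"systemd" in the container-manager config earlier in the log): the pod's QoS class selects the parent slice, and the pod UID, with dashes escaped to underscores, forms the leaf. A small illustrative reconstruction of that naming, not the kubelet's actual code:

package main

import (
	"fmt"
	"strings"
)

// sliceForBestEffortPod reconstructs the naming visible in the log:
// dashes in the pod UID become underscores and the result is nested
// under the besteffort QoS slice. Illustrative only.
func sliceForBestEffortPod(podUID string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	fmt.Println(sliceForBestEffortPod("351204da-93b2-4dc2-8f24-de361a69a92b"))
	// Output: kubepods-besteffort-pod351204da_93b2_4dc2_8f24_de361a69a92b.slice
}

The output matches the slice systemd reports for calico-node-hjz68 in the records that follow.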
Apr 30 03:47:25.181840 kubelet[2723]: I0430 03:47:25.181821 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/351204da-93b2-4dc2-8f24-de361a69a92b-cni-bin-dir\") pod \"calico-node-hjz68\" (UID: \"351204da-93b2-4dc2-8f24-de361a69a92b\") " pod="calico-system/calico-node-hjz68" Apr 30 03:47:25.182530 kubelet[2723]: I0430 03:47:25.182172 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/351204da-93b2-4dc2-8f24-de361a69a92b-flexvol-driver-host\") pod \"calico-node-hjz68\" (UID: \"351204da-93b2-4dc2-8f24-de361a69a92b\") " pod="calico-system/calico-node-hjz68" Apr 30 03:47:25.182530 kubelet[2723]: I0430 03:47:25.182193 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/351204da-93b2-4dc2-8f24-de361a69a92b-var-lib-calico\") pod \"calico-node-hjz68\" (UID: \"351204da-93b2-4dc2-8f24-de361a69a92b\") " pod="calico-system/calico-node-hjz68" Apr 30 03:47:25.182530 kubelet[2723]: I0430 03:47:25.182209 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpdvv\" (UniqueName: \"kubernetes.io/projected/351204da-93b2-4dc2-8f24-de361a69a92b-kube-api-access-lpdvv\") pod \"calico-node-hjz68\" (UID: \"351204da-93b2-4dc2-8f24-de361a69a92b\") " pod="calico-system/calico-node-hjz68" Apr 30 03:47:25.182530 kubelet[2723]: I0430 03:47:25.182223 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/351204da-93b2-4dc2-8f24-de361a69a92b-policysync\") pod \"calico-node-hjz68\" (UID: \"351204da-93b2-4dc2-8f24-de361a69a92b\") " pod="calico-system/calico-node-hjz68" Apr 30 03:47:25.182530 kubelet[2723]: I0430 03:47:25.182236 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/64740bc6-ef82-440a-9986-14e9a7878d23-typha-certs\") pod \"calico-typha-86f46594fd-dcpgz\" (UID: \"64740bc6-ef82-440a-9986-14e9a7878d23\") " pod="calico-system/calico-typha-86f46594fd-dcpgz" Apr 30 03:47:25.182766 kubelet[2723]: I0430 03:47:25.182250 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/351204da-93b2-4dc2-8f24-de361a69a92b-var-run-calico\") pod \"calico-node-hjz68\" (UID: \"351204da-93b2-4dc2-8f24-de361a69a92b\") " pod="calico-system/calico-node-hjz68" Apr 30 03:47:25.182766 kubelet[2723]: I0430 03:47:25.182265 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64740bc6-ef82-440a-9986-14e9a7878d23-tigera-ca-bundle\") pod \"calico-typha-86f46594fd-dcpgz\" (UID: \"64740bc6-ef82-440a-9986-14e9a7878d23\") " pod="calico-system/calico-typha-86f46594fd-dcpgz" Apr 30 03:47:25.182766 kubelet[2723]: I0430 03:47:25.182278 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/351204da-93b2-4dc2-8f24-de361a69a92b-cni-net-dir\") pod \"calico-node-hjz68\" (UID: \"351204da-93b2-4dc2-8f24-de361a69a92b\") " pod="calico-system/calico-node-hjz68" Apr 30 03:47:25.182766 
kubelet[2723]: I0430 03:47:25.182294 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ms77\" (UniqueName: \"kubernetes.io/projected/64740bc6-ef82-440a-9986-14e9a7878d23-kube-api-access-2ms77\") pod \"calico-typha-86f46594fd-dcpgz\" (UID: \"64740bc6-ef82-440a-9986-14e9a7878d23\") " pod="calico-system/calico-typha-86f46594fd-dcpgz" Apr 30 03:47:25.182766 kubelet[2723]: I0430 03:47:25.182309 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/351204da-93b2-4dc2-8f24-de361a69a92b-lib-modules\") pod \"calico-node-hjz68\" (UID: \"351204da-93b2-4dc2-8f24-de361a69a92b\") " pod="calico-system/calico-node-hjz68" Apr 30 03:47:25.182866 kubelet[2723]: I0430 03:47:25.182321 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/351204da-93b2-4dc2-8f24-de361a69a92b-xtables-lock\") pod \"calico-node-hjz68\" (UID: \"351204da-93b2-4dc2-8f24-de361a69a92b\") " pod="calico-system/calico-node-hjz68" Apr 30 03:47:25.182866 kubelet[2723]: I0430 03:47:25.182333 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/351204da-93b2-4dc2-8f24-de361a69a92b-cni-log-dir\") pod \"calico-node-hjz68\" (UID: \"351204da-93b2-4dc2-8f24-de361a69a92b\") " pod="calico-system/calico-node-hjz68" Apr 30 03:47:25.182866 kubelet[2723]: I0430 03:47:25.182357 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/351204da-93b2-4dc2-8f24-de361a69a92b-node-certs\") pod \"calico-node-hjz68\" (UID: \"351204da-93b2-4dc2-8f24-de361a69a92b\") " pod="calico-system/calico-node-hjz68" Apr 30 03:47:25.182866 kubelet[2723]: I0430 03:47:25.182371 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/351204da-93b2-4dc2-8f24-de361a69a92b-tigera-ca-bundle\") pod \"calico-node-hjz68\" (UID: \"351204da-93b2-4dc2-8f24-de361a69a92b\") " pod="calico-system/calico-node-hjz68" Apr 30 03:47:25.275366 kubelet[2723]: I0430 03:47:25.275295 2723 status_manager.go:890] "Failed to get status for pod" podUID="91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb" pod="calico-system/csi-node-driver-kshpv" err="pods \"csi-node-driver-kshpv\" is forbidden: User \"system:node:ci-4081-3-3-b-745f04f342\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-3-b-745f04f342' and this object" Apr 30 03:47:25.276689 kubelet[2723]: E0430 03:47:25.276561 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kshpv" podUID="91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb" Apr 30 03:47:25.283597 kubelet[2723]: I0430 03:47:25.282938 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbgb8\" (UniqueName: \"kubernetes.io/projected/91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb-kube-api-access-rbgb8\") pod \"csi-node-driver-kshpv\" (UID: \"91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb\") " 
pod="calico-system/csi-node-driver-kshpv" Apr 30 03:47:25.283597 kubelet[2723]: I0430 03:47:25.282977 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb-registration-dir\") pod \"csi-node-driver-kshpv\" (UID: \"91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb\") " pod="calico-system/csi-node-driver-kshpv" Apr 30 03:47:25.283597 kubelet[2723]: I0430 03:47:25.283015 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb-kubelet-dir\") pod \"csi-node-driver-kshpv\" (UID: \"91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb\") " pod="calico-system/csi-node-driver-kshpv" Apr 30 03:47:25.283597 kubelet[2723]: I0430 03:47:25.283055 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb-socket-dir\") pod \"csi-node-driver-kshpv\" (UID: \"91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb\") " pod="calico-system/csi-node-driver-kshpv" Apr 30 03:47:25.283597 kubelet[2723]: I0430 03:47:25.283115 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb-varrun\") pod \"csi-node-driver-kshpv\" (UID: \"91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb\") " pod="calico-system/csi-node-driver-kshpv" Apr 30 03:47:25.299002 kubelet[2723]: E0430 03:47:25.298960 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:47:25.299002 kubelet[2723]: W0430 03:47:25.298994 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:47:25.299191 kubelet[2723]: E0430 03:47:25.299025 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:47:25.306273 kubelet[2723]: E0430 03:47:25.301132 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:47:25.306273 kubelet[2723]: W0430 03:47:25.301145 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:47:25.306273 kubelet[2723]: E0430 03:47:25.303744 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:47:25.306273 kubelet[2723]: E0430 03:47:25.303778 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:47:25.306273 kubelet[2723]: W0430 03:47:25.303784 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:47:25.306273 kubelet[2723]: E0430 03:47:25.305898 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:47:25.310694 kubelet[2723]: E0430 03:47:25.310639 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:47:25.310871 kubelet[2723]: W0430 03:47:25.310857 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:47:25.310995 kubelet[2723]: E0430 03:47:25.310935 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:47:25.393835 kubelet[2723]: E0430 03:47:25.393812 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:47:25.393835 kubelet[2723]: W0430 03:47:25.393828 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:47:25.394460 kubelet[2723]: E0430 03:47:25.394435 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:47:25.403538 kubelet[2723]: E0430 03:47:25.403500 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:47:25.403538 kubelet[2723]: W0430 03:47:25.403522 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:47:25.403538 kubelet[2723]: E0430 03:47:25.403547 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:47:25.478761 containerd[1500]: time="2025-04-30T03:47:25.478556391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hjz68,Uid:351204da-93b2-4dc2-8f24-de361a69a92b,Namespace:calico-system,Attempt:0,}" Apr 30 03:47:25.488529 containerd[1500]: time="2025-04-30T03:47:25.488316056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86f46594fd-dcpgz,Uid:64740bc6-ef82-440a-9986-14e9a7878d23,Namespace:calico-system,Attempt:0,}" Apr 30 03:47:25.519775 containerd[1500]: time="2025-04-30T03:47:25.518408556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:47:25.519775 containerd[1500]: time="2025-04-30T03:47:25.518506820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:47:25.519775 containerd[1500]: time="2025-04-30T03:47:25.518521929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:25.519775 containerd[1500]: time="2025-04-30T03:47:25.518607409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:25.524020 containerd[1500]: time="2025-04-30T03:47:25.523575893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:47:25.524020 containerd[1500]: time="2025-04-30T03:47:25.523649351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:47:25.524020 containerd[1500]: time="2025-04-30T03:47:25.523664299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:25.524020 containerd[1500]: time="2025-04-30T03:47:25.523771049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:25.559814 systemd[1]: Started cri-containerd-342ab55aaf5ecf0ae631fc530c1d7dadd293f54ebdfd854f143675acb0e41670.scope - libcontainer container 342ab55aaf5ecf0ae631fc530c1d7dadd293f54ebdfd854f143675acb0e41670. Apr 30 03:47:25.560818 systemd[1]: Started cri-containerd-7edda1c7475125ac4bbf4a939c85ffbd22fa174d654ea6fbf40f74248ecd2850.scope - libcontainer container 7edda1c7475125ac4bbf4a939c85ffbd22fa174d654ea6fbf40f74248ecd2850. Apr 30 03:47:25.619211 containerd[1500]: time="2025-04-30T03:47:25.619160822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hjz68,Uid:351204da-93b2-4dc2-8f24-de361a69a92b,Namespace:calico-system,Attempt:0,} returns sandbox id \"342ab55aaf5ecf0ae631fc530c1d7dadd293f54ebdfd854f143675acb0e41670\"" Apr 30 03:47:25.626969 containerd[1500]: time="2025-04-30T03:47:25.626739508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" Apr 30 03:47:25.643374 containerd[1500]: time="2025-04-30T03:47:25.643033934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86f46594fd-dcpgz,Uid:64740bc6-ef82-440a-9986-14e9a7878d23,Namespace:calico-system,Attempt:0,} returns sandbox id \"7edda1c7475125ac4bbf4a939c85ffbd22fa174d654ea6fbf40f74248ecd2850\"" Apr 30 03:47:27.302638 kubelet[2723]: E0430 03:47:27.302568 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kshpv" podUID="91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb" Apr 30 03:47:27.631197 containerd[1500]: time="2025-04-30T03:47:27.630832976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:27.632454 containerd[1500]: time="2025-04-30T03:47:27.632391186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" Apr 30 03:47:27.633877 containerd[1500]: time="2025-04-30T03:47:27.633823713Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:27.638500 containerd[1500]: time="2025-04-30T03:47:27.637645487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:27.638500 containerd[1500]: time="2025-04-30T03:47:27.638303220Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 2.011527324s" Apr 30 03:47:27.638500 containerd[1500]: time="2025-04-30T03:47:27.638362431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" Apr 30 03:47:27.642428 containerd[1500]: time="2025-04-30T03:47:27.642396382Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/typha:v3.29.3\"" Apr 30 03:47:27.643511 containerd[1500]: time="2025-04-30T03:47:27.643459446Z" level=info msg="CreateContainer within sandbox \"342ab55aaf5ecf0ae631fc530c1d7dadd293f54ebdfd854f143675acb0e41670\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 03:47:27.664557 containerd[1500]: time="2025-04-30T03:47:27.664495068Z" level=info msg="CreateContainer within sandbox \"342ab55aaf5ecf0ae631fc530c1d7dadd293f54ebdfd854f143675acb0e41670\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6e8c37fb3a53e7c321875fa64d037124e8d8fa38c92a06591efc95e670c20c5a\"" Apr 30 03:47:27.668144 containerd[1500]: time="2025-04-30T03:47:27.668085518Z" level=info msg="StartContainer for \"6e8c37fb3a53e7c321875fa64d037124e8d8fa38c92a06591efc95e670c20c5a\"" Apr 30 03:47:27.712835 systemd[1]: Started cri-containerd-6e8c37fb3a53e7c321875fa64d037124e8d8fa38c92a06591efc95e670c20c5a.scope - libcontainer container 6e8c37fb3a53e7c321875fa64d037124e8d8fa38c92a06591efc95e670c20c5a. Apr 30 03:47:27.750802 containerd[1500]: time="2025-04-30T03:47:27.750638711Z" level=info msg="StartContainer for \"6e8c37fb3a53e7c321875fa64d037124e8d8fa38c92a06591efc95e670c20c5a\" returns successfully" Apr 30 03:47:27.766757 systemd[1]: cri-containerd-6e8c37fb3a53e7c321875fa64d037124e8d8fa38c92a06591efc95e670c20c5a.scope: Deactivated successfully. Apr 30 03:47:27.792534 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e8c37fb3a53e7c321875fa64d037124e8d8fa38c92a06591efc95e670c20c5a-rootfs.mount: Deactivated successfully. Apr 30 03:47:27.820525 containerd[1500]: time="2025-04-30T03:47:27.820440651Z" level=info msg="shim disconnected" id=6e8c37fb3a53e7c321875fa64d037124e8d8fa38c92a06591efc95e670c20c5a namespace=k8s.io Apr 30 03:47:27.820525 containerd[1500]: time="2025-04-30T03:47:27.820509500Z" level=warning msg="cleaning up after shim disconnected" id=6e8c37fb3a53e7c321875fa64d037124e8d8fa38c92a06591efc95e670c20c5a namespace=k8s.io Apr 30 03:47:27.820525 containerd[1500]: time="2025-04-30T03:47:27.820520871Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:47:29.302875 kubelet[2723]: E0430 03:47:29.302818 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kshpv" podUID="91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb" Apr 30 03:47:29.681142 containerd[1500]: time="2025-04-30T03:47:29.681064658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:29.682356 containerd[1500]: time="2025-04-30T03:47:29.682285837Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" Apr 30 03:47:29.683386 containerd[1500]: time="2025-04-30T03:47:29.683335225Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:29.685279 containerd[1500]: time="2025-04-30T03:47:29.685261965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:29.685931 containerd[1500]: 
time="2025-04-30T03:47:29.685782261Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.043260454s" Apr 30 03:47:29.685931 containerd[1500]: time="2025-04-30T03:47:29.685820112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" Apr 30 03:47:29.686850 containerd[1500]: time="2025-04-30T03:47:29.686759103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 03:47:29.701382 containerd[1500]: time="2025-04-30T03:47:29.701160670Z" level=info msg="CreateContainer within sandbox \"7edda1c7475125ac4bbf4a939c85ffbd22fa174d654ea6fbf40f74248ecd2850\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 30 03:47:29.716860 containerd[1500]: time="2025-04-30T03:47:29.716741989Z" level=info msg="CreateContainer within sandbox \"7edda1c7475125ac4bbf4a939c85ffbd22fa174d654ea6fbf40f74248ecd2850\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b87bd0032077369d3ced71339b6f8c92ad17acded0ac4a066d2f385ab811ec16\"" Apr 30 03:47:29.717721 containerd[1500]: time="2025-04-30T03:47:29.717488117Z" level=info msg="StartContainer for \"b87bd0032077369d3ced71339b6f8c92ad17acded0ac4a066d2f385ab811ec16\"" Apr 30 03:47:29.770822 systemd[1]: Started cri-containerd-b87bd0032077369d3ced71339b6f8c92ad17acded0ac4a066d2f385ab811ec16.scope - libcontainer container b87bd0032077369d3ced71339b6f8c92ad17acded0ac4a066d2f385ab811ec16. 
Apr 30 03:47:29.815915 containerd[1500]: time="2025-04-30T03:47:29.815861678Z" level=info msg="StartContainer for \"b87bd0032077369d3ced71339b6f8c92ad17acded0ac4a066d2f385ab811ec16\" returns successfully" Apr 30 03:47:30.437178 kubelet[2723]: I0430 03:47:30.437105 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-86f46594fd-dcpgz" podStartSLOduration=1.394883077 podStartE2EDuration="5.437086974s" podCreationTimestamp="2025-04-30 03:47:25 +0000 UTC" firstStartedPulling="2025-04-30 03:47:25.644405455 +0000 UTC m=+13.478642062" lastFinishedPulling="2025-04-30 03:47:29.686609352 +0000 UTC m=+17.520845959" observedRunningTime="2025-04-30 03:47:30.43629554 +0000 UTC m=+18.270532188" watchObservedRunningTime="2025-04-30 03:47:30.437086974 +0000 UTC m=+18.271323571" Apr 30 03:47:31.306799 kubelet[2723]: E0430 03:47:31.303478 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kshpv" podUID="91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb" Apr 30 03:47:31.427640 kubelet[2723]: I0430 03:47:31.427604 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:47:33.274773 containerd[1500]: time="2025-04-30T03:47:33.274689419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:33.276395 containerd[1500]: time="2025-04-30T03:47:33.276309987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" Apr 30 03:47:33.277578 containerd[1500]: time="2025-04-30T03:47:33.277283832Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:33.280219 containerd[1500]: time="2025-04-30T03:47:33.280187175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:33.281205 containerd[1500]: time="2025-04-30T03:47:33.281162914Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 3.594378824s" Apr 30 03:47:33.281205 containerd[1500]: time="2025-04-30T03:47:33.281200505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" Apr 30 03:47:33.286600 containerd[1500]: time="2025-04-30T03:47:33.286567726Z" level=info msg="CreateContainer within sandbox \"342ab55aaf5ecf0ae631fc530c1d7dadd293f54ebdfd854f143675acb0e41670\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 03:47:33.303000 kubelet[2723]: E0430 03:47:33.302954 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-kshpv" podUID="91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb" Apr 30 03:47:33.308694 containerd[1500]: time="2025-04-30T03:47:33.307935854Z" level=info msg="CreateContainer within sandbox \"342ab55aaf5ecf0ae631fc530c1d7dadd293f54ebdfd854f143675acb0e41670\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"277a65bc6d4094706c35db2df23848d18abadecaac27e3b84ec1dc955c6a7c89\"" Apr 30 03:47:33.313524 containerd[1500]: time="2025-04-30T03:47:33.312701839Z" level=info msg="StartContainer for \"277a65bc6d4094706c35db2df23848d18abadecaac27e3b84ec1dc955c6a7c89\"" Apr 30 03:47:33.399751 systemd[1]: run-containerd-runc-k8s.io-277a65bc6d4094706c35db2df23848d18abadecaac27e3b84ec1dc955c6a7c89-runc.DbhzWP.mount: Deactivated successfully. Apr 30 03:47:33.413445 systemd[1]: Started cri-containerd-277a65bc6d4094706c35db2df23848d18abadecaac27e3b84ec1dc955c6a7c89.scope - libcontainer container 277a65bc6d4094706c35db2df23848d18abadecaac27e3b84ec1dc955c6a7c89. Apr 30 03:47:33.456921 containerd[1500]: time="2025-04-30T03:47:33.456868860Z" level=info msg="StartContainer for \"277a65bc6d4094706c35db2df23848d18abadecaac27e3b84ec1dc955c6a7c89\" returns successfully" Apr 30 03:47:33.882473 systemd[1]: cri-containerd-277a65bc6d4094706c35db2df23848d18abadecaac27e3b84ec1dc955c6a7c89.scope: Deactivated successfully. Apr 30 03:47:33.969968 kubelet[2723]: I0430 03:47:33.969913 2723 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Apr 30 03:47:34.000968 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-277a65bc6d4094706c35db2df23848d18abadecaac27e3b84ec1dc955c6a7c89-rootfs.mount: Deactivated successfully. Apr 30 03:47:34.025560 systemd[1]: Created slice kubepods-burstable-pod2c604747_1402_4ef2_98f9_261b059a82b6.slice - libcontainer container kubepods-burstable-pod2c604747_1402_4ef2_98f9_261b059a82b6.slice. Apr 30 03:47:34.047412 systemd[1]: Created slice kubepods-besteffort-pod2d5e3e80_1cba_451a_b60d_21a267c83978.slice - libcontainer container kubepods-besteffort-pod2d5e3e80_1cba_451a_b60d_21a267c83978.slice. Apr 30 03:47:34.053224 systemd[1]: Created slice kubepods-besteffort-pod9c67ed2a_f626_477b_8cd7_24b48436df7d.slice - libcontainer container kubepods-besteffort-pod9c67ed2a_f626_477b_8cd7_24b48436df7d.slice. Apr 30 03:47:34.061642 systemd[1]: Created slice kubepods-burstable-pod5e9de684_fac2_4edf_bed1_82122c48751b.slice - libcontainer container kubepods-burstable-pod5e9de684_fac2_4edf_bed1_82122c48751b.slice. 
Apr 30 03:47:34.064924 kubelet[2723]: I0430 03:47:34.063217 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj5jb\" (UniqueName: \"kubernetes.io/projected/2c604747-1402-4ef2-98f9-261b059a82b6-kube-api-access-rj5jb\") pod \"coredns-668d6bf9bc-6c5cb\" (UID: \"2c604747-1402-4ef2-98f9-261b059a82b6\") " pod="kube-system/coredns-668d6bf9bc-6c5cb" Apr 30 03:47:34.064924 kubelet[2723]: I0430 03:47:34.063270 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e9de684-fac2-4edf-bed1-82122c48751b-config-volume\") pod \"coredns-668d6bf9bc-8bcmd\" (UID: \"5e9de684-fac2-4edf-bed1-82122c48751b\") " pod="kube-system/coredns-668d6bf9bc-8bcmd" Apr 30 03:47:34.064924 kubelet[2723]: I0430 03:47:34.063317 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d5e3e80-1cba-451a-b60d-21a267c83978-tigera-ca-bundle\") pod \"calico-kube-controllers-584b945cdb-9ntmp\" (UID: \"2d5e3e80-1cba-451a-b60d-21a267c83978\") " pod="calico-system/calico-kube-controllers-584b945cdb-9ntmp" Apr 30 03:47:34.064924 kubelet[2723]: I0430 03:47:34.063348 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkm8d\" (UniqueName: \"kubernetes.io/projected/4c3f5153-aeae-460f-bcd5-59ddd7a16065-kube-api-access-jkm8d\") pod \"calico-apiserver-55d565cbf-zxwkz\" (UID: \"4c3f5153-aeae-460f-bcd5-59ddd7a16065\") " pod="calico-apiserver/calico-apiserver-55d565cbf-zxwkz" Apr 30 03:47:34.064924 kubelet[2723]: I0430 03:47:34.063380 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c604747-1402-4ef2-98f9-261b059a82b6-config-volume\") pod \"coredns-668d6bf9bc-6c5cb\" (UID: \"2c604747-1402-4ef2-98f9-261b059a82b6\") " pod="kube-system/coredns-668d6bf9bc-6c5cb" Apr 30 03:47:34.065175 kubelet[2723]: I0430 03:47:34.063397 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9c67ed2a-f626-477b-8cd7-24b48436df7d-calico-apiserver-certs\") pod \"calico-apiserver-55d565cbf-48sql\" (UID: \"9c67ed2a-f626-477b-8cd7-24b48436df7d\") " pod="calico-apiserver/calico-apiserver-55d565cbf-48sql" Apr 30 03:47:34.065175 kubelet[2723]: I0430 03:47:34.063412 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nct7\" (UniqueName: \"kubernetes.io/projected/5e9de684-fac2-4edf-bed1-82122c48751b-kube-api-access-6nct7\") pod \"coredns-668d6bf9bc-8bcmd\" (UID: \"5e9de684-fac2-4edf-bed1-82122c48751b\") " pod="kube-system/coredns-668d6bf9bc-8bcmd" Apr 30 03:47:34.065175 kubelet[2723]: I0430 03:47:34.063428 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4c3f5153-aeae-460f-bcd5-59ddd7a16065-calico-apiserver-certs\") pod \"calico-apiserver-55d565cbf-zxwkz\" (UID: \"4c3f5153-aeae-460f-bcd5-59ddd7a16065\") " pod="calico-apiserver/calico-apiserver-55d565cbf-zxwkz" Apr 30 03:47:34.065175 kubelet[2723]: I0430 03:47:34.063460 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c8mw\" 
(UniqueName: \"kubernetes.io/projected/9c67ed2a-f626-477b-8cd7-24b48436df7d-kube-api-access-9c8mw\") pod \"calico-apiserver-55d565cbf-48sql\" (UID: \"9c67ed2a-f626-477b-8cd7-24b48436df7d\") " pod="calico-apiserver/calico-apiserver-55d565cbf-48sql" Apr 30 03:47:34.065175 kubelet[2723]: I0430 03:47:34.063478 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsx7d\" (UniqueName: \"kubernetes.io/projected/2d5e3e80-1cba-451a-b60d-21a267c83978-kube-api-access-rsx7d\") pod \"calico-kube-controllers-584b945cdb-9ntmp\" (UID: \"2d5e3e80-1cba-451a-b60d-21a267c83978\") " pod="calico-system/calico-kube-controllers-584b945cdb-9ntmp" Apr 30 03:47:34.071014 containerd[1500]: time="2025-04-30T03:47:34.070356451Z" level=info msg="shim disconnected" id=277a65bc6d4094706c35db2df23848d18abadecaac27e3b84ec1dc955c6a7c89 namespace=k8s.io Apr 30 03:47:34.071014 containerd[1500]: time="2025-04-30T03:47:34.070412796Z" level=warning msg="cleaning up after shim disconnected" id=277a65bc6d4094706c35db2df23848d18abadecaac27e3b84ec1dc955c6a7c89 namespace=k8s.io Apr 30 03:47:34.071014 containerd[1500]: time="2025-04-30T03:47:34.070420050Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:47:34.072000 systemd[1]: Created slice kubepods-besteffort-pod4c3f5153_aeae_460f_bcd5_59ddd7a16065.slice - libcontainer container kubepods-besteffort-pod4c3f5153_aeae_460f_bcd5_59ddd7a16065.slice. Apr 30 03:47:34.340198 containerd[1500]: time="2025-04-30T03:47:34.340037088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6c5cb,Uid:2c604747-1402-4ef2-98f9-261b059a82b6,Namespace:kube-system,Attempt:0,}" Apr 30 03:47:34.352613 containerd[1500]: time="2025-04-30T03:47:34.352547060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-584b945cdb-9ntmp,Uid:2d5e3e80-1cba-451a-b60d-21a267c83978,Namespace:calico-system,Attempt:0,}" Apr 30 03:47:34.375621 containerd[1500]: time="2025-04-30T03:47:34.375560251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8bcmd,Uid:5e9de684-fac2-4edf-bed1-82122c48751b,Namespace:kube-system,Attempt:0,}" Apr 30 03:47:34.381416 containerd[1500]: time="2025-04-30T03:47:34.381363080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d565cbf-zxwkz,Uid:4c3f5153-aeae-460f-bcd5-59ddd7a16065,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:47:34.392736 containerd[1500]: time="2025-04-30T03:47:34.392496320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d565cbf-48sql,Uid:9c67ed2a-f626-477b-8cd7-24b48436df7d,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:47:34.482609 containerd[1500]: time="2025-04-30T03:47:34.482562263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 03:47:34.651925 containerd[1500]: time="2025-04-30T03:47:34.651864797Z" level=error msg="Failed to destroy network for sandbox \"0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.652171 containerd[1500]: time="2025-04-30T03:47:34.652121999Z" level=error msg="Failed to destroy network for sandbox \"2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.655583 containerd[1500]: time="2025-04-30T03:47:34.655544475Z" level=error msg="encountered an error cleaning up failed sandbox \"0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.655654 containerd[1500]: time="2025-04-30T03:47:34.655602413Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8bcmd,Uid:5e9de684-fac2-4edf-bed1-82122c48751b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.655850 containerd[1500]: time="2025-04-30T03:47:34.655818929Z" level=error msg="encountered an error cleaning up failed sandbox \"2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.655950 containerd[1500]: time="2025-04-30T03:47:34.655934395Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d565cbf-48sql,Uid:9c67ed2a-f626-477b-8cd7-24b48436df7d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.664071 containerd[1500]: time="2025-04-30T03:47:34.661864041Z" level=error msg="Failed to destroy network for sandbox \"713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.664071 containerd[1500]: time="2025-04-30T03:47:34.662526834Z" level=error msg="encountered an error cleaning up failed sandbox \"713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.664071 containerd[1500]: time="2025-04-30T03:47:34.662562671Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-584b945cdb-9ntmp,Uid:2d5e3e80-1cba-451a-b60d-21a267c83978,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.664071 containerd[1500]: time="2025-04-30T03:47:34.662631260Z" level=error 
msg="Failed to destroy network for sandbox \"3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.664071 containerd[1500]: time="2025-04-30T03:47:34.662986896Z" level=error msg="encountered an error cleaning up failed sandbox \"3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.664071 containerd[1500]: time="2025-04-30T03:47:34.663037201Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d565cbf-zxwkz,Uid:4c3f5153-aeae-460f-bcd5-59ddd7a16065,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.664382 kubelet[2723]: E0430 03:47:34.661971 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.664382 kubelet[2723]: E0430 03:47:34.662041 2723 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8bcmd" Apr 30 03:47:34.664382 kubelet[2723]: E0430 03:47:34.662066 2723 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8bcmd" Apr 30 03:47:34.665448 kubelet[2723]: E0430 03:47:34.662115 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8bcmd_kube-system(5e9de684-fac2-4edf-bed1-82122c48751b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-8bcmd_kube-system(5e9de684-fac2-4edf-bed1-82122c48751b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8bcmd" podUID="5e9de684-fac2-4edf-bed1-82122c48751b" Apr 30 03:47:34.665448 
kubelet[2723]: E0430 03:47:34.662696 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.665448 kubelet[2723]: E0430 03:47:34.662722 2723 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55d565cbf-48sql" Apr 30 03:47:34.665545 kubelet[2723]: E0430 03:47:34.662737 2723 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55d565cbf-48sql" Apr 30 03:47:34.665545 kubelet[2723]: E0430 03:47:34.662784 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55d565cbf-48sql_calico-apiserver(9c67ed2a-f626-477b-8cd7-24b48436df7d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55d565cbf-48sql_calico-apiserver(9c67ed2a-f626-477b-8cd7-24b48436df7d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55d565cbf-48sql" podUID="9c67ed2a-f626-477b-8cd7-24b48436df7d" Apr 30 03:47:34.665545 kubelet[2723]: E0430 03:47:34.663148 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.665648 kubelet[2723]: E0430 03:47:34.663177 2723 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55d565cbf-zxwkz" Apr 30 03:47:34.665648 kubelet[2723]: E0430 03:47:34.663188 2723 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55d565cbf-zxwkz" Apr 30 03:47:34.665648 kubelet[2723]: E0430 03:47:34.663208 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55d565cbf-zxwkz_calico-apiserver(4c3f5153-aeae-460f-bcd5-59ddd7a16065)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55d565cbf-zxwkz_calico-apiserver(4c3f5153-aeae-460f-bcd5-59ddd7a16065)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55d565cbf-zxwkz" podUID="4c3f5153-aeae-460f-bcd5-59ddd7a16065" Apr 30 03:47:34.666093 kubelet[2723]: E0430 03:47:34.663231 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.666093 kubelet[2723]: E0430 03:47:34.663248 2723 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-584b945cdb-9ntmp" Apr 30 03:47:34.666093 kubelet[2723]: E0430 03:47:34.663258 2723 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-584b945cdb-9ntmp" Apr 30 03:47:34.666169 kubelet[2723]: E0430 03:47:34.663280 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-584b945cdb-9ntmp_calico-system(2d5e3e80-1cba-451a-b60d-21a267c83978)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-584b945cdb-9ntmp_calico-system(2d5e3e80-1cba-451a-b60d-21a267c83978)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-584b945cdb-9ntmp" podUID="2d5e3e80-1cba-451a-b60d-21a267c83978" Apr 30 03:47:34.669704 containerd[1500]: time="2025-04-30T03:47:34.669634398Z" level=error msg="Failed to destroy network for sandbox \"b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.670080 containerd[1500]: time="2025-04-30T03:47:34.670045549Z" level=error msg="encountered an error cleaning up failed sandbox \"b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.670134 containerd[1500]: time="2025-04-30T03:47:34.670099390Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6c5cb,Uid:2c604747-1402-4ef2-98f9-261b059a82b6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.670296 kubelet[2723]: E0430 03:47:34.670257 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.670296 kubelet[2723]: E0430 03:47:34.670295 2723 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6c5cb" Apr 30 03:47:34.670296 kubelet[2723]: E0430 03:47:34.670310 2723 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6c5cb" Apr 30 03:47:34.670471 kubelet[2723]: E0430 03:47:34.670351 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6c5cb_kube-system(2c604747-1402-4ef2-98f9-261b059a82b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6c5cb_kube-system(2c604747-1402-4ef2-98f9-261b059a82b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6c5cb" podUID="2c604747-1402-4ef2-98f9-261b059a82b6" Apr 30 03:47:35.303979 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a-shm.mount: Deactivated successfully. 
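Every failed sandbox in this stretch has the same root cause spelled out in the error text: the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico/node writes only once it is running with /var/lib/calico mounted, and at this point the calico-node pod is still starting. A sketch of an equivalent readiness check, as a hypothetical helper rather than Calico's actual implementation:

package main

import (
	"fmt"
	"os"
)

// calicoNodeReady mirrors the check the CNI plugin reports failing above:
// the nodename file only exists after calico/node has started and mounted
// /var/lib/calico into its container.
func calicoNodeReady() error {
	if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
		return fmt.Errorf("calico/node not ready: %w", err)
	}
	return nil
}

func main() {
	if err := calicoNodeReady(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("nodename present; CNI add/delete calls can proceed")
}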
Apr 30 03:47:35.304148 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62-shm.mount: Deactivated successfully. Apr 30 03:47:35.313319 systemd[1]: Created slice kubepods-besteffort-pod91a9afba_5d9a_48f2_ad03_1bd0e9fa98bb.slice - libcontainer container kubepods-besteffort-pod91a9afba_5d9a_48f2_ad03_1bd0e9fa98bb.slice. Apr 30 03:47:35.316407 containerd[1500]: time="2025-04-30T03:47:35.316358322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kshpv,Uid:91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb,Namespace:calico-system,Attempt:0,}" Apr 30 03:47:35.416718 containerd[1500]: time="2025-04-30T03:47:35.416576317Z" level=error msg="Failed to destroy network for sandbox \"1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:35.419851 containerd[1500]: time="2025-04-30T03:47:35.417137529Z" level=error msg="encountered an error cleaning up failed sandbox \"1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:35.419851 containerd[1500]: time="2025-04-30T03:47:35.417213081Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kshpv,Uid:91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:35.420177 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102-shm.mount: Deactivated successfully. 
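The kubelet entries that follow show the same containerd error after its trip across the CRI: an error carried over gRPC without an explicit status code surfaces as codes.Unknown, which is where the recurring "rpc error: code = Unknown desc = ..." prefix comes from. A minimal reproduction, assuming only the standard google.golang.org/grpc status package:

package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func main() {
	// A plain error carries no gRPC status,
	plain := errors.New(`failed to setup network for sandbox: plugin type="calico" failed (add)`)
	// so status.Code falls back to codes.Unknown,
	fmt.Println(status.Code(plain) == codes.Unknown) // true
	// and the wrapped form prints the prefix seen throughout this log.
	fmt.Println(status.Error(status.Code(plain), plain.Error()))
	// rpc error: code = Unknown desc = failed to setup network for sandbox: ...
}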
Apr 30 03:47:35.422163 kubelet[2723]: E0430 03:47:35.420159 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:35.422163 kubelet[2723]: E0430 03:47:35.420259 2723 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kshpv" Apr 30 03:47:35.422163 kubelet[2723]: E0430 03:47:35.420304 2723 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kshpv" Apr 30 03:47:35.422370 kubelet[2723]: E0430 03:47:35.420403 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kshpv_calico-system(91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kshpv_calico-system(91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kshpv" podUID="91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb" Apr 30 03:47:35.484506 kubelet[2723]: I0430 03:47:35.484437 2723 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" Apr 30 03:47:35.492863 kubelet[2723]: I0430 03:47:35.492549 2723 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" Apr 30 03:47:35.496026 containerd[1500]: time="2025-04-30T03:47:35.495961948Z" level=info msg="StopPodSandbox for \"713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a\"" Apr 30 03:47:35.500812 containerd[1500]: time="2025-04-30T03:47:35.498974135Z" level=info msg="Ensure that sandbox 713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a in task-service has been cleanup successfully" Apr 30 03:47:35.502536 containerd[1500]: time="2025-04-30T03:47:35.502496378Z" level=info msg="StopPodSandbox for \"2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90\"" Apr 30 03:47:35.502935 containerd[1500]: time="2025-04-30T03:47:35.502900916Z" level=info msg="Ensure that sandbox 2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90 in task-service has been cleanup successfully" Apr 30 03:47:35.505794 kubelet[2723]: I0430 03:47:35.503869 2723 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" Apr 30 03:47:35.507301 containerd[1500]: time="2025-04-30T03:47:35.507198162Z" level=info msg="StopPodSandbox for \"1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102\"" Apr 30 03:47:35.507598 containerd[1500]: time="2025-04-30T03:47:35.507543490Z" level=info msg="Ensure that sandbox 1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102 in task-service has been cleanup successfully" Apr 30 03:47:35.516456 kubelet[2723]: I0430 03:47:35.516401 2723 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" Apr 30 03:47:35.521135 containerd[1500]: time="2025-04-30T03:47:35.520504437Z" level=info msg="StopPodSandbox for \"b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62\"" Apr 30 03:47:35.521904 containerd[1500]: time="2025-04-30T03:47:35.521883131Z" level=info msg="Ensure that sandbox b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62 in task-service has been cleanup successfully" Apr 30 03:47:35.526510 kubelet[2723]: I0430 03:47:35.526462 2723 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" Apr 30 03:47:35.528496 containerd[1500]: time="2025-04-30T03:47:35.527941038Z" level=info msg="StopPodSandbox for \"3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f\"" Apr 30 03:47:35.531565 containerd[1500]: time="2025-04-30T03:47:35.531519777Z" level=info msg="Ensure that sandbox 3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f in task-service has been cleanup successfully" Apr 30 03:47:35.539485 kubelet[2723]: I0430 03:47:35.539439 2723 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" Apr 30 03:47:35.541832 containerd[1500]: time="2025-04-30T03:47:35.541280615Z" level=info msg="StopPodSandbox for \"0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915\"" Apr 30 03:47:35.542254 containerd[1500]: time="2025-04-30T03:47:35.541971440Z" level=info msg="Ensure that sandbox 0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915 in task-service has been cleanup successfully" Apr 30 03:47:35.604916 containerd[1500]: time="2025-04-30T03:47:35.604760468Z" level=error msg="StopPodSandbox for \"1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102\" failed" error="failed to destroy network for sandbox \"1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:35.607079 kubelet[2723]: E0430 03:47:35.605411 2723 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" Apr 30 03:47:35.607079 kubelet[2723]: E0430 03:47:35.605501 2723 kuberuntime_manager.go:1546] "Failed 
to stop sandbox" podSandboxID={"Type":"containerd","ID":"1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102"} Apr 30 03:47:35.607079 kubelet[2723]: E0430 03:47:35.605572 2723 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:47:35.607079 kubelet[2723]: E0430 03:47:35.605602 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kshpv" podUID="91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb" Apr 30 03:47:35.627905 containerd[1500]: time="2025-04-30T03:47:35.627833392Z" level=error msg="StopPodSandbox for \"3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f\" failed" error="failed to destroy network for sandbox \"3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:35.628501 containerd[1500]: time="2025-04-30T03:47:35.627980118Z" level=error msg="StopPodSandbox for \"713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a\" failed" error="failed to destroy network for sandbox \"713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:35.628697 kubelet[2723]: E0430 03:47:35.628638 2723 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" Apr 30 03:47:35.628763 kubelet[2723]: E0430 03:47:35.628713 2723 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a"} Apr 30 03:47:35.628763 kubelet[2723]: E0430 03:47:35.628746 2723 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2d5e3e80-1cba-451a-b60d-21a267c83978\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:47:35.628999 kubelet[2723]: E0430 03:47:35.628781 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2d5e3e80-1cba-451a-b60d-21a267c83978\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-584b945cdb-9ntmp" podUID="2d5e3e80-1cba-451a-b60d-21a267c83978" Apr 30 03:47:35.628999 kubelet[2723]: E0430 03:47:35.628812 2723 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" Apr 30 03:47:35.628999 kubelet[2723]: E0430 03:47:35.628824 2723 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f"} Apr 30 03:47:35.628999 kubelet[2723]: E0430 03:47:35.628843 2723 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4c3f5153-aeae-460f-bcd5-59ddd7a16065\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:47:35.629114 kubelet[2723]: E0430 03:47:35.628860 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4c3f5153-aeae-460f-bcd5-59ddd7a16065\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55d565cbf-zxwkz" podUID="4c3f5153-aeae-460f-bcd5-59ddd7a16065" Apr 30 03:47:35.636044 containerd[1500]: time="2025-04-30T03:47:35.635894474Z" level=error msg="StopPodSandbox for \"2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90\" failed" error="failed to destroy network for sandbox \"2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:35.636044 containerd[1500]: time="2025-04-30T03:47:35.635898651Z" level=error msg="StopPodSandbox for \"b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62\" failed" error="failed to destroy network for sandbox \"b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:35.636353 kubelet[2723]: E0430 03:47:35.636174 2723 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" Apr 30 03:47:35.636353 kubelet[2723]: E0430 03:47:35.636203 2723 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90"} Apr 30 03:47:35.636353 kubelet[2723]: E0430 03:47:35.636227 2723 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c67ed2a-f626-477b-8cd7-24b48436df7d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:47:35.636353 kubelet[2723]: E0430 03:47:35.636245 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c67ed2a-f626-477b-8cd7-24b48436df7d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55d565cbf-48sql" podUID="9c67ed2a-f626-477b-8cd7-24b48436df7d" Apr 30 03:47:35.636548 kubelet[2723]: E0430 03:47:35.636144 2723 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" Apr 30 03:47:35.636548 kubelet[2723]: E0430 03:47:35.636284 2723 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62"} Apr 30 03:47:35.636548 kubelet[2723]: E0430 03:47:35.636301 2723 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2c604747-1402-4ef2-98f9-261b059a82b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:47:35.636548 kubelet[2723]: E0430 03:47:35.636316 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"2c604747-1402-4ef2-98f9-261b059a82b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6c5cb" podUID="2c604747-1402-4ef2-98f9-261b059a82b6" Apr 30 03:47:35.642297 containerd[1500]: time="2025-04-30T03:47:35.641898770Z" level=error msg="StopPodSandbox for \"0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915\" failed" error="failed to destroy network for sandbox \"0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:35.642460 kubelet[2723]: E0430 03:47:35.642123 2723 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" Apr 30 03:47:35.642460 kubelet[2723]: E0430 03:47:35.642164 2723 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915"} Apr 30 03:47:35.642460 kubelet[2723]: E0430 03:47:35.642192 2723 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5e9de684-fac2-4edf-bed1-82122c48751b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:47:35.642460 kubelet[2723]: E0430 03:47:35.642252 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5e9de684-fac2-4edf-bed1-82122c48751b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8bcmd" podUID="5e9de684-fac2-4edf-bed1-82122c48751b" Apr 30 03:47:41.543426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4235256875.mount: Deactivated successfully. 
Apr 30 03:47:41.655112 containerd[1500]: time="2025-04-30T03:47:41.642042672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:41.659060 containerd[1500]: time="2025-04-30T03:47:41.659012235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" Apr 30 03:47:41.681839 containerd[1500]: time="2025-04-30T03:47:41.681781000Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:41.684944 containerd[1500]: time="2025-04-30T03:47:41.684877505Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:41.686140 containerd[1500]: time="2025-04-30T03:47:41.685578159Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 7.202958568s" Apr 30 03:47:41.686140 containerd[1500]: time="2025-04-30T03:47:41.685616521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" Apr 30 03:47:41.849685 containerd[1500]: time="2025-04-30T03:47:41.849496748Z" level=info msg="CreateContainer within sandbox \"342ab55aaf5ecf0ae631fc530c1d7dadd293f54ebdfd854f143675acb0e41670\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 03:47:42.053953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2460741565.mount: Deactivated successfully. Apr 30 03:47:42.128368 containerd[1500]: time="2025-04-30T03:47:42.128192151Z" level=info msg="CreateContainer within sandbox \"342ab55aaf5ecf0ae631fc530c1d7dadd293f54ebdfd854f143675acb0e41670\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"29197d1caa13c2b7d41914bd106f280be7f6e5706959bc8c00792a65025e6067\"" Apr 30 03:47:42.138164 containerd[1500]: time="2025-04-30T03:47:42.136989263Z" level=info msg="StartContainer for \"29197d1caa13c2b7d41914bd106f280be7f6e5706959bc8c00792a65025e6067\"" Apr 30 03:47:42.324935 systemd[1]: Started cri-containerd-29197d1caa13c2b7d41914bd106f280be7f6e5706959bc8c00792a65025e6067.scope - libcontainer container 29197d1caa13c2b7d41914bd106f280be7f6e5706959bc8c00792a65025e6067. Apr 30 03:47:42.423379 containerd[1500]: time="2025-04-30T03:47:42.423342574Z" level=info msg="StartContainer for \"29197d1caa13c2b7d41914bd106f280be7f6e5706959bc8c00792a65025e6067\" returns successfully" Apr 30 03:47:42.553467 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 03:47:42.554618 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
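For scale: the pull above reports 144068748 bytes read in 7.202958568 s, i.e. 144068748 / 7.202958568 ≈ 2.0e7 B/s, roughly 20 MB/s. The 138-byte gap to the stored image size (144068610) is presumably manifest and config metadata counted by the fetcher rather than layer data.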
Apr 30 03:47:42.668089 kubelet[2723]: I0430 03:47:42.664975 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hjz68" podStartSLOduration=1.556121842 podStartE2EDuration="17.630277978s" podCreationTimestamp="2025-04-30 03:47:25 +0000 UTC" firstStartedPulling="2025-04-30 03:47:25.626232459 +0000 UTC m=+13.460469056" lastFinishedPulling="2025-04-30 03:47:41.700388564 +0000 UTC m=+29.534625192" observedRunningTime="2025-04-30 03:47:42.629730863 +0000 UTC m=+30.463967470" watchObservedRunningTime="2025-04-30 03:47:42.630277978 +0000 UTC m=+30.464514586" Apr 30 03:47:43.571132 kubelet[2723]: I0430 03:47:43.571083 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:47:43.823999 kubelet[2723]: I0430 03:47:43.823126 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:47:44.256698 kernel: bpftool[3914]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 03:47:44.562449 systemd-networkd[1397]: vxlan.calico: Link UP Apr 30 03:47:44.562462 systemd-networkd[1397]: vxlan.calico: Gained carrier Apr 30 03:47:45.681964 systemd-networkd[1397]: vxlan.calico: Gained IPv6LL Apr 30 03:47:46.305790 containerd[1500]: time="2025-04-30T03:47:46.305663404Z" level=info msg="StopPodSandbox for \"2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90\"" Apr 30 03:47:46.629305 containerd[1500]: 2025-04-30 03:47:46.442 [INFO][4000] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" Apr 30 03:47:46.629305 containerd[1500]: 2025-04-30 03:47:46.444 [INFO][4000] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" iface="eth0" netns="/var/run/netns/cni-d4595f61-1a61-fcb5-226f-f65e4c322e60" Apr 30 03:47:46.629305 containerd[1500]: 2025-04-30 03:47:46.444 [INFO][4000] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" iface="eth0" netns="/var/run/netns/cni-d4595f61-1a61-fcb5-226f-f65e4c322e60" Apr 30 03:47:46.629305 containerd[1500]: 2025-04-30 03:47:46.446 [INFO][4000] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" iface="eth0" netns="/var/run/netns/cni-d4595f61-1a61-fcb5-226f-f65e4c322e60" Apr 30 03:47:46.629305 containerd[1500]: 2025-04-30 03:47:46.446 [INFO][4000] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" Apr 30 03:47:46.629305 containerd[1500]: 2025-04-30 03:47:46.446 [INFO][4000] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" Apr 30 03:47:46.629305 containerd[1500]: 2025-04-30 03:47:46.604 [INFO][4007] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" HandleID="k8s-pod-network.2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--48sql-eth0" Apr 30 03:47:46.629305 containerd[1500]: 2025-04-30 03:47:46.606 [INFO][4007] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Apr 30 03:47:46.629305 containerd[1500]: 2025-04-30 03:47:46.607 [INFO][4007] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:47:46.629305 containerd[1500]: 2025-04-30 03:47:46.620 [WARNING][4007] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" HandleID="k8s-pod-network.2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--48sql-eth0" Apr 30 03:47:46.629305 containerd[1500]: 2025-04-30 03:47:46.620 [INFO][4007] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" HandleID="k8s-pod-network.2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--48sql-eth0" Apr 30 03:47:46.629305 containerd[1500]: 2025-04-30 03:47:46.622 [INFO][4007] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:47:46.629305 containerd[1500]: 2025-04-30 03:47:46.624 [INFO][4000] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" Apr 30 03:47:46.629305 containerd[1500]: time="2025-04-30T03:47:46.627304557Z" level=info msg="TearDown network for sandbox \"2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90\" successfully" Apr 30 03:47:46.629305 containerd[1500]: time="2025-04-30T03:47:46.627340905Z" level=info msg="StopPodSandbox for \"2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90\" returns successfully" Apr 30 03:47:46.631511 systemd[1]: run-netns-cni\x2dd4595f61\x2d1a61\x2dfcb5\x2d226f\x2df65e4c322e60.mount: Deactivated successfully. 
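The pod_startup_latency_tracker entry at 03:47:42 above decodes cleanly: image pulling ran from m=+13.460469056 to m=+29.534625192, that is 29.534625192 - 13.460469056 = 16.074156136 s, and podStartE2EDuration minus that pull time is 17.630277978 - 16.074156136 = 1.556121842 s, exactly the reported podStartSLOduration. In other words, the SLO figure is end-to-end pod startup with image-pull time excluded.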
Apr 30 03:47:46.634388 containerd[1500]: time="2025-04-30T03:47:46.633017678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d565cbf-48sql,Uid:9c67ed2a-f626-477b-8cd7-24b48436df7d,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:47:46.822580 systemd-networkd[1397]: cali63c405e3130: Link UP Apr 30 03:47:46.823185 systemd-networkd[1397]: cali63c405e3130: Gained carrier Apr 30 03:47:46.840656 containerd[1500]: 2025-04-30 03:47:46.738 [INFO][4013] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--48sql-eth0 calico-apiserver-55d565cbf- calico-apiserver 9c67ed2a-f626-477b-8cd7-24b48436df7d 757 0 2025-04-30 03:47:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55d565cbf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-b-745f04f342 calico-apiserver-55d565cbf-48sql eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali63c405e3130 [] []}} ContainerID="6ee04da061ca898722513b82c26ac46511067f53b667cf8eecf8508de39a1c31" Namespace="calico-apiserver" Pod="calico-apiserver-55d565cbf-48sql" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--48sql-" Apr 30 03:47:46.840656 containerd[1500]: 2025-04-30 03:47:46.738 [INFO][4013] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6ee04da061ca898722513b82c26ac46511067f53b667cf8eecf8508de39a1c31" Namespace="calico-apiserver" Pod="calico-apiserver-55d565cbf-48sql" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--48sql-eth0" Apr 30 03:47:46.840656 containerd[1500]: 2025-04-30 03:47:46.773 [INFO][4025] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ee04da061ca898722513b82c26ac46511067f53b667cf8eecf8508de39a1c31" HandleID="k8s-pod-network.6ee04da061ca898722513b82c26ac46511067f53b667cf8eecf8508de39a1c31" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--48sql-eth0" Apr 30 03:47:46.840656 containerd[1500]: 2025-04-30 03:47:46.782 [INFO][4025] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6ee04da061ca898722513b82c26ac46511067f53b667cf8eecf8508de39a1c31" HandleID="k8s-pod-network.6ee04da061ca898722513b82c26ac46511067f53b667cf8eecf8508de39a1c31" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--48sql-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334d90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-b-745f04f342", "pod":"calico-apiserver-55d565cbf-48sql", "timestamp":"2025-04-30 03:47:46.773382727 +0000 UTC"}, Hostname:"ci-4081-3-3-b-745f04f342", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:47:46.840656 containerd[1500]: 2025-04-30 03:47:46.783 [INFO][4025] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:47:46.840656 containerd[1500]: 2025-04-30 03:47:46.783 [INFO][4025] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:47:46.840656 containerd[1500]: 2025-04-30 03:47:46.783 [INFO][4025] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-b-745f04f342' Apr 30 03:47:46.840656 containerd[1500]: 2025-04-30 03:47:46.786 [INFO][4025] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6ee04da061ca898722513b82c26ac46511067f53b667cf8eecf8508de39a1c31" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:46.840656 containerd[1500]: 2025-04-30 03:47:46.793 [INFO][4025] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:46.840656 containerd[1500]: 2025-04-30 03:47:46.797 [INFO][4025] ipam/ipam.go 489: Trying affinity for 192.168.94.192/26 host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:46.840656 containerd[1500]: 2025-04-30 03:47:46.799 [INFO][4025] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.192/26 host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:46.840656 containerd[1500]: 2025-04-30 03:47:46.801 [INFO][4025] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.192/26 host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:46.840656 containerd[1500]: 2025-04-30 03:47:46.801 [INFO][4025] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.192/26 handle="k8s-pod-network.6ee04da061ca898722513b82c26ac46511067f53b667cf8eecf8508de39a1c31" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:46.840656 containerd[1500]: 2025-04-30 03:47:46.803 [INFO][4025] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6ee04da061ca898722513b82c26ac46511067f53b667cf8eecf8508de39a1c31 Apr 30 03:47:46.840656 containerd[1500]: 2025-04-30 03:47:46.809 [INFO][4025] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.192/26 handle="k8s-pod-network.6ee04da061ca898722513b82c26ac46511067f53b667cf8eecf8508de39a1c31" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:46.840656 containerd[1500]: 2025-04-30 03:47:46.815 [INFO][4025] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.193/26] block=192.168.94.192/26 handle="k8s-pod-network.6ee04da061ca898722513b82c26ac46511067f53b667cf8eecf8508de39a1c31" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:46.840656 containerd[1500]: 2025-04-30 03:47:46.816 [INFO][4025] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.193/26] handle="k8s-pod-network.6ee04da061ca898722513b82c26ac46511067f53b667cf8eecf8508de39a1c31" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:46.840656 containerd[1500]: 2025-04-30 03:47:46.816 [INFO][4025] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
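The ipam.go trace above is Calico's block-affinity allocation in order: look up and confirm this host's affinity to block 192.168.94.192/26, load the block, assign one address from it, create a handle to own the claim, and write the block back to the datastore. A toy sketch of that first-free-address walk (not Calico's actual code; the pre-claimed .192 is an assumption added so the toy lands on .193 as the trace does):

package main

import (
	"fmt"
	"net/netip"
)

// block is a toy stand-in for a Calico IPAM block: a CIDR this host has
// affinity for, plus a record of which handle claimed each address.
type block struct {
	cidr      netip.Prefix
	allocated map[netip.Addr]string
}

// assign claims the first free address for handleID, mirroring the
// "Attempting to assign 1 addresses from block" and "Writing block in
// order to claim IPs" steps logged above.
func (b *block) assign(handleID string) (netip.Addr, bool) {
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.allocated[a]; !taken {
			b.allocated[a] = handleID
			return a, true
		}
	}
	return netip.Addr{}, false // block exhausted: caller would try another block
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.94.192/26"),
		// Assumption for illustration: .192 was already claimed earlier
		// (the log never shows by whom), so the next pod gets .193.
		allocated: map[netip.Addr]string{
			netip.MustParseAddr("192.168.94.192"): "already-claimed",
		},
	}
	addr, ok := b.assign("k8s-pod-network.6ee04da061ca8987") // truncated handle ID from the log
	fmt.Println(addr, ok)                                    // 192.168.94.193 true, matching the trace
}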
Apr 30 03:47:46.840656 containerd[1500]: 2025-04-30 03:47:46.816 [INFO][4025] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.193/26] IPv6=[] ContainerID="6ee04da061ca898722513b82c26ac46511067f53b667cf8eecf8508de39a1c31" HandleID="k8s-pod-network.6ee04da061ca898722513b82c26ac46511067f53b667cf8eecf8508de39a1c31" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--48sql-eth0" Apr 30 03:47:46.841204 containerd[1500]: 2025-04-30 03:47:46.818 [INFO][4013] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6ee04da061ca898722513b82c26ac46511067f53b667cf8eecf8508de39a1c31" Namespace="calico-apiserver" Pod="calico-apiserver-55d565cbf-48sql" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--48sql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--48sql-eth0", GenerateName:"calico-apiserver-55d565cbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c67ed2a-f626-477b-8cd7-24b48436df7d", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d565cbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-b-745f04f342", ContainerID:"", Pod:"calico-apiserver-55d565cbf-48sql", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali63c405e3130", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:47:46.841204 containerd[1500]: 2025-04-30 03:47:46.819 [INFO][4013] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.193/32] ContainerID="6ee04da061ca898722513b82c26ac46511067f53b667cf8eecf8508de39a1c31" Namespace="calico-apiserver" Pod="calico-apiserver-55d565cbf-48sql" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--48sql-eth0" Apr 30 03:47:46.841204 containerd[1500]: 2025-04-30 03:47:46.819 [INFO][4013] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63c405e3130 ContainerID="6ee04da061ca898722513b82c26ac46511067f53b667cf8eecf8508de39a1c31" Namespace="calico-apiserver" Pod="calico-apiserver-55d565cbf-48sql" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--48sql-eth0" Apr 30 03:47:46.841204 containerd[1500]: 2025-04-30 03:47:46.823 [INFO][4013] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ee04da061ca898722513b82c26ac46511067f53b667cf8eecf8508de39a1c31" Namespace="calico-apiserver" Pod="calico-apiserver-55d565cbf-48sql" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--48sql-eth0" Apr 30 03:47:46.841204 containerd[1500]: 2025-04-30 03:47:46.824 [INFO][4013] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="6ee04da061ca898722513b82c26ac46511067f53b667cf8eecf8508de39a1c31" Namespace="calico-apiserver" Pod="calico-apiserver-55d565cbf-48sql" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--48sql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--48sql-eth0", GenerateName:"calico-apiserver-55d565cbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c67ed2a-f626-477b-8cd7-24b48436df7d", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d565cbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-b-745f04f342", ContainerID:"6ee04da061ca898722513b82c26ac46511067f53b667cf8eecf8508de39a1c31", Pod:"calico-apiserver-55d565cbf-48sql", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali63c405e3130", MAC:"86:cb:8a:39:1b:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:47:46.841204 containerd[1500]: 2025-04-30 03:47:46.834 [INFO][4013] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6ee04da061ca898722513b82c26ac46511067f53b667cf8eecf8508de39a1c31" Namespace="calico-apiserver" Pod="calico-apiserver-55d565cbf-48sql" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--48sql-eth0" Apr 30 03:47:46.874300 containerd[1500]: time="2025-04-30T03:47:46.874008433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:47:46.874300 containerd[1500]: time="2025-04-30T03:47:46.874065630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:47:46.874300 containerd[1500]: time="2025-04-30T03:47:46.874082962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:46.874300 containerd[1500]: time="2025-04-30T03:47:46.874165748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:46.893831 systemd[1]: Started cri-containerd-6ee04da061ca898722513b82c26ac46511067f53b667cf8eecf8508de39a1c31.scope - libcontainer container 6ee04da061ca898722513b82c26ac46511067f53b667cf8eecf8508de39a1c31. 
Apr 30 03:47:46.934451 containerd[1500]: time="2025-04-30T03:47:46.934305204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d565cbf-48sql,Uid:9c67ed2a-f626-477b-8cd7-24b48436df7d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6ee04da061ca898722513b82c26ac46511067f53b667cf8eecf8508de39a1c31\"" Apr 30 03:47:46.936950 containerd[1500]: time="2025-04-30T03:47:46.936911099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:47:48.113874 systemd-networkd[1397]: cali63c405e3130: Gained IPv6LL Apr 30 03:47:48.307325 containerd[1500]: time="2025-04-30T03:47:48.304955826Z" level=info msg="StopPodSandbox for \"0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915\"" Apr 30 03:47:48.307325 containerd[1500]: time="2025-04-30T03:47:48.306836502Z" level=info msg="StopPodSandbox for \"3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f\"" Apr 30 03:47:48.451628 containerd[1500]: 2025-04-30 03:47:48.403 [INFO][4114] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" Apr 30 03:47:48.451628 containerd[1500]: 2025-04-30 03:47:48.405 [INFO][4114] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" iface="eth0" netns="/var/run/netns/cni-759d376f-ad04-f14f-b984-47e5c86d6334" Apr 30 03:47:48.451628 containerd[1500]: 2025-04-30 03:47:48.405 [INFO][4114] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" iface="eth0" netns="/var/run/netns/cni-759d376f-ad04-f14f-b984-47e5c86d6334" Apr 30 03:47:48.451628 containerd[1500]: 2025-04-30 03:47:48.405 [INFO][4114] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" iface="eth0" netns="/var/run/netns/cni-759d376f-ad04-f14f-b984-47e5c86d6334" Apr 30 03:47:48.451628 containerd[1500]: 2025-04-30 03:47:48.405 [INFO][4114] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" Apr 30 03:47:48.451628 containerd[1500]: 2025-04-30 03:47:48.406 [INFO][4114] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" Apr 30 03:47:48.451628 containerd[1500]: 2025-04-30 03:47:48.434 [INFO][4134] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" HandleID="k8s-pod-network.3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--zxwkz-eth0" Apr 30 03:47:48.451628 containerd[1500]: 2025-04-30 03:47:48.434 [INFO][4134] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:47:48.451628 containerd[1500]: 2025-04-30 03:47:48.435 [INFO][4134] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:47:48.451628 containerd[1500]: 2025-04-30 03:47:48.441 [WARNING][4134] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" HandleID="k8s-pod-network.3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--zxwkz-eth0" Apr 30 03:47:48.451628 containerd[1500]: 2025-04-30 03:47:48.442 [INFO][4134] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" HandleID="k8s-pod-network.3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--zxwkz-eth0" Apr 30 03:47:48.451628 containerd[1500]: 2025-04-30 03:47:48.445 [INFO][4134] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:47:48.451628 containerd[1500]: 2025-04-30 03:47:48.448 [INFO][4114] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" Apr 30 03:47:48.455236 containerd[1500]: time="2025-04-30T03:47:48.451794432Z" level=info msg="TearDown network for sandbox \"3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f\" successfully" Apr 30 03:47:48.455236 containerd[1500]: time="2025-04-30T03:47:48.451820030Z" level=info msg="StopPodSandbox for \"3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f\" returns successfully" Apr 30 03:47:48.455236 containerd[1500]: time="2025-04-30T03:47:48.453062159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d565cbf-zxwkz,Uid:4c3f5153-aeae-460f-bcd5-59ddd7a16065,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:47:48.458532 systemd[1]: run-netns-cni\x2d759d376f\x2dad04\x2df14f\x2db984\x2d47e5c86d6334.mount: Deactivated successfully. Apr 30 03:47:48.469466 containerd[1500]: 2025-04-30 03:47:48.400 [INFO][4115] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" Apr 30 03:47:48.469466 containerd[1500]: 2025-04-30 03:47:48.400 [INFO][4115] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" iface="eth0" netns="/var/run/netns/cni-159da3f9-9532-fe90-4747-379020f34d22" Apr 30 03:47:48.469466 containerd[1500]: 2025-04-30 03:47:48.401 [INFO][4115] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" iface="eth0" netns="/var/run/netns/cni-159da3f9-9532-fe90-4747-379020f34d22" Apr 30 03:47:48.469466 containerd[1500]: 2025-04-30 03:47:48.401 [INFO][4115] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" iface="eth0" netns="/var/run/netns/cni-159da3f9-9532-fe90-4747-379020f34d22" Apr 30 03:47:48.469466 containerd[1500]: 2025-04-30 03:47:48.401 [INFO][4115] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" Apr 30 03:47:48.469466 containerd[1500]: 2025-04-30 03:47:48.402 [INFO][4115] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" Apr 30 03:47:48.469466 containerd[1500]: 2025-04-30 03:47:48.440 [INFO][4129] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" HandleID="k8s-pod-network.0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" Workload="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--8bcmd-eth0" Apr 30 03:47:48.469466 containerd[1500]: 2025-04-30 03:47:48.440 [INFO][4129] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:47:48.469466 containerd[1500]: 2025-04-30 03:47:48.445 [INFO][4129] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:47:48.469466 containerd[1500]: 2025-04-30 03:47:48.462 [WARNING][4129] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" HandleID="k8s-pod-network.0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" Workload="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--8bcmd-eth0" Apr 30 03:47:48.469466 containerd[1500]: 2025-04-30 03:47:48.463 [INFO][4129] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" HandleID="k8s-pod-network.0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" Workload="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--8bcmd-eth0" Apr 30 03:47:48.469466 containerd[1500]: 2025-04-30 03:47:48.465 [INFO][4129] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:47:48.469466 containerd[1500]: 2025-04-30 03:47:48.467 [INFO][4115] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" Apr 30 03:47:48.470904 containerd[1500]: time="2025-04-30T03:47:48.470819650Z" level=info msg="TearDown network for sandbox \"0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915\" successfully" Apr 30 03:47:48.470904 containerd[1500]: time="2025-04-30T03:47:48.470887177Z" level=info msg="StopPodSandbox for \"0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915\" returns successfully" Apr 30 03:47:48.472855 containerd[1500]: time="2025-04-30T03:47:48.472596471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8bcmd,Uid:5e9de684-fac2-4edf-bed1-82122c48751b,Namespace:kube-system,Attempt:1,}" Apr 30 03:47:48.475973 systemd[1]: run-netns-cni\x2d159da3f9\x2d9532\x2dfe90\x2d4747\x2d379020f34d22.mount: Deactivated successfully. 
Apr 30 03:47:48.617972 systemd-networkd[1397]: cali1e2af2486eb: Link UP Apr 30 03:47:48.618287 systemd-networkd[1397]: cali1e2af2486eb: Gained carrier Apr 30 03:47:48.639897 containerd[1500]: 2025-04-30 03:47:48.537 [INFO][4143] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--zxwkz-eth0 calico-apiserver-55d565cbf- calico-apiserver 4c3f5153-aeae-460f-bcd5-59ddd7a16065 768 0 2025-04-30 03:47:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55d565cbf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-b-745f04f342 calico-apiserver-55d565cbf-zxwkz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1e2af2486eb [] []}} ContainerID="99736c38dc4f1c3d7ce7df23638d8b8eee563aa3d187b8882a3718c052ef35dd" Namespace="calico-apiserver" Pod="calico-apiserver-55d565cbf-zxwkz" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--zxwkz-" Apr 30 03:47:48.639897 containerd[1500]: 2025-04-30 03:47:48.538 [INFO][4143] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="99736c38dc4f1c3d7ce7df23638d8b8eee563aa3d187b8882a3718c052ef35dd" Namespace="calico-apiserver" Pod="calico-apiserver-55d565cbf-zxwkz" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--zxwkz-eth0" Apr 30 03:47:48.639897 containerd[1500]: 2025-04-30 03:47:48.573 [INFO][4168] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="99736c38dc4f1c3d7ce7df23638d8b8eee563aa3d187b8882a3718c052ef35dd" HandleID="k8s-pod-network.99736c38dc4f1c3d7ce7df23638d8b8eee563aa3d187b8882a3718c052ef35dd" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--zxwkz-eth0" Apr 30 03:47:48.639897 containerd[1500]: 2025-04-30 03:47:48.587 [INFO][4168] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="99736c38dc4f1c3d7ce7df23638d8b8eee563aa3d187b8882a3718c052ef35dd" HandleID="k8s-pod-network.99736c38dc4f1c3d7ce7df23638d8b8eee563aa3d187b8882a3718c052ef35dd" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--zxwkz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031baf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-b-745f04f342", "pod":"calico-apiserver-55d565cbf-zxwkz", "timestamp":"2025-04-30 03:47:48.573934706 +0000 UTC"}, Hostname:"ci-4081-3-3-b-745f04f342", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:47:48.639897 containerd[1500]: 2025-04-30 03:47:48.587 [INFO][4168] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:47:48.639897 containerd[1500]: 2025-04-30 03:47:48.587 [INFO][4168] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:47:48.639897 containerd[1500]: 2025-04-30 03:47:48.587 [INFO][4168] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-b-745f04f342' Apr 30 03:47:48.639897 containerd[1500]: 2025-04-30 03:47:48.589 [INFO][4168] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.99736c38dc4f1c3d7ce7df23638d8b8eee563aa3d187b8882a3718c052ef35dd" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:48.639897 containerd[1500]: 2025-04-30 03:47:48.594 [INFO][4168] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:48.639897 containerd[1500]: 2025-04-30 03:47:48.598 [INFO][4168] ipam/ipam.go 489: Trying affinity for 192.168.94.192/26 host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:48.639897 containerd[1500]: 2025-04-30 03:47:48.600 [INFO][4168] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.192/26 host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:48.639897 containerd[1500]: 2025-04-30 03:47:48.602 [INFO][4168] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.192/26 host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:48.639897 containerd[1500]: 2025-04-30 03:47:48.602 [INFO][4168] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.192/26 handle="k8s-pod-network.99736c38dc4f1c3d7ce7df23638d8b8eee563aa3d187b8882a3718c052ef35dd" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:48.639897 containerd[1500]: 2025-04-30 03:47:48.604 [INFO][4168] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.99736c38dc4f1c3d7ce7df23638d8b8eee563aa3d187b8882a3718c052ef35dd Apr 30 03:47:48.639897 containerd[1500]: 2025-04-30 03:47:48.607 [INFO][4168] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.192/26 handle="k8s-pod-network.99736c38dc4f1c3d7ce7df23638d8b8eee563aa3d187b8882a3718c052ef35dd" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:48.639897 containerd[1500]: 2025-04-30 03:47:48.612 [INFO][4168] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.194/26] block=192.168.94.192/26 handle="k8s-pod-network.99736c38dc4f1c3d7ce7df23638d8b8eee563aa3d187b8882a3718c052ef35dd" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:48.639897 containerd[1500]: 2025-04-30 03:47:48.613 [INFO][4168] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.194/26] handle="k8s-pod-network.99736c38dc4f1c3d7ce7df23638d8b8eee563aa3d187b8882a3718c052ef35dd" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:48.639897 containerd[1500]: 2025-04-30 03:47:48.613 [INFO][4168] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:47:48.639897 containerd[1500]: 2025-04-30 03:47:48.613 [INFO][4168] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.194/26] IPv6=[] ContainerID="99736c38dc4f1c3d7ce7df23638d8b8eee563aa3d187b8882a3718c052ef35dd" HandleID="k8s-pod-network.99736c38dc4f1c3d7ce7df23638d8b8eee563aa3d187b8882a3718c052ef35dd" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--zxwkz-eth0" Apr 30 03:47:48.641966 containerd[1500]: 2025-04-30 03:47:48.615 [INFO][4143] cni-plugin/k8s.go 386: Populated endpoint ContainerID="99736c38dc4f1c3d7ce7df23638d8b8eee563aa3d187b8882a3718c052ef35dd" Namespace="calico-apiserver" Pod="calico-apiserver-55d565cbf-zxwkz" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--zxwkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--zxwkz-eth0", GenerateName:"calico-apiserver-55d565cbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"4c3f5153-aeae-460f-bcd5-59ddd7a16065", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d565cbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-b-745f04f342", ContainerID:"", Pod:"calico-apiserver-55d565cbf-zxwkz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e2af2486eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:47:48.641966 containerd[1500]: 2025-04-30 03:47:48.615 [INFO][4143] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.194/32] ContainerID="99736c38dc4f1c3d7ce7df23638d8b8eee563aa3d187b8882a3718c052ef35dd" Namespace="calico-apiserver" Pod="calico-apiserver-55d565cbf-zxwkz" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--zxwkz-eth0" Apr 30 03:47:48.641966 containerd[1500]: 2025-04-30 03:47:48.615 [INFO][4143] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e2af2486eb ContainerID="99736c38dc4f1c3d7ce7df23638d8b8eee563aa3d187b8882a3718c052ef35dd" Namespace="calico-apiserver" Pod="calico-apiserver-55d565cbf-zxwkz" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--zxwkz-eth0" Apr 30 03:47:48.641966 containerd[1500]: 2025-04-30 03:47:48.618 [INFO][4143] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="99736c38dc4f1c3d7ce7df23638d8b8eee563aa3d187b8882a3718c052ef35dd" Namespace="calico-apiserver" Pod="calico-apiserver-55d565cbf-zxwkz" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--zxwkz-eth0" Apr 30 03:47:48.641966 containerd[1500]: 2025-04-30 03:47:48.619 [INFO][4143] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="99736c38dc4f1c3d7ce7df23638d8b8eee563aa3d187b8882a3718c052ef35dd" Namespace="calico-apiserver" Pod="calico-apiserver-55d565cbf-zxwkz" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--zxwkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--zxwkz-eth0", GenerateName:"calico-apiserver-55d565cbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"4c3f5153-aeae-460f-bcd5-59ddd7a16065", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d565cbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-b-745f04f342", ContainerID:"99736c38dc4f1c3d7ce7df23638d8b8eee563aa3d187b8882a3718c052ef35dd", Pod:"calico-apiserver-55d565cbf-zxwkz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e2af2486eb", MAC:"fa:3c:78:19:e3:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:47:48.641966 containerd[1500]: 2025-04-30 03:47:48.636 [INFO][4143] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="99736c38dc4f1c3d7ce7df23638d8b8eee563aa3d187b8882a3718c052ef35dd" Namespace="calico-apiserver" Pod="calico-apiserver-55d565cbf-zxwkz" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--zxwkz-eth0" Apr 30 03:47:48.700074 containerd[1500]: time="2025-04-30T03:47:48.699303630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:47:48.700074 containerd[1500]: time="2025-04-30T03:47:48.699404640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:47:48.700074 containerd[1500]: time="2025-04-30T03:47:48.699433053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:48.700074 containerd[1500]: time="2025-04-30T03:47:48.699589036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:48.732376 systemd[1]: Started cri-containerd-99736c38dc4f1c3d7ce7df23638d8b8eee563aa3d187b8882a3718c052ef35dd.scope - libcontainer container 99736c38dc4f1c3d7ce7df23638d8b8eee563aa3d187b8882a3718c052ef35dd. 
Apr 30 03:47:48.756420 systemd-networkd[1397]: cali557b782bde5: Link UP Apr 30 03:47:48.756598 systemd-networkd[1397]: cali557b782bde5: Gained carrier Apr 30 03:47:48.788878 containerd[1500]: 2025-04-30 03:47:48.552 [INFO][4154] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--8bcmd-eth0 coredns-668d6bf9bc- kube-system 5e9de684-fac2-4edf-bed1-82122c48751b 767 0 2025-04-30 03:47:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-b-745f04f342 coredns-668d6bf9bc-8bcmd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali557b782bde5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c4159dc0f7d432178be29f70d91307dadb4a0637af89834cc0d112c0f2e84e7c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bcmd" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--8bcmd-" Apr 30 03:47:48.788878 containerd[1500]: 2025-04-30 03:47:48.552 [INFO][4154] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c4159dc0f7d432178be29f70d91307dadb4a0637af89834cc0d112c0f2e84e7c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bcmd" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--8bcmd-eth0" Apr 30 03:47:48.788878 containerd[1500]: 2025-04-30 03:47:48.582 [INFO][4173] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4159dc0f7d432178be29f70d91307dadb4a0637af89834cc0d112c0f2e84e7c" HandleID="k8s-pod-network.c4159dc0f7d432178be29f70d91307dadb4a0637af89834cc0d112c0f2e84e7c" Workload="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--8bcmd-eth0" Apr 30 03:47:48.788878 containerd[1500]: 2025-04-30 03:47:48.593 [INFO][4173] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c4159dc0f7d432178be29f70d91307dadb4a0637af89834cc0d112c0f2e84e7c" HandleID="k8s-pod-network.c4159dc0f7d432178be29f70d91307dadb4a0637af89834cc0d112c0f2e84e7c" Workload="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--8bcmd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b40), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-b-745f04f342", "pod":"coredns-668d6bf9bc-8bcmd", "timestamp":"2025-04-30 03:47:48.582273118 +0000 UTC"}, Hostname:"ci-4081-3-3-b-745f04f342", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:47:48.788878 containerd[1500]: 2025-04-30 03:47:48.593 [INFO][4173] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:47:48.788878 containerd[1500]: 2025-04-30 03:47:48.613 [INFO][4173] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:47:48.788878 containerd[1500]: 2025-04-30 03:47:48.613 [INFO][4173] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-b-745f04f342' Apr 30 03:47:48.788878 containerd[1500]: 2025-04-30 03:47:48.693 [INFO][4173] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c4159dc0f7d432178be29f70d91307dadb4a0637af89834cc0d112c0f2e84e7c" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:48.788878 containerd[1500]: 2025-04-30 03:47:48.702 [INFO][4173] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:48.788878 containerd[1500]: 2025-04-30 03:47:48.712 [INFO][4173] ipam/ipam.go 489: Trying affinity for 192.168.94.192/26 host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:48.788878 containerd[1500]: 2025-04-30 03:47:48.715 [INFO][4173] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.192/26 host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:48.788878 containerd[1500]: 2025-04-30 03:47:48.722 [INFO][4173] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.192/26 host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:48.788878 containerd[1500]: 2025-04-30 03:47:48.722 [INFO][4173] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.192/26 handle="k8s-pod-network.c4159dc0f7d432178be29f70d91307dadb4a0637af89834cc0d112c0f2e84e7c" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:48.788878 containerd[1500]: 2025-04-30 03:47:48.729 [INFO][4173] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c4159dc0f7d432178be29f70d91307dadb4a0637af89834cc0d112c0f2e84e7c Apr 30 03:47:48.788878 containerd[1500]: 2025-04-30 03:47:48.736 [INFO][4173] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.192/26 handle="k8s-pod-network.c4159dc0f7d432178be29f70d91307dadb4a0637af89834cc0d112c0f2e84e7c" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:48.788878 containerd[1500]: 2025-04-30 03:47:48.743 [INFO][4173] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.195/26] block=192.168.94.192/26 handle="k8s-pod-network.c4159dc0f7d432178be29f70d91307dadb4a0637af89834cc0d112c0f2e84e7c" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:48.788878 containerd[1500]: 2025-04-30 03:47:48.743 [INFO][4173] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.195/26] handle="k8s-pod-network.c4159dc0f7d432178be29f70d91307dadb4a0637af89834cc0d112c0f2e84e7c" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:48.788878 containerd[1500]: 2025-04-30 03:47:48.743 [INFO][4173] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:47:48.788878 containerd[1500]: 2025-04-30 03:47:48.743 [INFO][4173] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.195/26] IPv6=[] ContainerID="c4159dc0f7d432178be29f70d91307dadb4a0637af89834cc0d112c0f2e84e7c" HandleID="k8s-pod-network.c4159dc0f7d432178be29f70d91307dadb4a0637af89834cc0d112c0f2e84e7c" Workload="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--8bcmd-eth0" Apr 30 03:47:48.789467 containerd[1500]: 2025-04-30 03:47:48.747 [INFO][4154] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c4159dc0f7d432178be29f70d91307dadb4a0637af89834cc0d112c0f2e84e7c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bcmd" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--8bcmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--8bcmd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5e9de684-fac2-4edf-bed1-82122c48751b", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-b-745f04f342", ContainerID:"", Pod:"coredns-668d6bf9bc-8bcmd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali557b782bde5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:47:48.789467 containerd[1500]: 2025-04-30 03:47:48.747 [INFO][4154] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.195/32] ContainerID="c4159dc0f7d432178be29f70d91307dadb4a0637af89834cc0d112c0f2e84e7c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bcmd" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--8bcmd-eth0" Apr 30 03:47:48.789467 containerd[1500]: 2025-04-30 03:47:48.747 [INFO][4154] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali557b782bde5 ContainerID="c4159dc0f7d432178be29f70d91307dadb4a0637af89834cc0d112c0f2e84e7c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bcmd" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--8bcmd-eth0" Apr 30 03:47:48.789467 containerd[1500]: 2025-04-30 03:47:48.755 [INFO][4154] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4159dc0f7d432178be29f70d91307dadb4a0637af89834cc0d112c0f2e84e7c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bcmd" 
WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--8bcmd-eth0" Apr 30 03:47:48.789467 containerd[1500]: 2025-04-30 03:47:48.756 [INFO][4154] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c4159dc0f7d432178be29f70d91307dadb4a0637af89834cc0d112c0f2e84e7c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bcmd" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--8bcmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--8bcmd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5e9de684-fac2-4edf-bed1-82122c48751b", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-b-745f04f342", ContainerID:"c4159dc0f7d432178be29f70d91307dadb4a0637af89834cc0d112c0f2e84e7c", Pod:"coredns-668d6bf9bc-8bcmd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali557b782bde5", MAC:"e6:92:ea:a7:f2:f4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:47:48.789467 containerd[1500]: 2025-04-30 03:47:48.783 [INFO][4154] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c4159dc0f7d432178be29f70d91307dadb4a0637af89834cc0d112c0f2e84e7c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bcmd" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--8bcmd-eth0" Apr 30 03:47:48.818521 containerd[1500]: time="2025-04-30T03:47:48.818476501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d565cbf-zxwkz,Uid:4c3f5153-aeae-460f-bcd5-59ddd7a16065,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"99736c38dc4f1c3d7ce7df23638d8b8eee563aa3d187b8882a3718c052ef35dd\"" Apr 30 03:47:48.846999 containerd[1500]: time="2025-04-30T03:47:48.846816892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:47:48.846999 containerd[1500]: time="2025-04-30T03:47:48.846904385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:47:48.846999 containerd[1500]: time="2025-04-30T03:47:48.846925104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:48.848080 containerd[1500]: time="2025-04-30T03:47:48.847978800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:48.871869 systemd[1]: Started cri-containerd-c4159dc0f7d432178be29f70d91307dadb4a0637af89834cc0d112c0f2e84e7c.scope - libcontainer container c4159dc0f7d432178be29f70d91307dadb4a0637af89834cc0d112c0f2e84e7c. Apr 30 03:47:48.917201 containerd[1500]: time="2025-04-30T03:47:48.917126826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8bcmd,Uid:5e9de684-fac2-4edf-bed1-82122c48751b,Namespace:kube-system,Attempt:1,} returns sandbox id \"c4159dc0f7d432178be29f70d91307dadb4a0637af89834cc0d112c0f2e84e7c\"" Apr 30 03:47:48.928934 containerd[1500]: time="2025-04-30T03:47:48.928844313Z" level=info msg="CreateContainer within sandbox \"c4159dc0f7d432178be29f70d91307dadb4a0637af89834cc0d112c0f2e84e7c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:47:48.954818 containerd[1500]: time="2025-04-30T03:47:48.954733849Z" level=info msg="CreateContainer within sandbox \"c4159dc0f7d432178be29f70d91307dadb4a0637af89834cc0d112c0f2e84e7c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e6c6b654f123f7f2524a40fafbf0db13cb7fd0b6d76e2aebad91f3c1b957278e\"" Apr 30 03:47:48.959338 containerd[1500]: time="2025-04-30T03:47:48.958261943Z" level=info msg="StartContainer for \"e6c6b654f123f7f2524a40fafbf0db13cb7fd0b6d76e2aebad91f3c1b957278e\"" Apr 30 03:47:48.991436 systemd[1]: Started cri-containerd-e6c6b654f123f7f2524a40fafbf0db13cb7fd0b6d76e2aebad91f3c1b957278e.scope - libcontainer container e6c6b654f123f7f2524a40fafbf0db13cb7fd0b6d76e2aebad91f3c1b957278e. Apr 30 03:47:49.020957 containerd[1500]: time="2025-04-30T03:47:49.020919500Z" level=info msg="StartContainer for \"e6c6b654f123f7f2524a40fafbf0db13cb7fd0b6d76e2aebad91f3c1b957278e\" returns successfully" Apr 30 03:47:49.304403 containerd[1500]: time="2025-04-30T03:47:49.304260116Z" level=info msg="StopPodSandbox for \"713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a\"" Apr 30 03:47:49.304701 containerd[1500]: time="2025-04-30T03:47:49.304382937Z" level=info msg="StopPodSandbox for \"1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102\"" Apr 30 03:47:49.446845 containerd[1500]: 2025-04-30 03:47:49.378 [INFO][4353] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" Apr 30 03:47:49.446845 containerd[1500]: 2025-04-30 03:47:49.378 [INFO][4353] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" iface="eth0" netns="/var/run/netns/cni-f6e93794-d8d9-46b0-9081-029baef0a28f" Apr 30 03:47:49.446845 containerd[1500]: 2025-04-30 03:47:49.378 [INFO][4353] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" iface="eth0" netns="/var/run/netns/cni-f6e93794-d8d9-46b0-9081-029baef0a28f" Apr 30 03:47:49.446845 containerd[1500]: 2025-04-30 03:47:49.379 [INFO][4353] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" iface="eth0" netns="/var/run/netns/cni-f6e93794-d8d9-46b0-9081-029baef0a28f" Apr 30 03:47:49.446845 containerd[1500]: 2025-04-30 03:47:49.379 [INFO][4353] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" Apr 30 03:47:49.446845 containerd[1500]: 2025-04-30 03:47:49.379 [INFO][4353] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" Apr 30 03:47:49.446845 containerd[1500]: 2025-04-30 03:47:49.420 [INFO][4363] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" HandleID="k8s-pod-network.1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" Workload="ci--4081--3--3--b--745f04f342-k8s-csi--node--driver--kshpv-eth0" Apr 30 03:47:49.446845 containerd[1500]: 2025-04-30 03:47:49.420 [INFO][4363] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:47:49.446845 containerd[1500]: 2025-04-30 03:47:49.420 [INFO][4363] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:47:49.446845 containerd[1500]: 2025-04-30 03:47:49.435 [WARNING][4363] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" HandleID="k8s-pod-network.1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" Workload="ci--4081--3--3--b--745f04f342-k8s-csi--node--driver--kshpv-eth0" Apr 30 03:47:49.446845 containerd[1500]: 2025-04-30 03:47:49.435 [INFO][4363] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" HandleID="k8s-pod-network.1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" Workload="ci--4081--3--3--b--745f04f342-k8s-csi--node--driver--kshpv-eth0" Apr 30 03:47:49.446845 containerd[1500]: 2025-04-30 03:47:49.440 [INFO][4363] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:47:49.446845 containerd[1500]: 2025-04-30 03:47:49.444 [INFO][4353] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" Apr 30 03:47:49.449504 containerd[1500]: time="2025-04-30T03:47:49.448088169Z" level=info msg="TearDown network for sandbox \"1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102\" successfully" Apr 30 03:47:49.449504 containerd[1500]: time="2025-04-30T03:47:49.448113145Z" level=info msg="StopPodSandbox for \"1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102\" returns successfully" Apr 30 03:47:49.449937 containerd[1500]: time="2025-04-30T03:47:49.449906408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kshpv,Uid:91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb,Namespace:calico-system,Attempt:1,}" Apr 30 03:47:49.462028 systemd[1]: run-netns-cni\x2df6e93794\x2dd8d9\x2d46b0\x2d9081\x2d029baef0a28f.mount: Deactivated successfully. Apr 30 03:47:49.558501 containerd[1500]: 2025-04-30 03:47:49.448 [INFO][4347] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" Apr 30 03:47:49.558501 containerd[1500]: 2025-04-30 03:47:49.450 [INFO][4347] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" iface="eth0" netns="/var/run/netns/cni-156c45ed-db22-7875-4a59-11b1adf4ccc7" Apr 30 03:47:49.558501 containerd[1500]: 2025-04-30 03:47:49.450 [INFO][4347] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" iface="eth0" netns="/var/run/netns/cni-156c45ed-db22-7875-4a59-11b1adf4ccc7" Apr 30 03:47:49.558501 containerd[1500]: 2025-04-30 03:47:49.450 [INFO][4347] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" iface="eth0" netns="/var/run/netns/cni-156c45ed-db22-7875-4a59-11b1adf4ccc7" Apr 30 03:47:49.558501 containerd[1500]: 2025-04-30 03:47:49.451 [INFO][4347] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" Apr 30 03:47:49.558501 containerd[1500]: 2025-04-30 03:47:49.451 [INFO][4347] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" Apr 30 03:47:49.558501 containerd[1500]: 2025-04-30 03:47:49.526 [INFO][4371] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" HandleID="k8s-pod-network.713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--kube--controllers--584b945cdb--9ntmp-eth0" Apr 30 03:47:49.558501 containerd[1500]: 2025-04-30 03:47:49.527 [INFO][4371] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:47:49.558501 containerd[1500]: 2025-04-30 03:47:49.527 [INFO][4371] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:47:49.558501 containerd[1500]: 2025-04-30 03:47:49.538 [WARNING][4371] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" HandleID="k8s-pod-network.713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--kube--controllers--584b945cdb--9ntmp-eth0" Apr 30 03:47:49.558501 containerd[1500]: 2025-04-30 03:47:49.538 [INFO][4371] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" HandleID="k8s-pod-network.713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--kube--controllers--584b945cdb--9ntmp-eth0" Apr 30 03:47:49.558501 containerd[1500]: 2025-04-30 03:47:49.542 [INFO][4371] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:47:49.558501 containerd[1500]: 2025-04-30 03:47:49.549 [INFO][4347] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" Apr 30 03:47:49.560989 containerd[1500]: time="2025-04-30T03:47:49.558590775Z" level=info msg="TearDown network for sandbox \"713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a\" successfully" Apr 30 03:47:49.560989 containerd[1500]: time="2025-04-30T03:47:49.558613298Z" level=info msg="StopPodSandbox for \"713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a\" returns successfully" Apr 30 03:47:49.560989 containerd[1500]: time="2025-04-30T03:47:49.559623701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-584b945cdb-9ntmp,Uid:2d5e3e80-1cba-451a-b60d-21a267c83978,Namespace:calico-system,Attempt:1,}" Apr 30 03:47:49.563184 systemd[1]: run-netns-cni\x2d156c45ed\x2ddb22\x2d7875\x2d4a59\x2d11b1adf4ccc7.mount: Deactivated successfully. Apr 30 03:47:49.678624 kubelet[2723]: I0430 03:47:49.678567 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8bcmd" podStartSLOduration=32.678544428 podStartE2EDuration="32.678544428s" podCreationTimestamp="2025-04-30 03:47:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:47:49.649102041 +0000 UTC m=+37.483338648" watchObservedRunningTime="2025-04-30 03:47:49.678544428 +0000 UTC m=+37.512781036" Apr 30 03:47:49.840603 systemd-networkd[1397]: cali996f9453644: Link UP Apr 30 03:47:49.841752 systemd-networkd[1397]: cali996f9453644: Gained carrier Apr 30 03:47:49.873328 containerd[1500]: 2025-04-30 03:47:49.576 [INFO][4379] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--b--745f04f342-k8s-csi--node--driver--kshpv-eth0 csi-node-driver- calico-system 91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb 781 0 2025-04-30 03:47:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-3-b-745f04f342 csi-node-driver-kshpv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali996f9453644 [] []}} ContainerID="48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910" Namespace="calico-system" Pod="csi-node-driver-kshpv" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-csi--node--driver--kshpv-" Apr 30 03:47:49.873328 containerd[1500]: 2025-04-30 03:47:49.576 [INFO][4379] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910" Namespace="calico-system" Pod="csi-node-driver-kshpv" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-csi--node--driver--kshpv-eth0" Apr 30 03:47:49.873328 containerd[1500]: 2025-04-30 03:47:49.668 [INFO][4400] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910" HandleID="k8s-pod-network.48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910" Workload="ci--4081--3--3--b--745f04f342-k8s-csi--node--driver--kshpv-eth0" Apr 30 03:47:49.873328 containerd[1500]: 2025-04-30 03:47:49.787 [INFO][4400] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910" 
HandleID="k8s-pod-network.48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910" Workload="ci--4081--3--3--b--745f04f342-k8s-csi--node--driver--kshpv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003357e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-b-745f04f342", "pod":"csi-node-driver-kshpv", "timestamp":"2025-04-30 03:47:49.666286979 +0000 UTC"}, Hostname:"ci-4081-3-3-b-745f04f342", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:47:49.873328 containerd[1500]: 2025-04-30 03:47:49.787 [INFO][4400] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:47:49.873328 containerd[1500]: 2025-04-30 03:47:49.787 [INFO][4400] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:47:49.873328 containerd[1500]: 2025-04-30 03:47:49.787 [INFO][4400] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-b-745f04f342' Apr 30 03:47:49.873328 containerd[1500]: 2025-04-30 03:47:49.791 [INFO][4400] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:49.873328 containerd[1500]: 2025-04-30 03:47:49.796 [INFO][4400] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:49.873328 containerd[1500]: 2025-04-30 03:47:49.804 [INFO][4400] ipam/ipam.go 489: Trying affinity for 192.168.94.192/26 host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:49.873328 containerd[1500]: 2025-04-30 03:47:49.807 [INFO][4400] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.192/26 host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:49.873328 containerd[1500]: 2025-04-30 03:47:49.810 [INFO][4400] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.192/26 host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:49.873328 containerd[1500]: 2025-04-30 03:47:49.810 [INFO][4400] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.192/26 handle="k8s-pod-network.48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:49.873328 containerd[1500]: 2025-04-30 03:47:49.813 [INFO][4400] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910 Apr 30 03:47:49.873328 containerd[1500]: 2025-04-30 03:47:49.819 [INFO][4400] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.192/26 handle="k8s-pod-network.48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:49.873328 containerd[1500]: 2025-04-30 03:47:49.831 [INFO][4400] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.196/26] block=192.168.94.192/26 handle="k8s-pod-network.48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:49.873328 containerd[1500]: 2025-04-30 03:47:49.832 [INFO][4400] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.196/26] handle="k8s-pod-network.48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:49.873328 containerd[1500]: 2025-04-30 03:47:49.832 [INFO][4400] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:47:49.873328 containerd[1500]: 2025-04-30 03:47:49.832 [INFO][4400] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.196/26] IPv6=[] ContainerID="48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910" HandleID="k8s-pod-network.48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910" Workload="ci--4081--3--3--b--745f04f342-k8s-csi--node--driver--kshpv-eth0" Apr 30 03:47:49.875358 containerd[1500]: 2025-04-30 03:47:49.837 [INFO][4379] cni-plugin/k8s.go 386: Populated endpoint ContainerID="48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910" Namespace="calico-system" Pod="csi-node-driver-kshpv" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-csi--node--driver--kshpv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--b--745f04f342-k8s-csi--node--driver--kshpv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-b-745f04f342", ContainerID:"", Pod:"csi-node-driver-kshpv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali996f9453644", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:47:49.875358 containerd[1500]: 2025-04-30 03:47:49.837 [INFO][4379] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.196/32] ContainerID="48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910" Namespace="calico-system" Pod="csi-node-driver-kshpv" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-csi--node--driver--kshpv-eth0" Apr 30 03:47:49.875358 containerd[1500]: 2025-04-30 03:47:49.837 [INFO][4379] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali996f9453644 ContainerID="48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910" Namespace="calico-system" Pod="csi-node-driver-kshpv" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-csi--node--driver--kshpv-eth0" Apr 30 03:47:49.875358 containerd[1500]: 2025-04-30 03:47:49.841 [INFO][4379] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910" Namespace="calico-system" Pod="csi-node-driver-kshpv" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-csi--node--driver--kshpv-eth0" Apr 30 03:47:49.875358 containerd[1500]: 2025-04-30 03:47:49.843 [INFO][4379] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910" Namespace="calico-system" Pod="csi-node-driver-kshpv" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-csi--node--driver--kshpv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--b--745f04f342-k8s-csi--node--driver--kshpv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-b-745f04f342", ContainerID:"48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910", Pod:"csi-node-driver-kshpv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali996f9453644", MAC:"fe:b6:d1:78:40:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:47:49.875358 containerd[1500]: 2025-04-30 03:47:49.864 [INFO][4379] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910" Namespace="calico-system" Pod="csi-node-driver-kshpv" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-csi--node--driver--kshpv-eth0" Apr 30 03:47:49.906014 systemd-networkd[1397]: cali557b782bde5: Gained IPv6LL Apr 30 03:47:49.925269 containerd[1500]: time="2025-04-30T03:47:49.924946853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:47:49.925269 containerd[1500]: time="2025-04-30T03:47:49.925008720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:47:49.926266 containerd[1500]: time="2025-04-30T03:47:49.925019810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:49.926516 containerd[1500]: time="2025-04-30T03:47:49.926383347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:49.963852 systemd[1]: Started cri-containerd-48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910.scope - libcontainer container 48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910. 
Apr 30 03:47:49.978127 systemd-networkd[1397]: caliad2ca3cf9b8: Link UP Apr 30 03:47:49.980600 systemd-networkd[1397]: caliad2ca3cf9b8: Gained carrier Apr 30 03:47:50.011733 containerd[1500]: 2025-04-30 03:47:49.671 [INFO][4391] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--b--745f04f342-k8s-calico--kube--controllers--584b945cdb--9ntmp-eth0 calico-kube-controllers-584b945cdb- calico-system 2d5e3e80-1cba-451a-b60d-21a267c83978 782 0 2025-04-30 03:47:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:584b945cdb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-3-b-745f04f342 calico-kube-controllers-584b945cdb-9ntmp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliad2ca3cf9b8 [] []}} ContainerID="63c9c2b03f8fecbcf310866aae5fba253ee18b549d13c271564ce1ee05c2906d" Namespace="calico-system" Pod="calico-kube-controllers-584b945cdb-9ntmp" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-calico--kube--controllers--584b945cdb--9ntmp-" Apr 30 03:47:50.011733 containerd[1500]: 2025-04-30 03:47:49.672 [INFO][4391] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="63c9c2b03f8fecbcf310866aae5fba253ee18b549d13c271564ce1ee05c2906d" Namespace="calico-system" Pod="calico-kube-controllers-584b945cdb-9ntmp" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-calico--kube--controllers--584b945cdb--9ntmp-eth0" Apr 30 03:47:50.011733 containerd[1500]: 2025-04-30 03:47:49.721 [INFO][4413] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63c9c2b03f8fecbcf310866aae5fba253ee18b549d13c271564ce1ee05c2906d" HandleID="k8s-pod-network.63c9c2b03f8fecbcf310866aae5fba253ee18b549d13c271564ce1ee05c2906d" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--kube--controllers--584b945cdb--9ntmp-eth0" Apr 30 03:47:50.011733 containerd[1500]: 2025-04-30 03:47:49.790 [INFO][4413] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="63c9c2b03f8fecbcf310866aae5fba253ee18b549d13c271564ce1ee05c2906d" HandleID="k8s-pod-network.63c9c2b03f8fecbcf310866aae5fba253ee18b549d13c271564ce1ee05c2906d" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--kube--controllers--584b945cdb--9ntmp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002edc70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-b-745f04f342", "pod":"calico-kube-controllers-584b945cdb-9ntmp", "timestamp":"2025-04-30 03:47:49.721621959 +0000 UTC"}, Hostname:"ci-4081-3-3-b-745f04f342", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:47:50.011733 containerd[1500]: 2025-04-30 03:47:49.790 [INFO][4413] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:47:50.011733 containerd[1500]: 2025-04-30 03:47:49.832 [INFO][4413] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:47:50.011733 containerd[1500]: 2025-04-30 03:47:49.832 [INFO][4413] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-b-745f04f342' Apr 30 03:47:50.011733 containerd[1500]: 2025-04-30 03:47:49.893 [INFO][4413] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.63c9c2b03f8fecbcf310866aae5fba253ee18b549d13c271564ce1ee05c2906d" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:50.011733 containerd[1500]: 2025-04-30 03:47:49.904 [INFO][4413] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:50.011733 containerd[1500]: 2025-04-30 03:47:49.925 [INFO][4413] ipam/ipam.go 489: Trying affinity for 192.168.94.192/26 host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:50.011733 containerd[1500]: 2025-04-30 03:47:49.928 [INFO][4413] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.192/26 host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:50.011733 containerd[1500]: 2025-04-30 03:47:49.933 [INFO][4413] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.192/26 host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:50.011733 containerd[1500]: 2025-04-30 03:47:49.934 [INFO][4413] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.192/26 handle="k8s-pod-network.63c9c2b03f8fecbcf310866aae5fba253ee18b549d13c271564ce1ee05c2906d" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:50.011733 containerd[1500]: 2025-04-30 03:47:49.937 [INFO][4413] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.63c9c2b03f8fecbcf310866aae5fba253ee18b549d13c271564ce1ee05c2906d Apr 30 03:47:50.011733 containerd[1500]: 2025-04-30 03:47:49.946 [INFO][4413] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.192/26 handle="k8s-pod-network.63c9c2b03f8fecbcf310866aae5fba253ee18b549d13c271564ce1ee05c2906d" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:50.011733 containerd[1500]: 2025-04-30 03:47:49.957 [INFO][4413] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.197/26] block=192.168.94.192/26 handle="k8s-pod-network.63c9c2b03f8fecbcf310866aae5fba253ee18b549d13c271564ce1ee05c2906d" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:50.011733 containerd[1500]: 2025-04-30 03:47:49.957 [INFO][4413] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.197/26] handle="k8s-pod-network.63c9c2b03f8fecbcf310866aae5fba253ee18b549d13c271564ce1ee05c2906d" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:50.011733 containerd[1500]: 2025-04-30 03:47:49.957 [INFO][4413] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:47:50.011733 containerd[1500]: 2025-04-30 03:47:49.957 [INFO][4413] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.197/26] IPv6=[] ContainerID="63c9c2b03f8fecbcf310866aae5fba253ee18b549d13c271564ce1ee05c2906d" HandleID="k8s-pod-network.63c9c2b03f8fecbcf310866aae5fba253ee18b549d13c271564ce1ee05c2906d" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--kube--controllers--584b945cdb--9ntmp-eth0" Apr 30 03:47:50.012590 containerd[1500]: 2025-04-30 03:47:49.967 [INFO][4391] cni-plugin/k8s.go 386: Populated endpoint ContainerID="63c9c2b03f8fecbcf310866aae5fba253ee18b549d13c271564ce1ee05c2906d" Namespace="calico-system" Pod="calico-kube-controllers-584b945cdb-9ntmp" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-calico--kube--controllers--584b945cdb--9ntmp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--b--745f04f342-k8s-calico--kube--controllers--584b945cdb--9ntmp-eth0", GenerateName:"calico-kube-controllers-584b945cdb-", Namespace:"calico-system", SelfLink:"", UID:"2d5e3e80-1cba-451a-b60d-21a267c83978", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"584b945cdb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-b-745f04f342", ContainerID:"", Pod:"calico-kube-controllers-584b945cdb-9ntmp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliad2ca3cf9b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:47:50.012590 containerd[1500]: 2025-04-30 03:47:49.967 [INFO][4391] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.197/32] ContainerID="63c9c2b03f8fecbcf310866aae5fba253ee18b549d13c271564ce1ee05c2906d" Namespace="calico-system" Pod="calico-kube-controllers-584b945cdb-9ntmp" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-calico--kube--controllers--584b945cdb--9ntmp-eth0" Apr 30 03:47:50.012590 containerd[1500]: 2025-04-30 03:47:49.967 [INFO][4391] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad2ca3cf9b8 ContainerID="63c9c2b03f8fecbcf310866aae5fba253ee18b549d13c271564ce1ee05c2906d" Namespace="calico-system" Pod="calico-kube-controllers-584b945cdb-9ntmp" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-calico--kube--controllers--584b945cdb--9ntmp-eth0" Apr 30 03:47:50.012590 containerd[1500]: 2025-04-30 03:47:49.981 [INFO][4391] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63c9c2b03f8fecbcf310866aae5fba253ee18b549d13c271564ce1ee05c2906d" Namespace="calico-system" Pod="calico-kube-controllers-584b945cdb-9ntmp" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-calico--kube--controllers--584b945cdb--9ntmp-eth0" Apr 30 
03:47:50.012590 containerd[1500]: 2025-04-30 03:47:49.988 [INFO][4391] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="63c9c2b03f8fecbcf310866aae5fba253ee18b549d13c271564ce1ee05c2906d" Namespace="calico-system" Pod="calico-kube-controllers-584b945cdb-9ntmp" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-calico--kube--controllers--584b945cdb--9ntmp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--b--745f04f342-k8s-calico--kube--controllers--584b945cdb--9ntmp-eth0", GenerateName:"calico-kube-controllers-584b945cdb-", Namespace:"calico-system", SelfLink:"", UID:"2d5e3e80-1cba-451a-b60d-21a267c83978", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"584b945cdb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-b-745f04f342", ContainerID:"63c9c2b03f8fecbcf310866aae5fba253ee18b549d13c271564ce1ee05c2906d", Pod:"calico-kube-controllers-584b945cdb-9ntmp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliad2ca3cf9b8", MAC:"0e:79:62:50:d1:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:47:50.012590 containerd[1500]: 2025-04-30 03:47:50.005 [INFO][4391] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="63c9c2b03f8fecbcf310866aae5fba253ee18b549d13c271564ce1ee05c2906d" Namespace="calico-system" Pod="calico-kube-controllers-584b945cdb-9ntmp" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-calico--kube--controllers--584b945cdb--9ntmp-eth0" Apr 30 03:47:50.049810 containerd[1500]: time="2025-04-30T03:47:50.048803976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kshpv,Uid:91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb,Namespace:calico-system,Attempt:1,} returns sandbox id \"48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910\"" Apr 30 03:47:50.075162 containerd[1500]: time="2025-04-30T03:47:50.074925277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:47:50.075162 containerd[1500]: time="2025-04-30T03:47:50.074983266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:47:50.075162 containerd[1500]: time="2025-04-30T03:47:50.074993525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:50.075162 containerd[1500]: time="2025-04-30T03:47:50.075063456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:50.096831 systemd[1]: Started cri-containerd-63c9c2b03f8fecbcf310866aae5fba253ee18b549d13c271564ce1ee05c2906d.scope - libcontainer container 63c9c2b03f8fecbcf310866aae5fba253ee18b549d13c271564ce1ee05c2906d. Apr 30 03:47:50.140393 containerd[1500]: time="2025-04-30T03:47:50.140262830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-584b945cdb-9ntmp,Uid:2d5e3e80-1cba-451a-b60d-21a267c83978,Namespace:calico-system,Attempt:1,} returns sandbox id \"63c9c2b03f8fecbcf310866aae5fba253ee18b549d13c271564ce1ee05c2906d\"" Apr 30 03:47:50.305263 containerd[1500]: time="2025-04-30T03:47:50.304630440Z" level=info msg="StopPodSandbox for \"b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62\"" Apr 30 03:47:50.420144 containerd[1500]: 2025-04-30 03:47:50.367 [INFO][4553] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" Apr 30 03:47:50.420144 containerd[1500]: 2025-04-30 03:47:50.368 [INFO][4553] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" iface="eth0" netns="/var/run/netns/cni-fdfcf729-01c7-0b56-ad25-4841bbf2bf77" Apr 30 03:47:50.420144 containerd[1500]: 2025-04-30 03:47:50.370 [INFO][4553] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" iface="eth0" netns="/var/run/netns/cni-fdfcf729-01c7-0b56-ad25-4841bbf2bf77" Apr 30 03:47:50.420144 containerd[1500]: 2025-04-30 03:47:50.370 [INFO][4553] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" iface="eth0" netns="/var/run/netns/cni-fdfcf729-01c7-0b56-ad25-4841bbf2bf77" Apr 30 03:47:50.420144 containerd[1500]: 2025-04-30 03:47:50.370 [INFO][4553] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" Apr 30 03:47:50.420144 containerd[1500]: 2025-04-30 03:47:50.370 [INFO][4553] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" Apr 30 03:47:50.420144 containerd[1500]: 2025-04-30 03:47:50.408 [INFO][4560] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" HandleID="k8s-pod-network.b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" Workload="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--6c5cb-eth0" Apr 30 03:47:50.420144 containerd[1500]: 2025-04-30 03:47:50.408 [INFO][4560] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:47:50.420144 containerd[1500]: 2025-04-30 03:47:50.408 [INFO][4560] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:47:50.420144 containerd[1500]: 2025-04-30 03:47:50.415 [WARNING][4560] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" HandleID="k8s-pod-network.b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" Workload="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--6c5cb-eth0" Apr 30 03:47:50.420144 containerd[1500]: 2025-04-30 03:47:50.415 [INFO][4560] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" HandleID="k8s-pod-network.b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" Workload="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--6c5cb-eth0" Apr 30 03:47:50.420144 containerd[1500]: 2025-04-30 03:47:50.416 [INFO][4560] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:47:50.420144 containerd[1500]: 2025-04-30 03:47:50.418 [INFO][4553] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" Apr 30 03:47:50.421113 containerd[1500]: time="2025-04-30T03:47:50.420318648Z" level=info msg="TearDown network for sandbox \"b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62\" successfully" Apr 30 03:47:50.421113 containerd[1500]: time="2025-04-30T03:47:50.420342623Z" level=info msg="StopPodSandbox for \"b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62\" returns successfully" Apr 30 03:47:50.421443 containerd[1500]: time="2025-04-30T03:47:50.421249153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6c5cb,Uid:2c604747-1402-4ef2-98f9-261b059a82b6,Namespace:kube-system,Attempt:1,}" Apr 30 03:47:50.464841 systemd[1]: run-netns-cni\x2dfdfcf729\x2d01c7\x2d0b56\x2dad25\x2d4841bbf2bf77.mount: Deactivated successfully. Apr 30 03:47:50.482034 systemd-networkd[1397]: cali1e2af2486eb: Gained IPv6LL Apr 30 03:47:50.570588 systemd-networkd[1397]: cali7199a5d04be: Link UP Apr 30 03:47:50.571855 systemd-networkd[1397]: cali7199a5d04be: Gained carrier Apr 30 03:47:50.590621 containerd[1500]: 2025-04-30 03:47:50.480 [INFO][4567] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--6c5cb-eth0 coredns-668d6bf9bc- kube-system 2c604747-1402-4ef2-98f9-261b059a82b6 803 0 2025-04-30 03:47:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-b-745f04f342 coredns-668d6bf9bc-6c5cb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7199a5d04be [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b702b1ad15785fc30470faa81223dc22cdae06d001e29030d2381882beb54873" Namespace="kube-system" Pod="coredns-668d6bf9bc-6c5cb" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--6c5cb-" Apr 30 03:47:50.590621 containerd[1500]: 2025-04-30 03:47:50.480 [INFO][4567] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b702b1ad15785fc30470faa81223dc22cdae06d001e29030d2381882beb54873" Namespace="kube-system" Pod="coredns-668d6bf9bc-6c5cb" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--6c5cb-eth0" Apr 30 03:47:50.590621 containerd[1500]: 2025-04-30 03:47:50.518 [INFO][4579] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b702b1ad15785fc30470faa81223dc22cdae06d001e29030d2381882beb54873" 
HandleID="k8s-pod-network.b702b1ad15785fc30470faa81223dc22cdae06d001e29030d2381882beb54873" Workload="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--6c5cb-eth0" Apr 30 03:47:50.590621 containerd[1500]: 2025-04-30 03:47:50.531 [INFO][4579] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b702b1ad15785fc30470faa81223dc22cdae06d001e29030d2381882beb54873" HandleID="k8s-pod-network.b702b1ad15785fc30470faa81223dc22cdae06d001e29030d2381882beb54873" Workload="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--6c5cb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290800), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-b-745f04f342", "pod":"coredns-668d6bf9bc-6c5cb", "timestamp":"2025-04-30 03:47:50.518089785 +0000 UTC"}, Hostname:"ci-4081-3-3-b-745f04f342", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:47:50.590621 containerd[1500]: 2025-04-30 03:47:50.531 [INFO][4579] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:47:50.590621 containerd[1500]: 2025-04-30 03:47:50.532 [INFO][4579] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:47:50.590621 containerd[1500]: 2025-04-30 03:47:50.532 [INFO][4579] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-b-745f04f342' Apr 30 03:47:50.590621 containerd[1500]: 2025-04-30 03:47:50.535 [INFO][4579] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b702b1ad15785fc30470faa81223dc22cdae06d001e29030d2381882beb54873" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:50.590621 containerd[1500]: 2025-04-30 03:47:50.540 [INFO][4579] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:50.590621 containerd[1500]: 2025-04-30 03:47:50.545 [INFO][4579] ipam/ipam.go 489: Trying affinity for 192.168.94.192/26 host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:50.590621 containerd[1500]: 2025-04-30 03:47:50.547 [INFO][4579] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.192/26 host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:50.590621 containerd[1500]: 2025-04-30 03:47:50.549 [INFO][4579] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.192/26 host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:50.590621 containerd[1500]: 2025-04-30 03:47:50.549 [INFO][4579] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.192/26 handle="k8s-pod-network.b702b1ad15785fc30470faa81223dc22cdae06d001e29030d2381882beb54873" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:50.590621 containerd[1500]: 2025-04-30 03:47:50.550 [INFO][4579] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b702b1ad15785fc30470faa81223dc22cdae06d001e29030d2381882beb54873 Apr 30 03:47:50.590621 containerd[1500]: 2025-04-30 03:47:50.554 [INFO][4579] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.192/26 handle="k8s-pod-network.b702b1ad15785fc30470faa81223dc22cdae06d001e29030d2381882beb54873" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:50.590621 containerd[1500]: 2025-04-30 03:47:50.562 [INFO][4579] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.198/26] block=192.168.94.192/26 handle="k8s-pod-network.b702b1ad15785fc30470faa81223dc22cdae06d001e29030d2381882beb54873" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:50.590621 containerd[1500]: 2025-04-30 
03:47:50.562 [INFO][4579] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.198/26] handle="k8s-pod-network.b702b1ad15785fc30470faa81223dc22cdae06d001e29030d2381882beb54873" host="ci-4081-3-3-b-745f04f342" Apr 30 03:47:50.590621 containerd[1500]: 2025-04-30 03:47:50.563 [INFO][4579] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:47:50.590621 containerd[1500]: 2025-04-30 03:47:50.563 [INFO][4579] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.198/26] IPv6=[] ContainerID="b702b1ad15785fc30470faa81223dc22cdae06d001e29030d2381882beb54873" HandleID="k8s-pod-network.b702b1ad15785fc30470faa81223dc22cdae06d001e29030d2381882beb54873" Workload="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--6c5cb-eth0" Apr 30 03:47:50.591350 containerd[1500]: 2025-04-30 03:47:50.565 [INFO][4567] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b702b1ad15785fc30470faa81223dc22cdae06d001e29030d2381882beb54873" Namespace="kube-system" Pod="coredns-668d6bf9bc-6c5cb" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--6c5cb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--6c5cb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2c604747-1402-4ef2-98f9-261b059a82b6", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-b-745f04f342", ContainerID:"", Pod:"coredns-668d6bf9bc-6c5cb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7199a5d04be", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:47:50.591350 containerd[1500]: 2025-04-30 03:47:50.565 [INFO][4567] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.198/32] ContainerID="b702b1ad15785fc30470faa81223dc22cdae06d001e29030d2381882beb54873" Namespace="kube-system" Pod="coredns-668d6bf9bc-6c5cb" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--6c5cb-eth0" Apr 30 03:47:50.591350 containerd[1500]: 2025-04-30 03:47:50.565 [INFO][4567] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7199a5d04be ContainerID="b702b1ad15785fc30470faa81223dc22cdae06d001e29030d2381882beb54873" Namespace="kube-system" Pod="coredns-668d6bf9bc-6c5cb" 
WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--6c5cb-eth0" Apr 30 03:47:50.591350 containerd[1500]: 2025-04-30 03:47:50.567 [INFO][4567] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b702b1ad15785fc30470faa81223dc22cdae06d001e29030d2381882beb54873" Namespace="kube-system" Pod="coredns-668d6bf9bc-6c5cb" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--6c5cb-eth0" Apr 30 03:47:50.591350 containerd[1500]: 2025-04-30 03:47:50.567 [INFO][4567] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b702b1ad15785fc30470faa81223dc22cdae06d001e29030d2381882beb54873" Namespace="kube-system" Pod="coredns-668d6bf9bc-6c5cb" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--6c5cb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--6c5cb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2c604747-1402-4ef2-98f9-261b059a82b6", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-b-745f04f342", ContainerID:"b702b1ad15785fc30470faa81223dc22cdae06d001e29030d2381882beb54873", Pod:"coredns-668d6bf9bc-6c5cb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7199a5d04be", MAC:"36:86:af:53:a7:a9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:47:50.591350 containerd[1500]: 2025-04-30 03:47:50.583 [INFO][4567] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b702b1ad15785fc30470faa81223dc22cdae06d001e29030d2381882beb54873" Namespace="kube-system" Pod="coredns-668d6bf9bc-6c5cb" WorkloadEndpoint="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--6c5cb-eth0" Apr 30 03:47:50.619843 containerd[1500]: time="2025-04-30T03:47:50.619439613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:47:50.619843 containerd[1500]: time="2025-04-30T03:47:50.619488073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:47:50.619843 containerd[1500]: time="2025-04-30T03:47:50.619501348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:50.619843 containerd[1500]: time="2025-04-30T03:47:50.619593121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:50.651197 systemd[1]: Started cri-containerd-b702b1ad15785fc30470faa81223dc22cdae06d001e29030d2381882beb54873.scope - libcontainer container b702b1ad15785fc30470faa81223dc22cdae06d001e29030d2381882beb54873. Apr 30 03:47:50.695968 containerd[1500]: time="2025-04-30T03:47:50.695809377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6c5cb,Uid:2c604747-1402-4ef2-98f9-261b059a82b6,Namespace:kube-system,Attempt:1,} returns sandbox id \"b702b1ad15785fc30470faa81223dc22cdae06d001e29030d2381882beb54873\"" Apr 30 03:47:50.704644 containerd[1500]: time="2025-04-30T03:47:50.704508337Z" level=info msg="CreateContainer within sandbox \"b702b1ad15785fc30470faa81223dc22cdae06d001e29030d2381882beb54873\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:47:50.712835 containerd[1500]: time="2025-04-30T03:47:50.712795613Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:50.714008 containerd[1500]: time="2025-04-30T03:47:50.713972901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" Apr 30 03:47:50.719130 containerd[1500]: time="2025-04-30T03:47:50.717925500Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:50.720544 containerd[1500]: time="2025-04-30T03:47:50.720523682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:50.721556 containerd[1500]: time="2025-04-30T03:47:50.721537612Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 3.784598441s" Apr 30 03:47:50.721632 containerd[1500]: time="2025-04-30T03:47:50.721621048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:47:50.724733 containerd[1500]: time="2025-04-30T03:47:50.724709018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:47:50.726789 containerd[1500]: time="2025-04-30T03:47:50.725383462Z" level=info msg="CreateContainer within sandbox \"6ee04da061ca898722513b82c26ac46511067f53b667cf8eecf8508de39a1c31\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:47:50.741390 containerd[1500]: time="2025-04-30T03:47:50.741351269Z" level=info msg="CreateContainer within sandbox \"b702b1ad15785fc30470faa81223dc22cdae06d001e29030d2381882beb54873\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8ecf49918b1c5a3c7a77333002843d5ed2c66eef3f2867a3b93832e0279e8694\"" Apr 30 03:47:50.742203 containerd[1500]: time="2025-04-30T03:47:50.742098719Z" level=info msg="StartContainer for \"8ecf49918b1c5a3c7a77333002843d5ed2c66eef3f2867a3b93832e0279e8694\"" Apr 30 03:47:50.750943 containerd[1500]: time="2025-04-30T03:47:50.750904097Z" level=info msg="CreateContainer within sandbox \"6ee04da061ca898722513b82c26ac46511067f53b667cf8eecf8508de39a1c31\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9d0543e74bb58d2d0506a01fd8935f55049333be0f6691163e247a80fe771717\"" Apr 30 03:47:50.752905 containerd[1500]: time="2025-04-30T03:47:50.752874181Z" level=info msg="StartContainer for \"9d0543e74bb58d2d0506a01fd8935f55049333be0f6691163e247a80fe771717\"" Apr 30 03:47:50.777443 systemd[1]: Started cri-containerd-8ecf49918b1c5a3c7a77333002843d5ed2c66eef3f2867a3b93832e0279e8694.scope - libcontainer container 8ecf49918b1c5a3c7a77333002843d5ed2c66eef3f2867a3b93832e0279e8694. Apr 30 03:47:50.788162 systemd[1]: Started cri-containerd-9d0543e74bb58d2d0506a01fd8935f55049333be0f6691163e247a80fe771717.scope - libcontainer container 9d0543e74bb58d2d0506a01fd8935f55049333be0f6691163e247a80fe771717. Apr 30 03:47:50.822049 containerd[1500]: time="2025-04-30T03:47:50.821948097Z" level=info msg="StartContainer for \"8ecf49918b1c5a3c7a77333002843d5ed2c66eef3f2867a3b93832e0279e8694\" returns successfully" Apr 30 03:47:50.855382 containerd[1500]: time="2025-04-30T03:47:50.855333768Z" level=info msg="StartContainer for \"9d0543e74bb58d2d0506a01fd8935f55049333be0f6691163e247a80fe771717\" returns successfully" Apr 30 03:47:51.121952 systemd-networkd[1397]: cali996f9453644: Gained IPv6LL Apr 30 03:47:51.187714 containerd[1500]: time="2025-04-30T03:47:51.186541556Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:51.188497 containerd[1500]: time="2025-04-30T03:47:51.188436738Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" Apr 30 03:47:51.190207 containerd[1500]: time="2025-04-30T03:47:51.190173273Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 465.436384ms" Apr 30 03:47:51.190207 containerd[1500]: time="2025-04-30T03:47:51.190201637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:47:51.191416 containerd[1500]: time="2025-04-30T03:47:51.191390235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 03:47:51.194231 containerd[1500]: time="2025-04-30T03:47:51.193650974Z" level=info msg="CreateContainer within sandbox \"99736c38dc4f1c3d7ce7df23638d8b8eee563aa3d187b8882a3718c052ef35dd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:47:51.222055 containerd[1500]: time="2025-04-30T03:47:51.222000673Z" level=info msg="CreateContainer within sandbox \"99736c38dc4f1c3d7ce7df23638d8b8eee563aa3d187b8882a3718c052ef35dd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} 
returns container id \"6ad9e29c44dbba3f8ab207339f962f6c4c0b3127051c4fb45f5d26aba1f20fcd\"" Apr 30 03:47:51.224548 containerd[1500]: time="2025-04-30T03:47:51.224513052Z" level=info msg="StartContainer for \"6ad9e29c44dbba3f8ab207339f962f6c4c0b3127051c4fb45f5d26aba1f20fcd\"" Apr 30 03:47:51.258815 systemd[1]: Started cri-containerd-6ad9e29c44dbba3f8ab207339f962f6c4c0b3127051c4fb45f5d26aba1f20fcd.scope - libcontainer container 6ad9e29c44dbba3f8ab207339f962f6c4c0b3127051c4fb45f5d26aba1f20fcd. Apr 30 03:47:51.299734 containerd[1500]: time="2025-04-30T03:47:51.299660938Z" level=info msg="StartContainer for \"6ad9e29c44dbba3f8ab207339f962f6c4c0b3127051c4fb45f5d26aba1f20fcd\" returns successfully" Apr 30 03:47:51.463770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3843648493.mount: Deactivated successfully. Apr 30 03:47:51.569813 systemd-networkd[1397]: caliad2ca3cf9b8: Gained IPv6LL Apr 30 03:47:51.634241 kubelet[2723]: I0430 03:47:51.634125 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55d565cbf-zxwkz" podStartSLOduration=24.26298975 podStartE2EDuration="26.634100945s" podCreationTimestamp="2025-04-30 03:47:25 +0000 UTC" firstStartedPulling="2025-04-30 03:47:48.81997019 +0000 UTC m=+36.654206798" lastFinishedPulling="2025-04-30 03:47:51.191081387 +0000 UTC m=+39.025317993" observedRunningTime="2025-04-30 03:47:51.617527744 +0000 UTC m=+39.451764351" watchObservedRunningTime="2025-04-30 03:47:51.634100945 +0000 UTC m=+39.468337562" Apr 30 03:47:51.634665 kubelet[2723]: I0430 03:47:51.634458 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6c5cb" podStartSLOduration=34.634449108 podStartE2EDuration="34.634449108s" podCreationTimestamp="2025-04-30 03:47:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:47:51.632943756 +0000 UTC m=+39.467180373" watchObservedRunningTime="2025-04-30 03:47:51.634449108 +0000 UTC m=+39.468685715" Apr 30 03:47:52.019114 systemd-networkd[1397]: cali7199a5d04be: Gained IPv6LL Apr 30 03:47:52.382652 kubelet[2723]: I0430 03:47:52.382458 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:47:52.464568 systemd[1]: run-containerd-runc-k8s.io-29197d1caa13c2b7d41914bd106f280be7f6e5706959bc8c00792a65025e6067-runc.mRfNwD.mount: Deactivated successfully. 
Apr 30 03:47:52.586476 kubelet[2723]: I0430 03:47:52.586396 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55d565cbf-48sql" podStartSLOduration=23.799352926 podStartE2EDuration="27.586371414s" podCreationTimestamp="2025-04-30 03:47:25 +0000 UTC" firstStartedPulling="2025-04-30 03:47:46.93604733 +0000 UTC m=+34.770283937" lastFinishedPulling="2025-04-30 03:47:50.723065817 +0000 UTC m=+38.557302425" observedRunningTime="2025-04-30 03:47:51.650979679 +0000 UTC m=+39.485216286" watchObservedRunningTime="2025-04-30 03:47:52.586371414 +0000 UTC m=+40.420608021" Apr 30 03:47:52.621711 kubelet[2723]: I0430 03:47:52.621221 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:47:52.621711 kubelet[2723]: I0430 03:47:52.621618 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:47:53.171264 containerd[1500]: time="2025-04-30T03:47:53.171207370Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:53.172427 containerd[1500]: time="2025-04-30T03:47:53.172282295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" Apr 30 03:47:53.175123 containerd[1500]: time="2025-04-30T03:47:53.173917150Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:53.176782 containerd[1500]: time="2025-04-30T03:47:53.176539817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:53.177252 containerd[1500]: time="2025-04-30T03:47:53.177219262Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.985801975s" Apr 30 03:47:53.177335 containerd[1500]: time="2025-04-30T03:47:53.177256521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" Apr 30 03:47:53.178686 containerd[1500]: time="2025-04-30T03:47:53.178613785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" Apr 30 03:47:53.181485 containerd[1500]: time="2025-04-30T03:47:53.181450494Z" level=info msg="CreateContainer within sandbox \"48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 03:47:53.202802 containerd[1500]: time="2025-04-30T03:47:53.202730639Z" level=info msg="CreateContainer within sandbox \"48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f25bd16da7b2722a57b56e43443276c5461fce1260b62f6332dbed1e70583154\"" Apr 30 03:47:53.203562 containerd[1500]: time="2025-04-30T03:47:53.203515230Z" level=info msg="StartContainer for \"f25bd16da7b2722a57b56e43443276c5461fce1260b62f6332dbed1e70583154\"" Apr 30 03:47:53.242156 systemd[1]: Started 
cri-containerd-f25bd16da7b2722a57b56e43443276c5461fce1260b62f6332dbed1e70583154.scope - libcontainer container f25bd16da7b2722a57b56e43443276c5461fce1260b62f6332dbed1e70583154. Apr 30 03:47:53.269018 containerd[1500]: time="2025-04-30T03:47:53.268895314Z" level=info msg="StartContainer for \"f25bd16da7b2722a57b56e43443276c5461fce1260b62f6332dbed1e70583154\" returns successfully" Apr 30 03:47:55.248192 containerd[1500]: time="2025-04-30T03:47:55.248135698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:55.249425 containerd[1500]: time="2025-04-30T03:47:55.249325548Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" Apr 30 03:47:55.250840 containerd[1500]: time="2025-04-30T03:47:55.250644291Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:55.252982 containerd[1500]: time="2025-04-30T03:47:55.252963339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:55.253415 containerd[1500]: time="2025-04-30T03:47:55.253383918Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 2.074297687s" Apr 30 03:47:55.253453 containerd[1500]: time="2025-04-30T03:47:55.253420366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" Apr 30 03:47:55.262787 containerd[1500]: time="2025-04-30T03:47:55.262753774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 03:47:55.277150 containerd[1500]: time="2025-04-30T03:47:55.277040308Z" level=info msg="CreateContainer within sandbox \"63c9c2b03f8fecbcf310866aae5fba253ee18b549d13c271564ce1ee05c2906d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 03:47:55.303382 containerd[1500]: time="2025-04-30T03:47:55.302730352Z" level=info msg="CreateContainer within sandbox \"63c9c2b03f8fecbcf310866aae5fba253ee18b549d13c271564ce1ee05c2906d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"21e947a5830871cf61c867b67a8f2ce229e95de0a174a68fa709c4b6bb14f048\"" Apr 30 03:47:55.303493 containerd[1500]: time="2025-04-30T03:47:55.303457404Z" level=info msg="StartContainer for \"21e947a5830871cf61c867b67a8f2ce229e95de0a174a68fa709c4b6bb14f048\"" Apr 30 03:47:55.347838 systemd[1]: Started cri-containerd-21e947a5830871cf61c867b67a8f2ce229e95de0a174a68fa709c4b6bb14f048.scope - libcontainer container 21e947a5830871cf61c867b67a8f2ce229e95de0a174a68fa709c4b6bb14f048. 
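Pull timings like the "in 2.074297687s" figure above are easiest to compare when extracted from the journal mechanically. A hypothetical one-off helper (not a containerd tool) that scans lines like these for image references and pull durations might look like:

```go
// Hypothetical helper: extract image pull durations from journal lines of the
// form: Pulled image \"<ref>\" ... in <duration>  (quotes are escaped in the raw log).
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var pulled = regexp.MustCompile(`Pulled image \\"([^"\\]+)\\".* in ([0-9.]+m?s)`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines here are very long
	for sc.Scan() {
		if m := pulled.FindStringSubmatch(sc.Text()); m != nil {
			d, err := time.ParseDuration(m[2])
			if err != nil {
				continue
			}
			fmt.Printf("%-55s %12s\n", m[1], d)
		}
	}
}
```

Run over this journal it would pick out the apiserver pulls (3.78s, then 465ms once the layers are cached), csi (1.99s), kube-controllers (2.07s), and node-driver-registrar (2.99s).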
Apr 30 03:47:55.428785 containerd[1500]: time="2025-04-30T03:47:55.428713072Z" level=info msg="StartContainer for \"21e947a5830871cf61c867b67a8f2ce229e95de0a174a68fa709c4b6bb14f048\" returns successfully" Apr 30 03:47:56.664395 kubelet[2723]: I0430 03:47:56.664253 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:47:58.241508 containerd[1500]: time="2025-04-30T03:47:58.241436378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:58.242855 containerd[1500]: time="2025-04-30T03:47:58.242774356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" Apr 30 03:47:58.244583 containerd[1500]: time="2025-04-30T03:47:58.244536310Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:58.256186 containerd[1500]: time="2025-04-30T03:47:58.256121279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:58.257496 containerd[1500]: time="2025-04-30T03:47:58.257083434Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.994289224s" Apr 30 03:47:58.257496 containerd[1500]: time="2025-04-30T03:47:58.257142925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" Apr 30 03:47:58.261249 containerd[1500]: time="2025-04-30T03:47:58.261196414Z" level=info msg="CreateContainer within sandbox \"48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 03:47:58.279001 containerd[1500]: time="2025-04-30T03:47:58.278919922Z" level=info msg="CreateContainer within sandbox \"48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ef2ffb072002838283e9c4c11ade79b7f81876bea036446f85ef6b01e8c1d2ab\"" Apr 30 03:47:58.279838 containerd[1500]: time="2025-04-30T03:47:58.279810312Z" level=info msg="StartContainer for \"ef2ffb072002838283e9c4c11ade79b7f81876bea036446f85ef6b01e8c1d2ab\"" Apr 30 03:47:58.325833 systemd[1]: Started cri-containerd-ef2ffb072002838283e9c4c11ade79b7f81876bea036446f85ef6b01e8c1d2ab.scope - libcontainer container ef2ffb072002838283e9c4c11ade79b7f81876bea036446f85ef6b01e8c1d2ab. 
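Every pod in this log goes through the same CRI call sequence against containerd's socket: RunPodSandbox (which triggers the Calico CNI/IPAM work), then CreateContainer within the returned sandbox, then StartContainer. A schematic Go client using the real k8s.io/cri-api types shows the shape of those calls; the configs are placeholders, not values from this log:

```go
// Schematic CRI client mirroring the kubelet->containerd calls in this log.
// Sandbox/container configs are left empty here; a real caller fills them in
// (and pulls the image first via the image service, cf. the PullImage lines).
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{ /* pod metadata, DNS, ports ... */ }

	// 1. "RunPodSandbox for &PodSandboxMetadata{...} returns sandbox id ..."
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}

	// 2. "CreateContainer within sandbox ... for container &ContainerMetadata{...}"
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        &runtimeapi.ContainerConfig{ /* image, command ... */ },
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}

	// 3. "StartContainer for ... returns successfully"
	if _, err := rt.StartContainer(ctx,
		&runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		panic(err)
	}
}
```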
Apr 30 03:47:58.365778 containerd[1500]: time="2025-04-30T03:47:58.365722450Z" level=info msg="StartContainer for \"ef2ffb072002838283e9c4c11ade79b7f81876bea036446f85ef6b01e8c1d2ab\" returns successfully" Apr 30 03:47:58.604210 kubelet[2723]: I0430 03:47:58.604011 2723 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 03:47:58.608042 kubelet[2723]: I0430 03:47:58.607975 2723 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 03:47:58.713286 kubelet[2723]: I0430 03:47:58.713042 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-584b945cdb-9ntmp" podStartSLOduration=28.593493335 podStartE2EDuration="33.713017078s" podCreationTimestamp="2025-04-30 03:47:25 +0000 UTC" firstStartedPulling="2025-04-30 03:47:50.143028755 +0000 UTC m=+37.977265361" lastFinishedPulling="2025-04-30 03:47:55.262552496 +0000 UTC m=+43.096789104" observedRunningTime="2025-04-30 03:47:55.677925784 +0000 UTC m=+43.512162411" watchObservedRunningTime="2025-04-30 03:47:58.713017078 +0000 UTC m=+46.547253704" Apr 30 03:48:01.124746 kubelet[2723]: I0430 03:48:01.123615 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:48:01.145912 systemd[1]: run-containerd-runc-k8s.io-21e947a5830871cf61c867b67a8f2ce229e95de0a174a68fa709c4b6bb14f048-runc.gUhHZF.mount: Deactivated successfully. Apr 30 03:48:01.210833 kubelet[2723]: I0430 03:48:01.210572 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-kshpv" podStartSLOduration=28.014114539 podStartE2EDuration="36.21054964s" podCreationTimestamp="2025-04-30 03:47:25 +0000 UTC" firstStartedPulling="2025-04-30 03:47:50.061737373 +0000 UTC m=+37.895973980" lastFinishedPulling="2025-04-30 03:47:58.258172475 +0000 UTC m=+46.092409081" observedRunningTime="2025-04-30 03:47:58.714844885 +0000 UTC m=+46.549081512" watchObservedRunningTime="2025-04-30 03:48:01.21054964 +0000 UTC m=+49.044786247" Apr 30 03:48:08.449090 kubelet[2723]: I0430 03:48:08.448411 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:48:12.305987 containerd[1500]: time="2025-04-30T03:48:12.305928730Z" level=info msg="StopPodSandbox for \"0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915\"" Apr 30 03:48:12.533707 containerd[1500]: 2025-04-30 03:48:12.486 [WARNING][5008] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--8bcmd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5e9de684-fac2-4edf-bed1-82122c48751b", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-b-745f04f342", ContainerID:"c4159dc0f7d432178be29f70d91307dadb4a0637af89834cc0d112c0f2e84e7c", Pod:"coredns-668d6bf9bc-8bcmd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali557b782bde5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:48:12.533707 containerd[1500]: 2025-04-30 03:48:12.489 [INFO][5008] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" Apr 30 03:48:12.533707 containerd[1500]: 2025-04-30 03:48:12.489 [INFO][5008] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" iface="eth0" netns="" Apr 30 03:48:12.533707 containerd[1500]: 2025-04-30 03:48:12.489 [INFO][5008] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" Apr 30 03:48:12.533707 containerd[1500]: 2025-04-30 03:48:12.489 [INFO][5008] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" Apr 30 03:48:12.533707 containerd[1500]: 2025-04-30 03:48:12.517 [INFO][5015] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" HandleID="k8s-pod-network.0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" Workload="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--8bcmd-eth0" Apr 30 03:48:12.533707 containerd[1500]: 2025-04-30 03:48:12.517 [INFO][5015] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:12.533707 containerd[1500]: 2025-04-30 03:48:12.517 [INFO][5015] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:48:12.533707 containerd[1500]: 2025-04-30 03:48:12.527 [WARNING][5015] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" HandleID="k8s-pod-network.0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" Workload="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--8bcmd-eth0" Apr 30 03:48:12.533707 containerd[1500]: 2025-04-30 03:48:12.527 [INFO][5015] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" HandleID="k8s-pod-network.0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" Workload="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--8bcmd-eth0" Apr 30 03:48:12.533707 containerd[1500]: 2025-04-30 03:48:12.529 [INFO][5015] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:12.533707 containerd[1500]: 2025-04-30 03:48:12.531 [INFO][5008] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" Apr 30 03:48:12.536904 containerd[1500]: time="2025-04-30T03:48:12.533804572Z" level=info msg="TearDown network for sandbox \"0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915\" successfully" Apr 30 03:48:12.536904 containerd[1500]: time="2025-04-30T03:48:12.533841292Z" level=info msg="StopPodSandbox for \"0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915\" returns successfully" Apr 30 03:48:12.556960 containerd[1500]: time="2025-04-30T03:48:12.556796479Z" level=info msg="RemovePodSandbox for \"0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915\"" Apr 30 03:48:12.584856 containerd[1500]: time="2025-04-30T03:48:12.584775375Z" level=info msg="Forcibly stopping sandbox \"0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915\"" Apr 30 03:48:12.700896 containerd[1500]: 2025-04-30 03:48:12.638 [WARNING][5033] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--8bcmd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5e9de684-fac2-4edf-bed1-82122c48751b", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-b-745f04f342", ContainerID:"c4159dc0f7d432178be29f70d91307dadb4a0637af89834cc0d112c0f2e84e7c", Pod:"coredns-668d6bf9bc-8bcmd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali557b782bde5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:48:12.700896 containerd[1500]: 2025-04-30 03:48:12.638 [INFO][5033] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" Apr 30 03:48:12.700896 containerd[1500]: 2025-04-30 03:48:12.638 [INFO][5033] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" iface="eth0" netns="" Apr 30 03:48:12.700896 containerd[1500]: 2025-04-30 03:48:12.639 [INFO][5033] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" Apr 30 03:48:12.700896 containerd[1500]: 2025-04-30 03:48:12.639 [INFO][5033] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" Apr 30 03:48:12.700896 containerd[1500]: 2025-04-30 03:48:12.674 [INFO][5040] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" HandleID="k8s-pod-network.0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" Workload="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--8bcmd-eth0" Apr 30 03:48:12.700896 containerd[1500]: 2025-04-30 03:48:12.674 [INFO][5040] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:12.700896 containerd[1500]: 2025-04-30 03:48:12.674 [INFO][5040] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:48:12.700896 containerd[1500]: 2025-04-30 03:48:12.688 [WARNING][5040] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" HandleID="k8s-pod-network.0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" Workload="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--8bcmd-eth0" Apr 30 03:48:12.700896 containerd[1500]: 2025-04-30 03:48:12.688 [INFO][5040] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" HandleID="k8s-pod-network.0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" Workload="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--8bcmd-eth0" Apr 30 03:48:12.700896 containerd[1500]: 2025-04-30 03:48:12.691 [INFO][5040] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:12.700896 containerd[1500]: 2025-04-30 03:48:12.696 [INFO][5033] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915" Apr 30 03:48:12.702184 containerd[1500]: time="2025-04-30T03:48:12.700921882Z" level=info msg="TearDown network for sandbox \"0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915\" successfully" Apr 30 03:48:12.740981 containerd[1500]: time="2025-04-30T03:48:12.740711191Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:48:12.753028 containerd[1500]: time="2025-04-30T03:48:12.752977728Z" level=info msg="RemovePodSandbox \"0105ea9d4dde2c941afd9ee04fabdca7da688b505f4ee53a9986a25cf94cb915\" returns successfully" Apr 30 03:48:12.766501 containerd[1500]: time="2025-04-30T03:48:12.766426834Z" level=info msg="StopPodSandbox for \"3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f\"" Apr 30 03:48:12.867244 containerd[1500]: 2025-04-30 03:48:12.819 [WARNING][5059] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--zxwkz-eth0", GenerateName:"calico-apiserver-55d565cbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"4c3f5153-aeae-460f-bcd5-59ddd7a16065", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d565cbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-b-745f04f342", ContainerID:"99736c38dc4f1c3d7ce7df23638d8b8eee563aa3d187b8882a3718c052ef35dd", Pod:"calico-apiserver-55d565cbf-zxwkz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e2af2486eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:48:12.867244 containerd[1500]: 2025-04-30 03:48:12.819 [INFO][5059] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" Apr 30 03:48:12.867244 containerd[1500]: 2025-04-30 03:48:12.819 [INFO][5059] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" iface="eth0" netns="" Apr 30 03:48:12.867244 containerd[1500]: 2025-04-30 03:48:12.819 [INFO][5059] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" Apr 30 03:48:12.867244 containerd[1500]: 2025-04-30 03:48:12.819 [INFO][5059] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" Apr 30 03:48:12.867244 containerd[1500]: 2025-04-30 03:48:12.851 [INFO][5067] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" HandleID="k8s-pod-network.3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--zxwkz-eth0" Apr 30 03:48:12.867244 containerd[1500]: 2025-04-30 03:48:12.851 [INFO][5067] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:12.867244 containerd[1500]: 2025-04-30 03:48:12.852 [INFO][5067] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:48:12.867244 containerd[1500]: 2025-04-30 03:48:12.860 [WARNING][5067] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" HandleID="k8s-pod-network.3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--zxwkz-eth0" Apr 30 03:48:12.867244 containerd[1500]: 2025-04-30 03:48:12.861 [INFO][5067] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" HandleID="k8s-pod-network.3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--zxwkz-eth0" Apr 30 03:48:12.867244 containerd[1500]: 2025-04-30 03:48:12.862 [INFO][5067] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:12.867244 containerd[1500]: 2025-04-30 03:48:12.864 [INFO][5059] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" Apr 30 03:48:12.867244 containerd[1500]: time="2025-04-30T03:48:12.867183878Z" level=info msg="TearDown network for sandbox \"3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f\" successfully" Apr 30 03:48:12.867244 containerd[1500]: time="2025-04-30T03:48:12.867209556Z" level=info msg="StopPodSandbox for \"3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f\" returns successfully" Apr 30 03:48:12.884376 containerd[1500]: time="2025-04-30T03:48:12.884324494Z" level=info msg="RemovePodSandbox for \"3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f\"" Apr 30 03:48:12.884468 containerd[1500]: time="2025-04-30T03:48:12.884380669Z" level=info msg="Forcibly stopping sandbox \"3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f\"" Apr 30 03:48:13.012431 containerd[1500]: 2025-04-30 03:48:12.961 [WARNING][5085] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--zxwkz-eth0", GenerateName:"calico-apiserver-55d565cbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"4c3f5153-aeae-460f-bcd5-59ddd7a16065", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d565cbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-b-745f04f342", ContainerID:"99736c38dc4f1c3d7ce7df23638d8b8eee563aa3d187b8882a3718c052ef35dd", Pod:"calico-apiserver-55d565cbf-zxwkz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e2af2486eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:48:13.012431 containerd[1500]: 2025-04-30 03:48:12.961 [INFO][5085] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" Apr 30 03:48:13.012431 containerd[1500]: 2025-04-30 03:48:12.961 [INFO][5085] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" iface="eth0" netns="" Apr 30 03:48:13.012431 containerd[1500]: 2025-04-30 03:48:12.961 [INFO][5085] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" Apr 30 03:48:13.012431 containerd[1500]: 2025-04-30 03:48:12.961 [INFO][5085] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" Apr 30 03:48:13.012431 containerd[1500]: 2025-04-30 03:48:12.997 [INFO][5093] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" HandleID="k8s-pod-network.3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--zxwkz-eth0" Apr 30 03:48:13.012431 containerd[1500]: 2025-04-30 03:48:12.997 [INFO][5093] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:13.012431 containerd[1500]: 2025-04-30 03:48:12.997 [INFO][5093] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:48:13.012431 containerd[1500]: 2025-04-30 03:48:13.006 [WARNING][5093] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" HandleID="k8s-pod-network.3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--zxwkz-eth0" Apr 30 03:48:13.012431 containerd[1500]: 2025-04-30 03:48:13.006 [INFO][5093] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" HandleID="k8s-pod-network.3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--zxwkz-eth0" Apr 30 03:48:13.012431 containerd[1500]: 2025-04-30 03:48:13.008 [INFO][5093] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:13.012431 containerd[1500]: 2025-04-30 03:48:13.010 [INFO][5085] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f" Apr 30 03:48:13.012431 containerd[1500]: time="2025-04-30T03:48:13.012375299Z" level=info msg="TearDown network for sandbox \"3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f\" successfully" Apr 30 03:48:13.031836 containerd[1500]: time="2025-04-30T03:48:13.031680514Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:48:13.031836 containerd[1500]: time="2025-04-30T03:48:13.031774520Z" level=info msg="RemovePodSandbox \"3a54cd17a2561987fe4aa02d67a214868903b2397b2763afb0496cca8430bf2f\" returns successfully" Apr 30 03:48:13.032443 containerd[1500]: time="2025-04-30T03:48:13.032418548Z" level=info msg="StopPodSandbox for \"b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62\"" Apr 30 03:48:13.107820 containerd[1500]: 2025-04-30 03:48:13.068 [WARNING][5111] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--6c5cb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2c604747-1402-4ef2-98f9-261b059a82b6", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-b-745f04f342", ContainerID:"b702b1ad15785fc30470faa81223dc22cdae06d001e29030d2381882beb54873", Pod:"coredns-668d6bf9bc-6c5cb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7199a5d04be", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:48:13.107820 containerd[1500]: 2025-04-30 03:48:13.069 [INFO][5111] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" Apr 30 03:48:13.107820 containerd[1500]: 2025-04-30 03:48:13.069 [INFO][5111] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" iface="eth0" netns="" Apr 30 03:48:13.107820 containerd[1500]: 2025-04-30 03:48:13.069 [INFO][5111] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" Apr 30 03:48:13.107820 containerd[1500]: 2025-04-30 03:48:13.069 [INFO][5111] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" Apr 30 03:48:13.107820 containerd[1500]: 2025-04-30 03:48:13.096 [INFO][5118] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" HandleID="k8s-pod-network.b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" Workload="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--6c5cb-eth0" Apr 30 03:48:13.107820 containerd[1500]: 2025-04-30 03:48:13.096 [INFO][5118] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:13.107820 containerd[1500]: 2025-04-30 03:48:13.096 [INFO][5118] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:48:13.107820 containerd[1500]: 2025-04-30 03:48:13.102 [WARNING][5118] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" HandleID="k8s-pod-network.b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" Workload="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--6c5cb-eth0" Apr 30 03:48:13.107820 containerd[1500]: 2025-04-30 03:48:13.102 [INFO][5118] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" HandleID="k8s-pod-network.b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" Workload="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--6c5cb-eth0" Apr 30 03:48:13.107820 containerd[1500]: 2025-04-30 03:48:13.104 [INFO][5118] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:13.107820 containerd[1500]: 2025-04-30 03:48:13.105 [INFO][5111] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" Apr 30 03:48:13.109525 containerd[1500]: time="2025-04-30T03:48:13.107861012Z" level=info msg="TearDown network for sandbox \"b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62\" successfully" Apr 30 03:48:13.109525 containerd[1500]: time="2025-04-30T03:48:13.107892320Z" level=info msg="StopPodSandbox for \"b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62\" returns successfully" Apr 30 03:48:13.109525 containerd[1500]: time="2025-04-30T03:48:13.108501932Z" level=info msg="RemovePodSandbox for \"b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62\"" Apr 30 03:48:13.109525 containerd[1500]: time="2025-04-30T03:48:13.108532039Z" level=info msg="Forcibly stopping sandbox \"b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62\"" Apr 30 03:48:13.203620 containerd[1500]: 2025-04-30 03:48:13.157 [WARNING][5137] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--6c5cb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2c604747-1402-4ef2-98f9-261b059a82b6", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-b-745f04f342", ContainerID:"b702b1ad15785fc30470faa81223dc22cdae06d001e29030d2381882beb54873", Pod:"coredns-668d6bf9bc-6c5cb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7199a5d04be", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:48:13.203620 containerd[1500]: 2025-04-30 03:48:13.157 [INFO][5137] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" Apr 30 03:48:13.203620 containerd[1500]: 2025-04-30 03:48:13.157 [INFO][5137] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" iface="eth0" netns="" Apr 30 03:48:13.203620 containerd[1500]: 2025-04-30 03:48:13.157 [INFO][5137] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" Apr 30 03:48:13.203620 containerd[1500]: 2025-04-30 03:48:13.157 [INFO][5137] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" Apr 30 03:48:13.203620 containerd[1500]: 2025-04-30 03:48:13.187 [INFO][5144] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" HandleID="k8s-pod-network.b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" Workload="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--6c5cb-eth0" Apr 30 03:48:13.203620 containerd[1500]: 2025-04-30 03:48:13.187 [INFO][5144] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:13.203620 containerd[1500]: 2025-04-30 03:48:13.187 [INFO][5144] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:48:13.203620 containerd[1500]: 2025-04-30 03:48:13.197 [WARNING][5144] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" HandleID="k8s-pod-network.b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" Workload="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--6c5cb-eth0" Apr 30 03:48:13.203620 containerd[1500]: 2025-04-30 03:48:13.197 [INFO][5144] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" HandleID="k8s-pod-network.b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" Workload="ci--4081--3--3--b--745f04f342-k8s-coredns--668d6bf9bc--6c5cb-eth0" Apr 30 03:48:13.203620 containerd[1500]: 2025-04-30 03:48:13.199 [INFO][5144] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:13.203620 containerd[1500]: 2025-04-30 03:48:13.201 [INFO][5137] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62" Apr 30 03:48:13.205072 containerd[1500]: time="2025-04-30T03:48:13.203827403Z" level=info msg="TearDown network for sandbox \"b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62\" successfully" Apr 30 03:48:13.209102 containerd[1500]: time="2025-04-30T03:48:13.209045938Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:48:13.209102 containerd[1500]: time="2025-04-30T03:48:13.209095931Z" level=info msg="RemovePodSandbox \"b21bd1e8fa31c84dcaa9463c8b5fb88244145266ad3bb43a3b276b42fc2cbd62\" returns successfully" Apr 30 03:48:13.210204 containerd[1500]: time="2025-04-30T03:48:13.209881024Z" level=info msg="StopPodSandbox for \"2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90\"" Apr 30 03:48:13.320120 containerd[1500]: 2025-04-30 03:48:13.264 [WARNING][5163] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--48sql-eth0", GenerateName:"calico-apiserver-55d565cbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c67ed2a-f626-477b-8cd7-24b48436df7d", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d565cbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-b-745f04f342", ContainerID:"6ee04da061ca898722513b82c26ac46511067f53b667cf8eecf8508de39a1c31", Pod:"calico-apiserver-55d565cbf-48sql", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali63c405e3130", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:48:13.320120 containerd[1500]: 2025-04-30 03:48:13.264 [INFO][5163] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" Apr 30 03:48:13.320120 containerd[1500]: 2025-04-30 03:48:13.264 [INFO][5163] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" iface="eth0" netns="" Apr 30 03:48:13.320120 containerd[1500]: 2025-04-30 03:48:13.264 [INFO][5163] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" Apr 30 03:48:13.320120 containerd[1500]: 2025-04-30 03:48:13.264 [INFO][5163] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" Apr 30 03:48:13.320120 containerd[1500]: 2025-04-30 03:48:13.305 [INFO][5170] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" HandleID="k8s-pod-network.2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--48sql-eth0" Apr 30 03:48:13.320120 containerd[1500]: 2025-04-30 03:48:13.305 [INFO][5170] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:13.320120 containerd[1500]: 2025-04-30 03:48:13.305 [INFO][5170] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:48:13.320120 containerd[1500]: 2025-04-30 03:48:13.314 [WARNING][5170] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" HandleID="k8s-pod-network.2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--48sql-eth0" Apr 30 03:48:13.320120 containerd[1500]: 2025-04-30 03:48:13.314 [INFO][5170] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" HandleID="k8s-pod-network.2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--48sql-eth0" Apr 30 03:48:13.320120 containerd[1500]: 2025-04-30 03:48:13.316 [INFO][5170] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:13.320120 containerd[1500]: 2025-04-30 03:48:13.318 [INFO][5163] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" Apr 30 03:48:13.321066 containerd[1500]: time="2025-04-30T03:48:13.320222337Z" level=info msg="TearDown network for sandbox \"2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90\" successfully" Apr 30 03:48:13.321066 containerd[1500]: time="2025-04-30T03:48:13.320316333Z" level=info msg="StopPodSandbox for \"2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90\" returns successfully" Apr 30 03:48:13.321207 containerd[1500]: time="2025-04-30T03:48:13.321162429Z" level=info msg="RemovePodSandbox for \"2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90\"" Apr 30 03:48:13.321233 containerd[1500]: time="2025-04-30T03:48:13.321215880Z" level=info msg="Forcibly stopping sandbox \"2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90\"" Apr 30 03:48:13.435423 containerd[1500]: 2025-04-30 03:48:13.394 [WARNING][5190] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--48sql-eth0", GenerateName:"calico-apiserver-55d565cbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c67ed2a-f626-477b-8cd7-24b48436df7d", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d565cbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-b-745f04f342", ContainerID:"6ee04da061ca898722513b82c26ac46511067f53b667cf8eecf8508de39a1c31", Pod:"calico-apiserver-55d565cbf-48sql", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali63c405e3130", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:48:13.435423 containerd[1500]: 2025-04-30 03:48:13.394 [INFO][5190] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" Apr 30 03:48:13.435423 containerd[1500]: 2025-04-30 03:48:13.394 [INFO][5190] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" iface="eth0" netns="" Apr 30 03:48:13.435423 containerd[1500]: 2025-04-30 03:48:13.394 [INFO][5190] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" Apr 30 03:48:13.435423 containerd[1500]: 2025-04-30 03:48:13.394 [INFO][5190] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" Apr 30 03:48:13.435423 containerd[1500]: 2025-04-30 03:48:13.419 [INFO][5197] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" HandleID="k8s-pod-network.2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--48sql-eth0" Apr 30 03:48:13.435423 containerd[1500]: 2025-04-30 03:48:13.419 [INFO][5197] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:13.435423 containerd[1500]: 2025-04-30 03:48:13.419 [INFO][5197] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:48:13.435423 containerd[1500]: 2025-04-30 03:48:13.429 [WARNING][5197] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" HandleID="k8s-pod-network.2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--48sql-eth0" Apr 30 03:48:13.435423 containerd[1500]: 2025-04-30 03:48:13.429 [INFO][5197] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" HandleID="k8s-pod-network.2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--apiserver--55d565cbf--48sql-eth0" Apr 30 03:48:13.435423 containerd[1500]: 2025-04-30 03:48:13.431 [INFO][5197] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:13.435423 containerd[1500]: 2025-04-30 03:48:13.433 [INFO][5190] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90" Apr 30 03:48:13.435958 containerd[1500]: time="2025-04-30T03:48:13.435467835Z" level=info msg="TearDown network for sandbox \"2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90\" successfully" Apr 30 03:48:13.440294 containerd[1500]: time="2025-04-30T03:48:13.440216758Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:48:13.440471 containerd[1500]: time="2025-04-30T03:48:13.440327676Z" level=info msg="RemovePodSandbox \"2b3753c80cbfc8d088c32427111dc5b18cac693b0693b1057b8a1ac7c5912a90\" returns successfully" Apr 30 03:48:13.441169 containerd[1500]: time="2025-04-30T03:48:13.441145309Z" level=info msg="StopPodSandbox for \"713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a\"" Apr 30 03:48:13.519539 containerd[1500]: 2025-04-30 03:48:13.485 [WARNING][5215] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--b--745f04f342-k8s-calico--kube--controllers--584b945cdb--9ntmp-eth0", GenerateName:"calico-kube-controllers-584b945cdb-", Namespace:"calico-system", SelfLink:"", UID:"2d5e3e80-1cba-451a-b60d-21a267c83978", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"584b945cdb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-b-745f04f342", ContainerID:"63c9c2b03f8fecbcf310866aae5fba253ee18b549d13c271564ce1ee05c2906d", Pod:"calico-kube-controllers-584b945cdb-9ntmp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliad2ca3cf9b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:48:13.519539 containerd[1500]: 2025-04-30 03:48:13.485 [INFO][5215] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" Apr 30 03:48:13.519539 containerd[1500]: 2025-04-30 03:48:13.485 [INFO][5215] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" iface="eth0" netns="" Apr 30 03:48:13.519539 containerd[1500]: 2025-04-30 03:48:13.485 [INFO][5215] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" Apr 30 03:48:13.519539 containerd[1500]: 2025-04-30 03:48:13.485 [INFO][5215] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" Apr 30 03:48:13.519539 containerd[1500]: 2025-04-30 03:48:13.507 [INFO][5223] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" HandleID="k8s-pod-network.713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--kube--controllers--584b945cdb--9ntmp-eth0" Apr 30 03:48:13.519539 containerd[1500]: 2025-04-30 03:48:13.507 [INFO][5223] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:13.519539 containerd[1500]: 2025-04-30 03:48:13.507 [INFO][5223] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:48:13.519539 containerd[1500]: 2025-04-30 03:48:13.514 [WARNING][5223] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" HandleID="k8s-pod-network.713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--kube--controllers--584b945cdb--9ntmp-eth0" Apr 30 03:48:13.519539 containerd[1500]: 2025-04-30 03:48:13.514 [INFO][5223] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" HandleID="k8s-pod-network.713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--kube--controllers--584b945cdb--9ntmp-eth0" Apr 30 03:48:13.519539 containerd[1500]: 2025-04-30 03:48:13.515 [INFO][5223] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:13.519539 containerd[1500]: 2025-04-30 03:48:13.517 [INFO][5215] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" Apr 30 03:48:13.519539 containerd[1500]: time="2025-04-30T03:48:13.519256116Z" level=info msg="TearDown network for sandbox \"713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a\" successfully" Apr 30 03:48:13.519539 containerd[1500]: time="2025-04-30T03:48:13.519302643Z" level=info msg="StopPodSandbox for \"713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a\" returns successfully" Apr 30 03:48:13.522907 containerd[1500]: time="2025-04-30T03:48:13.519810625Z" level=info msg="RemovePodSandbox for \"713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a\"" Apr 30 03:48:13.522907 containerd[1500]: time="2025-04-30T03:48:13.519842395Z" level=info msg="Forcibly stopping sandbox \"713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a\"" Apr 30 03:48:13.604768 containerd[1500]: 2025-04-30 03:48:13.562 [WARNING][5241] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--b--745f04f342-k8s-calico--kube--controllers--584b945cdb--9ntmp-eth0", GenerateName:"calico-kube-controllers-584b945cdb-", Namespace:"calico-system", SelfLink:"", UID:"2d5e3e80-1cba-451a-b60d-21a267c83978", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"584b945cdb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-b-745f04f342", ContainerID:"63c9c2b03f8fecbcf310866aae5fba253ee18b549d13c271564ce1ee05c2906d", Pod:"calico-kube-controllers-584b945cdb-9ntmp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliad2ca3cf9b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:48:13.604768 containerd[1500]: 2025-04-30 03:48:13.562 [INFO][5241] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" Apr 30 03:48:13.604768 containerd[1500]: 2025-04-30 03:48:13.563 [INFO][5241] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" iface="eth0" netns="" Apr 30 03:48:13.604768 containerd[1500]: 2025-04-30 03:48:13.563 [INFO][5241] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" Apr 30 03:48:13.604768 containerd[1500]: 2025-04-30 03:48:13.563 [INFO][5241] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" Apr 30 03:48:13.604768 containerd[1500]: 2025-04-30 03:48:13.587 [INFO][5248] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" HandleID="k8s-pod-network.713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--kube--controllers--584b945cdb--9ntmp-eth0" Apr 30 03:48:13.604768 containerd[1500]: 2025-04-30 03:48:13.588 [INFO][5248] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:13.604768 containerd[1500]: 2025-04-30 03:48:13.588 [INFO][5248] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:48:13.604768 containerd[1500]: 2025-04-30 03:48:13.596 [WARNING][5248] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" HandleID="k8s-pod-network.713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--kube--controllers--584b945cdb--9ntmp-eth0" Apr 30 03:48:13.604768 containerd[1500]: 2025-04-30 03:48:13.596 [INFO][5248] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" HandleID="k8s-pod-network.713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" Workload="ci--4081--3--3--b--745f04f342-k8s-calico--kube--controllers--584b945cdb--9ntmp-eth0" Apr 30 03:48:13.604768 containerd[1500]: 2025-04-30 03:48:13.599 [INFO][5248] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:13.604768 containerd[1500]: 2025-04-30 03:48:13.601 [INFO][5241] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a" Apr 30 03:48:13.604768 containerd[1500]: time="2025-04-30T03:48:13.604316821Z" level=info msg="TearDown network for sandbox \"713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a\" successfully" Apr 30 03:48:13.614840 containerd[1500]: time="2025-04-30T03:48:13.614769478Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:48:13.614956 containerd[1500]: time="2025-04-30T03:48:13.614849619Z" level=info msg="RemovePodSandbox \"713eb4371500863a367122dac273266c139b903891781f048fc4c66181395e4a\" returns successfully" Apr 30 03:48:13.615543 containerd[1500]: time="2025-04-30T03:48:13.615520406Z" level=info msg="StopPodSandbox for \"1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102\"" Apr 30 03:48:13.695738 containerd[1500]: 2025-04-30 03:48:13.657 [WARNING][5267] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--b--745f04f342-k8s-csi--node--driver--kshpv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-b-745f04f342", ContainerID:"48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910", Pod:"csi-node-driver-kshpv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali996f9453644", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:48:13.695738 containerd[1500]: 2025-04-30 03:48:13.658 [INFO][5267] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" Apr 30 03:48:13.695738 containerd[1500]: 2025-04-30 03:48:13.658 [INFO][5267] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" iface="eth0" netns="" Apr 30 03:48:13.695738 containerd[1500]: 2025-04-30 03:48:13.658 [INFO][5267] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" Apr 30 03:48:13.695738 containerd[1500]: 2025-04-30 03:48:13.658 [INFO][5267] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" Apr 30 03:48:13.695738 containerd[1500]: 2025-04-30 03:48:13.681 [INFO][5274] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" HandleID="k8s-pod-network.1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" Workload="ci--4081--3--3--b--745f04f342-k8s-csi--node--driver--kshpv-eth0" Apr 30 03:48:13.695738 containerd[1500]: 2025-04-30 03:48:13.681 [INFO][5274] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:13.695738 containerd[1500]: 2025-04-30 03:48:13.681 [INFO][5274] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:48:13.695738 containerd[1500]: 2025-04-30 03:48:13.688 [WARNING][5274] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" HandleID="k8s-pod-network.1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" Workload="ci--4081--3--3--b--745f04f342-k8s-csi--node--driver--kshpv-eth0" Apr 30 03:48:13.695738 containerd[1500]: 2025-04-30 03:48:13.688 [INFO][5274] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" HandleID="k8s-pod-network.1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" Workload="ci--4081--3--3--b--745f04f342-k8s-csi--node--driver--kshpv-eth0" Apr 30 03:48:13.695738 containerd[1500]: 2025-04-30 03:48:13.691 [INFO][5274] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:13.695738 containerd[1500]: 2025-04-30 03:48:13.693 [INFO][5267] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" Apr 30 03:48:13.697734 containerd[1500]: time="2025-04-30T03:48:13.695955450Z" level=info msg="TearDown network for sandbox \"1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102\" successfully" Apr 30 03:48:13.697734 containerd[1500]: time="2025-04-30T03:48:13.695980528Z" level=info msg="StopPodSandbox for \"1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102\" returns successfully" Apr 30 03:48:13.697734 containerd[1500]: time="2025-04-30T03:48:13.697074188Z" level=info msg="RemovePodSandbox for \"1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102\"" Apr 30 03:48:13.697734 containerd[1500]: time="2025-04-30T03:48:13.697097121Z" level=info msg="Forcibly stopping sandbox \"1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102\"" Apr 30 03:48:13.773456 containerd[1500]: 2025-04-30 03:48:13.734 [WARNING][5293] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--b--745f04f342-k8s-csi--node--driver--kshpv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"91a9afba-5d9a-48f2-ad03-1bd0e9fa98bb", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-b-745f04f342", ContainerID:"48871dfad159989bb2e8bbc49d244cfb7fd771a094761f8ba22ed50b43edc910", Pod:"csi-node-driver-kshpv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali996f9453644", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:48:13.773456 containerd[1500]: 2025-04-30 03:48:13.734 [INFO][5293] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" Apr 30 03:48:13.773456 containerd[1500]: 2025-04-30 03:48:13.734 [INFO][5293] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" iface="eth0" netns="" Apr 30 03:48:13.773456 containerd[1500]: 2025-04-30 03:48:13.734 [INFO][5293] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" Apr 30 03:48:13.773456 containerd[1500]: 2025-04-30 03:48:13.734 [INFO][5293] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" Apr 30 03:48:13.773456 containerd[1500]: 2025-04-30 03:48:13.757 [INFO][5300] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" HandleID="k8s-pod-network.1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" Workload="ci--4081--3--3--b--745f04f342-k8s-csi--node--driver--kshpv-eth0" Apr 30 03:48:13.773456 containerd[1500]: 2025-04-30 03:48:13.758 [INFO][5300] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:13.773456 containerd[1500]: 2025-04-30 03:48:13.758 [INFO][5300] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:48:13.773456 containerd[1500]: 2025-04-30 03:48:13.765 [WARNING][5300] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" HandleID="k8s-pod-network.1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" Workload="ci--4081--3--3--b--745f04f342-k8s-csi--node--driver--kshpv-eth0" Apr 30 03:48:13.773456 containerd[1500]: 2025-04-30 03:48:13.765 [INFO][5300] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" HandleID="k8s-pod-network.1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" Workload="ci--4081--3--3--b--745f04f342-k8s-csi--node--driver--kshpv-eth0" Apr 30 03:48:13.773456 containerd[1500]: 2025-04-30 03:48:13.768 [INFO][5300] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:13.773456 containerd[1500]: 2025-04-30 03:48:13.770 [INFO][5293] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102" Apr 30 03:48:13.774143 containerd[1500]: time="2025-04-30T03:48:13.773434472Z" level=info msg="TearDown network for sandbox \"1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102\" successfully" Apr 30 03:48:13.779717 containerd[1500]: time="2025-04-30T03:48:13.779628175Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:48:13.779717 containerd[1500]: time="2025-04-30T03:48:13.779716971Z" level=info msg="RemovePodSandbox \"1aad0257c48678a1c62056994588c742a376818b0a6dc9d968c776199dd22102\" returns successfully" Apr 30 03:48:21.988110 kubelet[2723]: I0430 03:48:21.987563 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:48:31.244588 systemd[1]: run-containerd-runc-k8s.io-21e947a5830871cf61c867b67a8f2ce229e95de0a174a68fa709c4b6bb14f048-runc.GWMHQd.mount: Deactivated successfully. Apr 30 03:49:01.240185 systemd[1]: run-containerd-runc-k8s.io-21e947a5830871cf61c867b67a8f2ce229e95de0a174a68fa709c4b6bb14f048-runc.l9LvIa.mount: Deactivated successfully. Apr 30 03:49:15.564807 systemd[1]: Started sshd@8-157.180.64.98:22-117.50.226.213:39642.service - OpenSSH per-connection server daemon (117.50.226.213:39642). Apr 30 03:49:15.894871 sshd[5435]: Connection closed by 117.50.226.213 port 39642 Apr 30 03:49:15.896218 systemd[1]: sshd@8-157.180.64.98:22-117.50.226.213:39642.service: Deactivated successfully. Apr 30 03:49:31.230554 systemd[1]: run-containerd-runc-k8s.io-21e947a5830871cf61c867b67a8f2ce229e95de0a174a68fa709c4b6bb14f048-runc.DVaipG.mount: Deactivated successfully. Apr 30 03:51:52.196028 systemd[1]: Started sshd@9-157.180.64.98:22-139.178.68.195:44404.service - OpenSSH per-connection server daemon (139.178.68.195:44404). Apr 30 03:51:53.218789 sshd[5765]: Accepted publickey for core from 139.178.68.195 port 44404 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:51:53.224052 sshd[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:51:53.244593 systemd-logind[1485]: New session 8 of user core. Apr 30 03:51:53.255740 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 03:51:54.679202 sshd[5765]: pam_unix(sshd:session): session closed for user core Apr 30 03:51:54.690782 systemd[1]: sshd@9-157.180.64.98:22-139.178.68.195:44404.service: Deactivated successfully. 
Apr 30 03:51:54.693410 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 03:51:54.698011 systemd-logind[1485]: Session 8 logged out. Waiting for processes to exit. Apr 30 03:51:54.699143 systemd-logind[1485]: Removed session 8. Apr 30 03:51:59.857558 systemd[1]: Started sshd@10-157.180.64.98:22-139.178.68.195:48688.service - OpenSSH per-connection server daemon (139.178.68.195:48688). Apr 30 03:52:00.916564 sshd[5802]: Accepted publickey for core from 139.178.68.195 port 48688 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:52:00.919499 sshd[5802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:52:00.929476 systemd-logind[1485]: New session 9 of user core. Apr 30 03:52:00.934958 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 03:52:01.748908 sshd[5802]: pam_unix(sshd:session): session closed for user core Apr 30 03:52:01.756856 systemd[1]: sshd@10-157.180.64.98:22-139.178.68.195:48688.service: Deactivated successfully. Apr 30 03:52:01.760529 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 03:52:01.761721 systemd-logind[1485]: Session 9 logged out. Waiting for processes to exit. Apr 30 03:52:01.764246 systemd-logind[1485]: Removed session 9. Apr 30 03:52:01.925292 systemd[1]: Started sshd@11-157.180.64.98:22-139.178.68.195:48704.service - OpenSSH per-connection server daemon (139.178.68.195:48704). Apr 30 03:52:02.921583 sshd[5835]: Accepted publickey for core from 139.178.68.195 port 48704 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:52:02.924797 sshd[5835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:52:02.933947 systemd-logind[1485]: New session 10 of user core. Apr 30 03:52:02.940898 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 03:52:03.805610 sshd[5835]: pam_unix(sshd:session): session closed for user core Apr 30 03:52:03.815075 systemd[1]: sshd@11-157.180.64.98:22-139.178.68.195:48704.service: Deactivated successfully. Apr 30 03:52:03.818599 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 03:52:03.821790 systemd-logind[1485]: Session 10 logged out. Waiting for processes to exit. Apr 30 03:52:03.824001 systemd-logind[1485]: Removed session 10. Apr 30 03:52:03.974251 systemd[1]: Started sshd@12-157.180.64.98:22-139.178.68.195:48710.service - OpenSSH per-connection server daemon (139.178.68.195:48710). Apr 30 03:52:04.966180 sshd[5846]: Accepted publickey for core from 139.178.68.195 port 48710 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:52:04.969706 sshd[5846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:52:04.978521 systemd-logind[1485]: New session 11 of user core. Apr 30 03:52:04.984906 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 03:52:05.791611 sshd[5846]: pam_unix(sshd:session): session closed for user core Apr 30 03:52:05.806432 systemd[1]: sshd@12-157.180.64.98:22-139.178.68.195:48710.service: Deactivated successfully. Apr 30 03:52:05.813500 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 03:52:05.815418 systemd-logind[1485]: Session 11 logged out. Waiting for processes to exit. Apr 30 03:52:05.817378 systemd-logind[1485]: Removed session 11. Apr 30 03:52:10.964155 systemd[1]: Started sshd@13-157.180.64.98:22-139.178.68.195:39270.service - OpenSSH per-connection server daemon (139.178.68.195:39270). 
Apr 30 03:52:11.964099 sshd[5864]: Accepted publickey for core from 139.178.68.195 port 39270 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:52:11.967088 sshd[5864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:52:11.976129 systemd-logind[1485]: New session 12 of user core. Apr 30 03:52:11.983949 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 03:52:12.747736 sshd[5864]: pam_unix(sshd:session): session closed for user core Apr 30 03:52:12.757218 systemd[1]: sshd@13-157.180.64.98:22-139.178.68.195:39270.service: Deactivated successfully. Apr 30 03:52:12.762647 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 03:52:12.765386 systemd-logind[1485]: Session 12 logged out. Waiting for processes to exit. Apr 30 03:52:12.768831 systemd-logind[1485]: Removed session 12. Apr 30 03:52:17.917011 systemd[1]: Started sshd@14-157.180.64.98:22-139.178.68.195:60848.service - OpenSSH per-connection server daemon (139.178.68.195:60848). Apr 30 03:52:18.907880 sshd[5879]: Accepted publickey for core from 139.178.68.195 port 60848 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:52:18.909861 sshd[5879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:52:18.917184 systemd-logind[1485]: New session 13 of user core. Apr 30 03:52:18.923995 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 03:52:19.711985 sshd[5879]: pam_unix(sshd:session): session closed for user core Apr 30 03:52:19.719611 systemd[1]: sshd@14-157.180.64.98:22-139.178.68.195:60848.service: Deactivated successfully. Apr 30 03:52:19.721785 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 03:52:19.722752 systemd-logind[1485]: Session 13 logged out. Waiting for processes to exit. Apr 30 03:52:19.724376 systemd-logind[1485]: Removed session 13. Apr 30 03:52:24.887140 systemd[1]: Started sshd@15-157.180.64.98:22-139.178.68.195:60856.service - OpenSSH per-connection server daemon (139.178.68.195:60856). Apr 30 03:52:25.914111 sshd[5917]: Accepted publickey for core from 139.178.68.195 port 60856 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:52:25.917464 sshd[5917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:52:25.923753 systemd-logind[1485]: New session 14 of user core. Apr 30 03:52:25.929905 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 03:52:26.785392 sshd[5917]: pam_unix(sshd:session): session closed for user core Apr 30 03:52:26.789716 systemd[1]: sshd@15-157.180.64.98:22-139.178.68.195:60856.service: Deactivated successfully. Apr 30 03:52:26.792649 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 03:52:26.795420 systemd-logind[1485]: Session 14 logged out. Waiting for processes to exit. Apr 30 03:52:26.796760 systemd-logind[1485]: Removed session 14. Apr 30 03:52:26.958072 systemd[1]: Started sshd@16-157.180.64.98:22-139.178.68.195:45272.service - OpenSSH per-connection server daemon (139.178.68.195:45272). Apr 30 03:52:27.950321 sshd[5931]: Accepted publickey for core from 139.178.68.195 port 45272 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:52:27.952325 sshd[5931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:52:27.958910 systemd-logind[1485]: New session 15 of user core. Apr 30 03:52:27.962812 systemd[1]: Started session-15.scope - Session 15 of User core. 
Apr 30 03:52:28.971205 sshd[5931]: pam_unix(sshd:session): session closed for user core Apr 30 03:52:28.986882 systemd[1]: sshd@16-157.180.64.98:22-139.178.68.195:45272.service: Deactivated successfully. Apr 30 03:52:28.990520 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 03:52:28.992763 systemd-logind[1485]: Session 15 logged out. Waiting for processes to exit. Apr 30 03:52:28.994384 systemd-logind[1485]: Removed session 15. Apr 30 03:52:29.139992 systemd[1]: Started sshd@17-157.180.64.98:22-139.178.68.195:45288.service - OpenSSH per-connection server daemon (139.178.68.195:45288). Apr 30 03:52:30.133600 sshd[5948]: Accepted publickey for core from 139.178.68.195 port 45288 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:52:30.136063 sshd[5948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:52:30.144315 systemd-logind[1485]: New session 16 of user core. Apr 30 03:52:30.150923 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 03:52:31.960016 sshd[5948]: pam_unix(sshd:session): session closed for user core Apr 30 03:52:31.975401 systemd[1]: sshd@17-157.180.64.98:22-139.178.68.195:45288.service: Deactivated successfully. Apr 30 03:52:31.978119 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 03:52:31.979576 systemd-logind[1485]: Session 16 logged out. Waiting for processes to exit. Apr 30 03:52:31.981262 systemd-logind[1485]: Removed session 16. Apr 30 03:52:32.134961 systemd[1]: Started sshd@18-157.180.64.98:22-139.178.68.195:45292.service - OpenSSH per-connection server daemon (139.178.68.195:45292). Apr 30 03:52:33.135769 sshd[6002]: Accepted publickey for core from 139.178.68.195 port 45292 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:52:33.138098 sshd[6002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:52:33.146776 systemd-logind[1485]: New session 17 of user core. Apr 30 03:52:33.152981 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 03:52:34.211258 sshd[6002]: pam_unix(sshd:session): session closed for user core Apr 30 03:52:34.217822 systemd[1]: sshd@18-157.180.64.98:22-139.178.68.195:45292.service: Deactivated successfully. Apr 30 03:52:34.220616 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 03:52:34.222027 systemd-logind[1485]: Session 17 logged out. Waiting for processes to exit. Apr 30 03:52:34.224825 systemd-logind[1485]: Removed session 17. Apr 30 03:52:34.385224 systemd[1]: Started sshd@19-157.180.64.98:22-139.178.68.195:45296.service - OpenSSH per-connection server daemon (139.178.68.195:45296). Apr 30 03:52:35.362195 sshd[6013]: Accepted publickey for core from 139.178.68.195 port 45296 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:52:35.365457 sshd[6013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:52:35.373810 systemd-logind[1485]: New session 18 of user core. Apr 30 03:52:35.376839 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 03:52:36.170593 sshd[6013]: pam_unix(sshd:session): session closed for user core Apr 30 03:52:36.177623 systemd[1]: sshd@19-157.180.64.98:22-139.178.68.195:45296.service: Deactivated successfully. Apr 30 03:52:36.181724 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 03:52:36.182838 systemd-logind[1485]: Session 18 logged out. Waiting for processes to exit. Apr 30 03:52:36.184523 systemd-logind[1485]: Removed session 18. 
Apr 30 03:52:41.343072 systemd[1]: Started sshd@20-157.180.64.98:22-139.178.68.195:36514.service - OpenSSH per-connection server daemon (139.178.68.195:36514). Apr 30 03:52:42.311864 sshd[6047]: Accepted publickey for core from 139.178.68.195 port 36514 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:52:42.313843 sshd[6047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:52:42.319531 systemd-logind[1485]: New session 19 of user core. Apr 30 03:52:42.328953 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 03:52:43.094659 sshd[6047]: pam_unix(sshd:session): session closed for user core Apr 30 03:52:43.101665 systemd[1]: sshd@20-157.180.64.98:22-139.178.68.195:36514.service: Deactivated successfully. Apr 30 03:52:43.106343 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 03:52:43.108646 systemd-logind[1485]: Session 19 logged out. Waiting for processes to exit. Apr 30 03:52:43.111533 systemd-logind[1485]: Removed session 19. Apr 30 03:52:48.269212 systemd[1]: Started sshd@21-157.180.64.98:22-139.178.68.195:35572.service - OpenSSH per-connection server daemon (139.178.68.195:35572). Apr 30 03:52:49.275875 sshd[6060]: Accepted publickey for core from 139.178.68.195 port 35572 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:52:49.278263 sshd[6060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:52:49.286466 systemd-logind[1485]: New session 20 of user core. Apr 30 03:52:49.291916 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 30 03:52:50.042942 sshd[6060]: pam_unix(sshd:session): session closed for user core Apr 30 03:52:50.045984 systemd[1]: sshd@21-157.180.64.98:22-139.178.68.195:35572.service: Deactivated successfully. Apr 30 03:52:50.048541 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 03:52:50.050824 systemd-logind[1485]: Session 20 logged out. Waiting for processes to exit. Apr 30 03:52:50.052603 systemd-logind[1485]: Removed session 20. Apr 30 03:53:05.676738 systemd[1]: cri-containerd-9774597ae3c1baecabbb3c89373ca9766c5d038420e582be3480aed2fbf74f74.scope: Deactivated successfully. Apr 30 03:53:05.678317 systemd[1]: cri-containerd-9774597ae3c1baecabbb3c89373ca9766c5d038420e582be3480aed2fbf74f74.scope: Consumed 6.152s CPU time. Apr 30 03:53:05.888042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9774597ae3c1baecabbb3c89373ca9766c5d038420e582be3480aed2fbf74f74-rootfs.mount: Deactivated successfully. 
Apr 30 03:53:05.917051 containerd[1500]: time="2025-04-30T03:53:05.887253065Z" level=info msg="shim disconnected" id=9774597ae3c1baecabbb3c89373ca9766c5d038420e582be3480aed2fbf74f74 namespace=k8s.io
Apr 30 03:53:05.927817 containerd[1500]: time="2025-04-30T03:53:05.927532317Z" level=warning msg="cleaning up after shim disconnected" id=9774597ae3c1baecabbb3c89373ca9766c5d038420e582be3480aed2fbf74f74 namespace=k8s.io
Apr 30 03:53:05.927817 containerd[1500]: time="2025-04-30T03:53:05.927598280Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:53:06.180730 kubelet[2723]: E0430 03:53:06.170727 2723 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:55930->10.0.0.2:2379: read: connection timed out"
Apr 30 03:53:06.723106 kubelet[2723]: I0430 03:53:06.723058 2723 scope.go:117] "RemoveContainer" containerID="9774597ae3c1baecabbb3c89373ca9766c5d038420e582be3480aed2fbf74f74"
Apr 30 03:53:06.734963 systemd[1]: cri-containerd-6881c3738463103d8e093fcfcb5cf0c9317e5a1ddaaaf314aae89b116814efaf.scope: Deactivated successfully.
Apr 30 03:53:06.735412 systemd[1]: cri-containerd-6881c3738463103d8e093fcfcb5cf0c9317e5a1ddaaaf314aae89b116814efaf.scope: Consumed 6.946s CPU time, 23.9M memory peak, 0B memory swap peak.
Apr 30 03:53:06.775247 containerd[1500]: time="2025-04-30T03:53:06.770994895Z" level=info msg="shim disconnected" id=6881c3738463103d8e093fcfcb5cf0c9317e5a1ddaaaf314aae89b116814efaf namespace=k8s.io
Apr 30 03:53:06.775247 containerd[1500]: time="2025-04-30T03:53:06.771156137Z" level=warning msg="cleaning up after shim disconnected" id=6881c3738463103d8e093fcfcb5cf0c9317e5a1ddaaaf314aae89b116814efaf namespace=k8s.io
Apr 30 03:53:06.775247 containerd[1500]: time="2025-04-30T03:53:06.771170464Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:53:06.774904 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6881c3738463103d8e093fcfcb5cf0c9317e5a1ddaaaf314aae89b116814efaf-rootfs.mount: Deactivated successfully.
Apr 30 03:53:06.803294 containerd[1500]: time="2025-04-30T03:53:06.803203109Z" level=info msg="CreateContainer within sandbox \"aba9bdf83e8a30a5702d0215d49d8543f638b85e381b4efd51ed7f883d611611\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 30 03:53:06.906564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2360432197.mount: Deactivated successfully.
Apr 30 03:53:06.927871 containerd[1500]: time="2025-04-30T03:53:06.927803778Z" level=info msg="CreateContainer within sandbox \"aba9bdf83e8a30a5702d0215d49d8543f638b85e381b4efd51ed7f883d611611\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"eeb69a67f7092420dcd6e2ad8a5d15f186cb73b97d42abcef570c83bbcefedd6\""
Apr 30 03:53:06.928553 containerd[1500]: time="2025-04-30T03:53:06.928488602Z" level=info msg="StartContainer for \"eeb69a67f7092420dcd6e2ad8a5d15f186cb73b97d42abcef570c83bbcefedd6\""
Apr 30 03:53:06.967869 systemd[1]: Started cri-containerd-eeb69a67f7092420dcd6e2ad8a5d15f186cb73b97d42abcef570c83bbcefedd6.scope - libcontainer container eeb69a67f7092420dcd6e2ad8a5d15f186cb73b97d42abcef570c83bbcefedd6.
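This block records the kubelet's restart path for a failed container: the tigera-operator shim disconnects, containerd reaps the dead shim and rootfs mount, and the kubelet drops the old container ID (RemoveContainer) before creating a replacement in the same sandbox with the Attempt counter bumped to 1. If the container kept dying, the kubelet would space further attempts with its crash-loop backoff; a sketch of that schedule, with the constants (10s base, doubling, five-minute cap) stated as the usual kubelet defaults rather than anything visible in this log:

    import itertools

    def crashloop_delays(base=10.0, cap=300.0):
        """Yield the delays a kubelet-style crash-loop backoff inserts between
        successive restarts of one container: base, 2*base, ... capped at cap.
        (Assumed defaults; the Attempt:1 restart above ran without visible delay.)"""
        delay = base
        while True:
            yield min(delay, cap)
            delay *= 2

    print(list(itertools.islice(crashloop_delays(), 7)))
    # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]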
Apr 30 03:53:07.009884 containerd[1500]: time="2025-04-30T03:53:07.009736005Z" level=info msg="StartContainer for \"eeb69a67f7092420dcd6e2ad8a5d15f186cb73b97d42abcef570c83bbcefedd6\" returns successfully"
Apr 30 03:53:07.715230 kubelet[2723]: I0430 03:53:07.715123 2723 scope.go:117] "RemoveContainer" containerID="6881c3738463103d8e093fcfcb5cf0c9317e5a1ddaaaf314aae89b116814efaf"
Apr 30 03:53:07.718619 containerd[1500]: time="2025-04-30T03:53:07.718570298Z" level=info msg="CreateContainer within sandbox \"ec0f94c9b3cfcff5340cf56753ad0a9c33aee75ad4c690bebcd764cb49438f72\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 30 03:53:07.753110 containerd[1500]: time="2025-04-30T03:53:07.752978037Z" level=info msg="CreateContainer within sandbox \"ec0f94c9b3cfcff5340cf56753ad0a9c33aee75ad4c690bebcd764cb49438f72\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"6f5bd9f94f4603668571f413202256847c390c7ef508b0fdc89893f5d633b86e\""
Apr 30 03:53:07.754385 containerd[1500]: time="2025-04-30T03:53:07.754324582Z" level=info msg="StartContainer for \"6f5bd9f94f4603668571f413202256847c390c7ef508b0fdc89893f5d633b86e\""
Apr 30 03:53:07.795948 systemd[1]: Started cri-containerd-6f5bd9f94f4603668571f413202256847c390c7ef508b0fdc89893f5d633b86e.scope - libcontainer container 6f5bd9f94f4603668571f413202256847c390c7ef508b0fdc89893f5d633b86e.
Apr 30 03:53:07.861167 containerd[1500]: time="2025-04-30T03:53:07.861094371Z" level=info msg="StartContainer for \"6f5bd9f94f4603668571f413202256847c390c7ef508b0fdc89893f5d633b86e\" returns successfully"
Apr 30 03:53:07.896904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1995070706.mount: Deactivated successfully.
Apr 30 03:53:11.015607 kubelet[2723]: E0430 03:53:10.997955 2723 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:55732->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-3-b-745f04f342.183afc456160e567 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-3-b-745f04f342,UID:beadcbfe98d763a38c7be325fa9bca59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-b-745f04f342,},FirstTimestamp:2025-04-30 03:53:00.476171623 +0000 UTC m=+348.310408271,LastTimestamp:2025-04-30 03:53:00.476171623 +0000 UTC m=+348.310408271,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-b-745f04f342,}"
Apr 30 03:53:11.440970 systemd[1]: cri-containerd-07f469e936c31e45d3aeb6a09d935069fcd6b593af4054744cf5844d26ef9026.scope: Deactivated successfully.
Apr 30 03:53:11.442503 systemd[1]: cri-containerd-07f469e936c31e45d3aeb6a09d935069fcd6b593af4054744cf5844d26ef9026.scope: Consumed 4.392s CPU time, 25.4M memory peak, 0B memory swap peak.
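The rejected Event is still useful for forensics: its FirstTimestamp carries a Go monotonic-clock annotation, m=+348.310408271, i.e. seconds since the kubelet process started, so the kubelet's start time falls out by subtraction. A quick check of that arithmetic (timestamps truncated to microseconds, which is all Python's datetime keeps):

    from datetime import datetime, timedelta

    first_ts = datetime.fromisoformat('2025-04-30 03:53:00.476171')  # FirstTimestamp above
    mono = timedelta(seconds=348.310408)                             # the m=+... offset
    print(first_ts - mono)  # 2025-04-30 03:47:12.165763

So this kubelet came up around 03:47:12, and the liveness-probe failure the event describes (statuscode: 500 from kube-apiserver) fired at 03:53:00, about ten seconds before the scope teardowns that follow.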
Apr 30 03:53:11.487025 containerd[1500]: time="2025-04-30T03:53:11.486556409Z" level=info msg="shim disconnected" id=07f469e936c31e45d3aeb6a09d935069fcd6b593af4054744cf5844d26ef9026 namespace=k8s.io
Apr 30 03:53:11.487025 containerd[1500]: time="2025-04-30T03:53:11.486650916Z" level=warning msg="cleaning up after shim disconnected" id=07f469e936c31e45d3aeb6a09d935069fcd6b593af4054744cf5844d26ef9026 namespace=k8s.io
Apr 30 03:53:11.487025 containerd[1500]: time="2025-04-30T03:53:11.486671735Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:53:11.493420 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07f469e936c31e45d3aeb6a09d935069fcd6b593af4054744cf5844d26ef9026-rootfs.mount: Deactivated successfully.
Apr 30 03:53:11.514032 containerd[1500]: time="2025-04-30T03:53:11.513954873Z" level=warning msg="cleanup warnings time=\"2025-04-30T03:53:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 30 03:53:11.732190 kubelet[2723]: I0430 03:53:11.732052 2723 scope.go:117] "RemoveContainer" containerID="07f469e936c31e45d3aeb6a09d935069fcd6b593af4054744cf5844d26ef9026"
Apr 30 03:53:11.734259 containerd[1500]: time="2025-04-30T03:53:11.734198832Z" level=info msg="CreateContainer within sandbox \"93a3255949a8c8e4494bfc6a9258af9385156f475e77330fe09233388a5c70f2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 30 03:53:11.759494 containerd[1500]: time="2025-04-30T03:53:11.757940589Z" level=info msg="CreateContainer within sandbox \"93a3255949a8c8e4494bfc6a9258af9385156f475e77330fe09233388a5c70f2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"47d4bbf47bac6e8ede0d83176485cc285127c9b1311b1aa9dce284cdf3ed51b0\""
Apr 30 03:53:11.760720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1195923321.mount: Deactivated successfully.
Apr 30 03:53:11.763869 containerd[1500]: time="2025-04-30T03:53:11.761289578Z" level=info msg="StartContainer for \"47d4bbf47bac6e8ede0d83176485cc285127c9b1311b1aa9dce284cdf3ed51b0\""
Apr 30 03:53:11.829916 systemd[1]: Started cri-containerd-47d4bbf47bac6e8ede0d83176485cc285127c9b1311b1aa9dce284cdf3ed51b0.scope - libcontainer container 47d4bbf47bac6e8ede0d83176485cc285127c9b1311b1aa9dce284cdf3ed51b0.
Apr 30 03:53:11.889010 containerd[1500]: time="2025-04-30T03:53:11.888929237Z" level=info msg="StartContainer for \"47d4bbf47bac6e8ede0d83176485cc285127c9b1311b1aa9dce284cdf3ed51b0\" returns successfully"
Apr 30 03:53:12.495304 systemd[1]: run-containerd-runc-k8s.io-47d4bbf47bac6e8ede0d83176485cc285127c9b1311b1aa9dce284cdf3ed51b0-runc.IO8iyu.mount: Deactivated successfully.
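Each containerd entry above is logfmt: space-separated key=value pairs where values containing spaces are double-quoted and journald escapes the inner quotes. A small extractor for those fields, exercised against the shim-disconnected line from this block; the doubly nested cleanup-warnings entry at 03:53:11.514032 would need a second unescaping pass, which this sketch leaves out:

    import re

    # A value is either a double-quoted string (escapes allowed) or a bare token.
    FIELD = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

    def containerd_fields(entry):
        """Parse one containerd logfmt entry into a dict of its key=value pairs."""
        return {key: value.strip('"') for key, value in FIELD.findall(entry)}

    entry = ('time="2025-04-30T03:53:11.486556409Z" level=info '
             'msg="shim disconnected" '
             'id=07f469e936c31e45d3aeb6a09d935069fcd6b593af4054744cf5844d26ef9026 '
             'namespace=k8s.io')
    fields = containerd_fields(entry)
    print(fields['level'], '-', fields['msg'])  # info - shim disconnected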