Apr 30 03:36:32.058975 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025
Apr 30 03:36:32.058997 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:36:32.059005 kernel: BIOS-provided physical RAM map:
Apr 30 03:36:32.059010 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 30 03:36:32.059016 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 30 03:36:32.059021 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 30 03:36:32.059028 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Apr 30 03:36:32.059034 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Apr 30 03:36:32.059041 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 30 03:36:32.059047 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 30 03:36:32.059052 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 30 03:36:32.059068 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 30 03:36:32.059074 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 30 03:36:32.059079 kernel: NX (Execute Disable) protection: active
Apr 30 03:36:32.059088 kernel: APIC: Static calls initialized
Apr 30 03:36:32.059094 kernel: SMBIOS 3.0.0 present.
Apr 30 03:36:32.059100 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Apr 30 03:36:32.059107 kernel: Hypervisor detected: KVM
Apr 30 03:36:32.059113 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 30 03:36:32.059119 kernel: kvm-clock: using sched offset of 3237297856 cycles
Apr 30 03:36:32.059126 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 30 03:36:32.059133 kernel: tsc: Detected 2495.312 MHz processor
Apr 30 03:36:32.059139 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 03:36:32.059147 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 03:36:32.059153 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Apr 30 03:36:32.059160 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 30 03:36:32.059166 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 03:36:32.059172 kernel: Using GB pages for direct mapping
Apr 30 03:36:32.059178 kernel: ACPI: Early table checksum verification disabled
Apr 30 03:36:32.059185 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
Apr 30 03:36:32.059191 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:36:32.059197 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:36:32.059205 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:36:32.059211 kernel: ACPI: FACS 0x000000007CFE0000 000040
Apr 30 03:36:32.059218 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:36:32.059224 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:36:32.059230 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:36:32.059236 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:36:32.059243 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Apr 30 03:36:32.059249 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Apr 30 03:36:32.059259 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Apr 30 03:36:32.059265 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Apr 30 03:36:32.059272 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Apr 30 03:36:32.059279 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Apr 30 03:36:32.059285 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Apr 30 03:36:32.059292 kernel: No NUMA configuration found
Apr 30 03:36:32.059299 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Apr 30 03:36:32.059306 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Apr 30 03:36:32.059312 kernel: Zone ranges:
Apr 30 03:36:32.059319 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 03:36:32.059326 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Apr 30 03:36:32.059332 kernel: Normal empty
Apr 30 03:36:32.059339 kernel: Movable zone start for each node
Apr 30 03:36:32.059345 kernel: Early memory node ranges
Apr 30 03:36:32.059352 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 30 03:36:32.059358 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Apr 30 03:36:32.059366 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Apr 30 03:36:32.059372 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 03:36:32.059379 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 30 03:36:32.059385 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 30 03:36:32.059392 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 30 03:36:32.059399 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 30 03:36:32.059405 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 30 03:36:32.059412 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 30 03:36:32.059419 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 30 03:36:32.059427 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 03:36:32.059433 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 30 03:36:32.059440 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 30 03:36:32.059447 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 03:36:32.059453 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 30 03:36:32.059460 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 30 03:36:32.059466 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 30 03:36:32.059473 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 30 03:36:32.059479 kernel: Booting paravirtualized kernel on KVM
Apr 30 03:36:32.059488 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 03:36:32.059494 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 30 03:36:32.059501 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Apr 30 03:36:32.059508 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Apr 30 03:36:32.059514 kernel: pcpu-alloc: [0] 0 1
Apr 30 03:36:32.059521 kernel: kvm-guest: PV spinlocks disabled, no host support
Apr 30 03:36:32.059528 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:36:32.059535 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 03:36:32.059543 kernel: random: crng init done
Apr 30 03:36:32.059550 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 03:36:32.059556 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 30 03:36:32.059563 kernel: Fallback order for Node 0: 0
Apr 30 03:36:32.059570 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Apr 30 03:36:32.059576 kernel: Policy zone: DMA32
Apr 30 03:36:32.059583 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 03:36:32.059589 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 125152K reserved, 0K cma-reserved)
Apr 30 03:36:32.059596 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 03:36:32.059604 kernel: ftrace: allocating 37944 entries in 149 pages
Apr 30 03:36:32.059610 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 03:36:32.059617 kernel: Dynamic Preempt: voluntary
Apr 30 03:36:32.059623 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 03:36:32.059632 kernel: rcu: RCU event tracing is enabled.
Apr 30 03:36:32.059639 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 03:36:32.059646 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 03:36:32.059653 kernel: Rude variant of Tasks RCU enabled.
Apr 30 03:36:32.059659 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 03:36:32.059667 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 03:36:32.059674 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 03:36:32.059680 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 30 03:36:32.059687 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 03:36:32.059693 kernel: Console: colour VGA+ 80x25
Apr 30 03:36:32.059700 kernel: printk: console [tty0] enabled
Apr 30 03:36:32.059707 kernel: printk: console [ttyS0] enabled
Apr 30 03:36:32.059713 kernel: ACPI: Core revision 20230628
Apr 30 03:36:32.059720 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 30 03:36:32.059728 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 03:36:32.059735 kernel: x2apic enabled
Apr 30 03:36:32.059741 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 30 03:36:32.059748 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 30 03:36:32.059754 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 30 03:36:32.059761 kernel: Calibrating delay loop (skipped) preset value.. 4990.62 BogoMIPS (lpj=2495312)
Apr 30 03:36:32.059768 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 30 03:36:32.059774 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 30 03:36:32.059781 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 30 03:36:32.059794 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 03:36:32.059801 kernel: Spectre V2 : Mitigation: Retpolines
Apr 30 03:36:32.059808 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 03:36:32.061835 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 30 03:36:32.061842 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Apr 30 03:36:32.061849 kernel: RETBleed: Mitigation: untrained return thunk
Apr 30 03:36:32.061856 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 30 03:36:32.061863 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 30 03:36:32.061871 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 03:36:32.061879 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 03:36:32.061886 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 03:36:32.061893 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 03:36:32.061900 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Apr 30 03:36:32.061907 kernel: Freeing SMP alternatives memory: 32K
Apr 30 03:36:32.061914 kernel: pid_max: default: 32768 minimum: 301
Apr 30 03:36:32.061921 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 03:36:32.061929 kernel: landlock: Up and running.
Apr 30 03:36:32.061936 kernel: SELinux: Initializing.
Apr 30 03:36:32.061943 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 03:36:32.061950 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 03:36:32.061957 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Apr 30 03:36:32.061964 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:36:32.061971 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:36:32.061978 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:36:32.061985 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 30 03:36:32.061993 kernel: ... version: 0
Apr 30 03:36:32.062000 kernel: ... bit width: 48
Apr 30 03:36:32.062007 kernel: ... generic registers: 6
Apr 30 03:36:32.062014 kernel: ... value mask: 0000ffffffffffff
Apr 30 03:36:32.062021 kernel: ... max period: 00007fffffffffff
Apr 30 03:36:32.062028 kernel: ... fixed-purpose events: 0
Apr 30 03:36:32.062035 kernel: ... event mask: 000000000000003f
Apr 30 03:36:32.062042 kernel: signal: max sigframe size: 1776
Apr 30 03:36:32.062049 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 03:36:32.062064 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 03:36:32.062071 kernel: smp: Bringing up secondary CPUs ...
Apr 30 03:36:32.062078 kernel: smpboot: x86: Booting SMP configuration:
Apr 30 03:36:32.062085 kernel: .... node #0, CPUs: #1
Apr 30 03:36:32.062092 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 03:36:32.062099 kernel: smpboot: Max logical packages: 1
Apr 30 03:36:32.062106 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS)
Apr 30 03:36:32.062113 kernel: devtmpfs: initialized
Apr 30 03:36:32.062119 kernel: x86/mm: Memory block size: 128MB
Apr 30 03:36:32.062127 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 03:36:32.062135 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 03:36:32.062142 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 03:36:32.062149 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 03:36:32.062156 kernel: audit: initializing netlink subsys (disabled)
Apr 30 03:36:32.062163 kernel: audit: type=2000 audit(1745984190.675:1): state=initialized audit_enabled=0 res=1
Apr 30 03:36:32.062170 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 03:36:32.062176 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 30 03:36:32.062183 kernel: cpuidle: using governor menu
Apr 30 03:36:32.062190 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 03:36:32.062199 kernel: dca service started, version 1.12.1
Apr 30 03:36:32.062206 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 30 03:36:32.062213 kernel: PCI: Using configuration type 1 for base access
Apr 30 03:36:32.062220 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 30 03:36:32.062227 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 03:36:32.062234 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 03:36:32.062241 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 03:36:32.062247 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 03:36:32.062256 kernel: ACPI: Added _OSI(Module Device)
Apr 30 03:36:32.062263 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 03:36:32.062269 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 03:36:32.062276 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 03:36:32.062283 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 03:36:32.062290 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 30 03:36:32.062297 kernel: ACPI: Interpreter enabled
Apr 30 03:36:32.062304 kernel: ACPI: PM: (supports S0 S5)
Apr 30 03:36:32.062311 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 03:36:32.062318 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 03:36:32.062326 kernel: PCI: Using E820 reservations for host bridge windows
Apr 30 03:36:32.062333 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 30 03:36:32.062340 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 03:36:32.062460 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 03:36:32.062536 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 30 03:36:32.062607 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 30 03:36:32.062616 kernel: PCI host bridge to bus 0000:00
Apr 30 03:36:32.062700 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 30 03:36:32.062765 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 30 03:36:32.062843 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 30 03:36:32.062909 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Apr 30 03:36:32.062970 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 30 03:36:32.063032 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 30 03:36:32.063106 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 03:36:32.063198 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 30 03:36:32.063278 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Apr 30 03:36:32.063351 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Apr 30 03:36:32.063423 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Apr 30 03:36:32.063494 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Apr 30 03:36:32.063565 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Apr 30 03:36:32.063641 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 30 03:36:32.063721 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 30 03:36:32.063794 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Apr 30 03:36:32.065959 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 30 03:36:32.066956 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Apr 30 03:36:32.067124 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 30 03:36:32.067213 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Apr 30 03:36:32.067299 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 30 03:36:32.067375 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Apr 30 03:36:32.067458 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 30 03:36:32.067534 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Apr 30 03:36:32.067617 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 30 03:36:32.067703 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Apr 30 03:36:32.067786 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 30 03:36:32.067885 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Apr 30 03:36:32.067967 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 30 03:36:32.068042 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Apr 30 03:36:32.068137 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 30 03:36:32.068213 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Apr 30 03:36:32.068304 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 30 03:36:32.068380 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 30 03:36:32.068461 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 30 03:36:32.068535 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Apr 30 03:36:32.068610 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Apr 30 03:36:32.068690 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 30 03:36:32.068769 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 30 03:36:32.070919 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 30 03:36:32.071004 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Apr 30 03:36:32.071089 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Apr 30 03:36:32.071162 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Apr 30 03:36:32.071236 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 30 03:36:32.071309 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Apr 30 03:36:32.071385 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Apr 30 03:36:32.071470 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 30 03:36:32.071547 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Apr 30 03:36:32.071619 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 30 03:36:32.071690 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Apr 30 03:36:32.071762 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Apr 30 03:36:32.072270 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 30 03:36:32.072352 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Apr 30 03:36:32.072426 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Apr 30 03:36:32.072497 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 30 03:36:32.072566 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Apr 30 03:36:32.072636 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Apr 30 03:36:32.072716 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 30 03:36:32.072798 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Apr 30 03:36:32.072885 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 30 03:36:32.072958 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Apr 30 03:36:32.073028 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Apr 30 03:36:32.073121 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 30 03:36:32.073198 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
Apr 30 03:36:32.073272 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Apr 30 03:36:32.073345 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 30 03:36:32.073420 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Apr 30 03:36:32.073493 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Apr 30 03:36:32.073572 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 30 03:36:32.073648 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Apr 30 03:36:32.073724 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Apr 30 03:36:32.073795 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 30 03:36:32.073985 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Apr 30 03:36:32.074073 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Apr 30 03:36:32.074083 kernel: acpiphp: Slot [0] registered
Apr 30 03:36:32.074167 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 30 03:36:32.074240 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Apr 30 03:36:32.074312 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Apr 30 03:36:32.074385 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Apr 30 03:36:32.074453 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 30 03:36:32.074522 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Apr 30 03:36:32.074596 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Apr 30 03:36:32.074606 kernel: acpiphp: Slot [0-2] registered
Apr 30 03:36:32.074675 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 30 03:36:32.074773 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Apr 30 03:36:32.074857 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Apr 30 03:36:32.074867 kernel: acpiphp: Slot [0-3] registered
Apr 30 03:36:32.074938 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 30 03:36:32.075008 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Apr 30 03:36:32.075094 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Apr 30 03:36:32.075104 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 30 03:36:32.075112 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 30 03:36:32.075120 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 30 03:36:32.075127 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 30 03:36:32.075135 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 30 03:36:32.075142 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 30 03:36:32.075150 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 30 03:36:32.075157 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 30 03:36:32.075167 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 30 03:36:32.075174 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 30 03:36:32.075181 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 30 03:36:32.075188 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 30 03:36:32.075196 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 30 03:36:32.075203 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 30 03:36:32.075210 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 30 03:36:32.075217 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 30 03:36:32.075225 kernel: iommu: Default domain type: Translated
Apr 30 03:36:32.075234 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 30 03:36:32.075241 kernel: PCI: Using ACPI for IRQ routing
Apr 30 03:36:32.075248 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 30 03:36:32.075256 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 30 03:36:32.075263 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Apr 30 03:36:32.075338 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 30 03:36:32.075409 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 30 03:36:32.075478 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 30 03:36:32.075488 kernel: vgaarb: loaded
Apr 30 03:36:32.075498 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 30 03:36:32.075505 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 30 03:36:32.075513 kernel: clocksource: Switched to clocksource kvm-clock
Apr 30 03:36:32.075520 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 03:36:32.075528 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 03:36:32.075535 kernel: pnp: PnP ACPI init
Apr 30 03:36:32.075614 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 30 03:36:32.075626 kernel: pnp: PnP ACPI: found 5 devices
Apr 30 03:36:32.075635 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 30 03:36:32.075642 kernel: NET: Registered PF_INET protocol family
Apr 30 03:36:32.075650 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 03:36:32.075658 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 30 03:36:32.075666 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 03:36:32.075675 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 30 03:36:32.075683 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 30 03:36:32.075692 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 30 03:36:32.075700 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 03:36:32.075709 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 03:36:32.075716 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 03:36:32.075723 kernel: NET: Registered PF_XDP protocol family
Apr 30 03:36:32.075795 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 30 03:36:32.075918 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 30 03:36:32.075990 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 30 03:36:32.076070 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Apr 30 03:36:32.076146 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Apr 30 03:36:32.076243 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Apr 30 03:36:32.076313 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 30 03:36:32.076383 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Apr 30 03:36:32.076453 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Apr 30 03:36:32.076523 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 30 03:36:32.076593 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Apr 30 03:36:32.076667 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Apr 30 03:36:32.076744 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 30 03:36:32.076827 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Apr 30 03:36:32.076899 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Apr 30 03:36:32.076969 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 30 03:36:32.077039 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Apr 30 03:36:32.077121 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Apr 30 03:36:32.077192 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 30 03:36:32.077267 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Apr 30 03:36:32.077350 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Apr 30 03:36:32.077420 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 30 03:36:32.077491 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Apr 30 03:36:32.077561 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Apr 30 03:36:32.077630 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 30 03:36:32.077700 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Apr 30 03:36:32.077770 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Apr 30 03:36:32.077889 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Apr 30 03:36:32.077961 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 30 03:36:32.078035 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Apr 30 03:36:32.078115 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Apr 30 03:36:32.078184 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Apr 30 03:36:32.078254 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 30 03:36:32.078327 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Apr 30 03:36:32.078397 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Apr 30 03:36:32.078470 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Apr 30 03:36:32.078541 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 30 03:36:32.078605 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 30 03:36:32.078667 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 30 03:36:32.078731 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Apr 30 03:36:32.078793 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 30 03:36:32.078869 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 30 03:36:32.078944 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Apr 30 03:36:32.079011 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Apr 30 03:36:32.079093 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Apr 30 03:36:32.079160 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Apr 30 03:36:32.079235 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Apr 30 03:36:32.079301 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Apr 30 03:36:32.079374 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Apr 30 03:36:32.079439 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Apr 30 03:36:32.079511 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Apr 30 03:36:32.079577 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Apr 30 03:36:32.079657 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Apr 30 03:36:32.079728 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Apr 30 03:36:32.079800 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Apr 30 03:36:32.079919 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Apr 30 03:36:32.079984 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Apr 30 03:36:32.080066 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Apr 30 03:36:32.080157 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Apr 30 03:36:32.080225 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Apr 30 03:36:32.080296 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Apr 30 03:36:32.080361 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Apr 30 03:36:32.080427 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Apr 30 03:36:32.080437 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 30 03:36:32.080445 kernel: PCI: CLS 0 bytes, default 64
Apr 30 03:36:32.080453 kernel: Initialise system trusted keyrings
Apr 30 03:36:32.080464 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 30 03:36:32.080471 kernel: Key type asymmetric registered
Apr 30 03:36:32.080479 kernel: Asymmetric key parser 'x509' registered
Apr 30 03:36:32.080486 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 30 03:36:32.080494 kernel: io scheduler mq-deadline registered
Apr 30 03:36:32.080502 kernel: io scheduler kyber registered
Apr 30 03:36:32.080511 kernel: io scheduler bfq registered
Apr 30 03:36:32.080583 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Apr 30 03:36:32.080655 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Apr 30 03:36:32.080729 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Apr 30 03:36:32.080799 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Apr 30 03:36:32.080885 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Apr 30 03:36:32.080956 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Apr 30 03:36:32.081028 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Apr 30 03:36:32.081108 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Apr 30 03:36:32.081179 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Apr 30 03:36:32.081249 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Apr 30 03:36:32.081323 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Apr 30 03:36:32.081422 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Apr 30 03:36:32.081493 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Apr 30 03:36:32.081563 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Apr 30 03:36:32.081633 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Apr 30 03:36:32.081707 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Apr 30 03:36:32.081718 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 30 03:36:32.081786 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Apr 30 03:36:32.081910 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Apr 30 03:36:32.081924 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 30 03:36:32.081933 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Apr 30 03:36:32.081940 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 03:36:32.081948 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 30 03:36:32.081956 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 30 03:36:32.081963 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 30 03:36:32.081971 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 30 03:36:32.081979 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 30 03:36:32.082052 kernel: rtc_cmos 00:03: RTC can wake from S4
Apr 30 03:36:32.082130 kernel: rtc_cmos 00:03: registered as rtc0
Apr 30 03:36:32.082194 kernel: rtc_cmos 00:03: setting system clock to 2025-04-30T03:36:31 UTC (1745984191)
Apr 30 03:36:32.082258 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 30 03:36:32.082268 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Apr 30 03:36:32.082276 kernel: NET: Registered PF_INET6 protocol family
Apr 30 03:36:32.082283 kernel: Segment Routing with IPv6
Apr 30 03:36:32.082291 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 03:36:32.082301 kernel: NET: Registered PF_PACKET protocol family
Apr 30 03:36:32.082309 kernel: Key type dns_resolver registered
Apr 30 03:36:32.082317 kernel: IPI shorthand broadcast: enabled
Apr 30 03:36:32.082324 kernel: sched_clock: Marking stable (1307017954, 145170329)->(1463360894, -11172611)
Apr 30 03:36:32.082332 kernel: registered taskstats version 1
Apr 30 03:36:32.082340 kernel: Loading compiled-in X.509 certificates
Apr 30 03:36:32.082347 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b'
Apr 30 03:36:32.082355 kernel: Key type .fscrypt registered
Apr 30 03:36:32.082362 kernel: Key type fscrypt-provisioning registered
Apr 30 03:36:32.082371 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 03:36:32.082378 kernel: ima: Allocated hash algorithm: sha1
Apr 30 03:36:32.082386 kernel: ima: No architecture policies found
Apr 30 03:36:32.082393 kernel: clk: Disabling unused clocks
Apr 30 03:36:32.082401 kernel: Freeing unused kernel image (initmem) memory: 42864K
Apr 30 03:36:32.082408 kernel: Write protecting the kernel read-only data: 36864k
Apr 30 03:36:32.082416 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
Apr 30 03:36:32.082424 kernel: Run /init as init process
Apr 30 03:36:32.082431 kernel: with arguments:
Apr 30 03:36:32.082440 kernel: /init
Apr 30 03:36:32.082447 kernel: with environment:
Apr 30 03:36:32.082454 kernel: HOME=/
Apr 30 03:36:32.082462 kernel: TERM=linux
Apr 30 03:36:32.082469 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 03:36:32.082479 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:36:32.082490 systemd[1]: Detected virtualization kvm.
Apr 30 03:36:32.082498 systemd[1]: Detected architecture x86-64.
Apr 30 03:36:32.082507 systemd[1]: Running in initrd.
Apr 30 03:36:32.082514 systemd[1]: No hostname configured, using default hostname.
Apr 30 03:36:32.082522 systemd[1]: Hostname set to .
Apr 30 03:36:32.082530 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 03:36:32.082538 systemd[1]: Queued start job for default target initrd.target.
Apr 30 03:36:32.082546 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:36:32.082554 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:36:32.082563 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 03:36:32.082572 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:36:32.082580 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 03:36:32.082612 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 03:36:32.082622 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 03:36:32.082630 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 03:36:32.082638 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:36:32.082647 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:36:32.082656 systemd[1]: Reached target paths.target - Path Units.
Apr 30 03:36:32.082665 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:36:32.082675 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:36:32.082684 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 03:36:32.082691 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:36:32.082699 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:36:32.082707 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 03:36:32.082715 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 03:36:32.082724 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:36:32.082733 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:36:32.082740 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:36:32.082748 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 03:36:32.082756 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 03:36:32.082764 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:36:32.082772 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 03:36:32.082780 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 03:36:32.082788 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:36:32.082797 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:36:32.082805 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:36:32.082825 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 03:36:32.082833 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:36:32.082860 systemd-journald[188]: Collecting audit messages is disabled.
Apr 30 03:36:32.082883 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 03:36:32.082891 kernel: Bridge firewalling registered
Apr 30 03:36:32.082899 systemd-journald[188]: Journal started
Apr 30 03:36:32.082918 systemd-journald[188]: Runtime Journal (/run/log/journal/16d1238d50e347d8ae21c7c772b06be4) is 4.8M, max 38.4M, 33.6M free.
Apr 30 03:36:32.044285 systemd-modules-load[189]: Inserted module 'overlay'
Apr 30 03:36:32.070608 systemd-modules-load[189]: Inserted module 'br_netfilter'
Apr 30 03:36:32.098850 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:36:32.099101 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 03:36:32.100184 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:36:32.101474 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:36:32.109036 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:36:32.110856 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:36:32.112924 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 03:36:32.122104 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 03:36:32.128345 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:36:32.137027 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:36:32.137854 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:36:32.140050 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:36:32.141252 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:36:32.142551 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:36:32.147964 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 03:36:32.152965 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 03:36:32.159276 dracut-cmdline[222]: dracut-dracut-053
Apr 30 03:36:32.161547 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:36:32.179658 systemd-resolved[226]: Positive Trust Anchors:
Apr 30 03:36:32.180337 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 03:36:32.180368 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 03:36:32.182627 systemd-resolved[226]: Defaulting to hostname 'linux'.
Apr 30 03:36:32.189487 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 03:36:32.190291 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:36:32.217866 kernel: SCSI subsystem initialized
Apr 30 03:36:32.226854 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 03:36:32.243857 kernel: iscsi: registered transport (tcp)
Apr 30 03:36:32.263089 kernel: iscsi: registered transport (qla4xxx)
Apr 30 03:36:32.263180 kernel: QLogic iSCSI HBA Driver
Apr 30 03:36:32.300045 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:36:32.302953 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 03:36:32.327115 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 03:36:32.327192 kernel: device-mapper: uevent: version 1.0.3
Apr 30 03:36:32.327209 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 03:36:32.376890 kernel: raid6: avx2x4 gen() 19413 MB/s
Apr 30 03:36:32.393863 kernel: raid6: avx2x2 gen() 20786 MB/s
Apr 30 03:36:32.410983 kernel: raid6: avx2x1 gen() 26475 MB/s
Apr 30 03:36:32.411024 kernel: raid6: using algorithm avx2x1 gen() 26475 MB/s
Apr 30 03:36:32.429156 kernel: raid6: .... xor() 16016 MB/s, rmw enabled
Apr 30 03:36:32.429223 kernel: raid6: using avx2x2 recovery algorithm
Apr 30 03:36:32.448866 kernel: xor: automatically using best checksumming function avx
Apr 30 03:36:32.597872 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 03:36:32.613955 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:36:32.622115 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:36:32.633259 systemd-udevd[408]: Using default interface naming scheme 'v255'.
Apr 30 03:36:32.636985 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:36:32.647034 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 03:36:32.662969 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Apr 30 03:36:32.692719 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:36:32.701984 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:36:32.754069 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:36:32.765077 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 03:36:32.787948 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:36:32.790339 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:36:32.791583 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:36:32.793107 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:36:32.800999 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 03:36:32.816518 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:36:32.848033 kernel: scsi host0: Virtio SCSI HBA
Apr 30 03:36:32.848224 kernel: cryptd: max_cpu_qlen set to 1000
Apr 30 03:36:32.850727 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 30 03:36:32.868685 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:36:32.868824 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:36:32.869526 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:36:32.872107 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:36:32.872223 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:36:32.872723 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:36:32.905204 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:36:32.919852 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 30 03:36:32.919892 kernel: libata version 3.00 loaded.
Apr 30 03:36:32.921838 kernel: ACPI: bus type USB registered
Apr 30 03:36:32.921862 kernel: AES CTR mode by8 optimization enabled
Apr 30 03:36:32.934847 kernel: usbcore: registered new interface driver usbfs
Apr 30 03:36:32.934910 kernel: usbcore: registered new interface driver hub
Apr 30 03:36:32.934924 kernel: usbcore: registered new device driver usb
Apr 30 03:36:32.945830 kernel: ahci 0000:00:1f.2: version 3.0
Apr 30 03:36:32.960144 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 30 03:36:32.960160 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 30 03:36:32.960260 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 30 03:36:32.960356 kernel: scsi host1: ahci
Apr 30 03:36:32.960452 kernel: scsi host2: ahci
Apr 30 03:36:32.960537 kernel: sd 0:0:0:0: Power-on or device reset occurred
Apr 30 03:36:32.961021 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Apr 30 03:36:32.961144 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 30 03:36:32.961238 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Apr 30 03:36:32.961329 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 30 03:36:32.961418 kernel: scsi host3: ahci
Apr 30 03:36:32.961505 kernel: scsi host4: ahci
Apr 30 03:36:32.961595 kernel: scsi host5: ahci
Apr 30 03:36:32.961679 kernel: scsi host6: ahci
Apr 30 03:36:32.961762 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48
Apr 30 03:36:32.961771 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48
Apr 30 03:36:32.961783 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48
Apr 30 03:36:32.961792 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48
Apr 30 03:36:32.961800 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48
Apr 30 03:36:32.961825 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48
Apr 30 03:36:32.961834 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 03:36:32.961842 kernel: GPT:17805311 != 80003071
Apr 30 03:36:32.961850 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 03:36:32.961859 kernel: GPT:17805311 != 80003071
Apr 30 03:36:32.961867 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 03:36:32.961877 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 03:36:32.961886 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 30 03:36:33.007609 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:36:33.013974 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:36:33.027933 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:36:33.277500 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 30 03:36:33.277621 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 30 03:36:33.277653 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 30 03:36:33.277682 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Apr 30 03:36:33.280871 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 30 03:36:33.285828 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 30 03:36:33.287793 kernel: ata1.00: applying bridge limits
Apr 30 03:36:33.290856 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 30 03:36:33.290930 kernel: ata1.00: configured for UDMA/100
Apr 30 03:36:33.297866 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 30 03:36:33.333869 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 30 03:36:33.400551 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Apr 30 03:36:33.400779 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Apr 30 03:36:33.401011 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 30 03:36:33.401227 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Apr 30 03:36:33.401419 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Apr 30 03:36:33.401689 kernel: hub 1-0:1.0: USB hub found
Apr 30 03:36:33.401959 kernel: hub 1-0:1.0: 4 ports detected
Apr 30 03:36:33.402698 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Apr 30 03:36:33.402985 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 30 03:36:33.413902 kernel: hub 2-0:1.0: USB hub found
Apr 30 03:36:33.414067 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 30 03:36:33.414089 kernel: hub 2-0:1.0: 4 ports detected
Apr 30 03:36:33.414223 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (470)
Apr 30 03:36:33.414238 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (463)
Apr 30 03:36:33.414251 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Apr 30 03:36:33.424553 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 30 03:36:33.439646 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 30 03:36:33.449913 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 30 03:36:33.454761 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 30 03:36:33.455364 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 30 03:36:33.464083 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 03:36:33.469861 disk-uuid[582]: Primary Header is updated.
Apr 30 03:36:33.469861 disk-uuid[582]: Secondary Entries is updated.
Apr 30 03:36:33.469861 disk-uuid[582]: Secondary Header is updated.
Apr 30 03:36:33.480851 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:36:33.489840 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:36:33.621975 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 30 03:36:33.764874 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 30 03:36:33.769945 kernel: usbcore: registered new interface driver usbhid Apr 30 03:36:33.769991 kernel: usbhid: USB HID core driver Apr 30 03:36:33.777500 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Apr 30 03:36:33.777546 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 30 03:36:34.500333 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:36:34.500413 disk-uuid[584]: The operation has completed successfully. Apr 30 03:36:34.570133 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 03:36:34.570244 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 03:36:34.584002 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 03:36:34.590855 sh[600]: Success Apr 30 03:36:34.609858 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 30 03:36:34.669001 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 03:36:34.675970 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 03:36:34.680229 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 30 03:36:34.693470 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26 Apr 30 03:36:34.693529 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:36:34.693543 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 03:36:34.695896 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 03:36:34.697174 kernel: BTRFS info (device dm-0): using free space tree Apr 30 03:36:34.706857 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 30 03:36:34.708203 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 03:36:34.709269 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 03:36:34.714974 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 03:36:34.719036 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 03:36:34.734939 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:36:34.735004 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:36:34.735020 kernel: BTRFS info (device sda6): using free space tree Apr 30 03:36:34.740779 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 03:36:34.740837 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 03:36:34.750237 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:36:34.749963 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 30 03:36:34.757392 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 03:36:34.761950 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
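verity-setup brings up /dev/mapper/usr, a read-only device whose blocks are verified against a sha256 hash tree (the log notes the sha256-ni implementation) before the filesystem ever sees them. A toy, single-level sketch of the idea; real dm-verity uses a multi-level salted tree and an on-disk superblock, none of which is modeled here:

    import hashlib

    BLOCK = 4096  # dm-verity's default data-block size

    def toy_verity_root(data: bytes) -> bytes:
        """Hash every block, then hash the concatenated hashes.
        Illustrates why one trusted root hash pins every block of /usr;
        the real on-disk format (multi-level tree, salt) is not modeled."""
        blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
        leaves = b"".join(hashlib.sha256(b).digest() for b in blocks)
        return hashlib.sha256(leaves).digest()

    image = b"\x00" * (8 * BLOCK)        # stand-in for the usr partition
    print(toy_verity_root(image).hex())  # compared against the trusted hash at setup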
Apr 30 03:36:34.780553 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 03:36:34.786968 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:36:34.810002 systemd-networkd[781]: lo: Link UP Apr 30 03:36:34.810010 systemd-networkd[781]: lo: Gained carrier Apr 30 03:36:34.811942 systemd-networkd[781]: Enumeration completed Apr 30 03:36:34.812527 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:36:34.812530 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:36:34.813472 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:36:34.813475 systemd-networkd[781]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:36:34.817949 systemd-networkd[781]: eth0: Link UP Apr 30 03:36:34.817952 systemd-networkd[781]: eth0: Gained carrier Apr 30 03:36:34.817962 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:36:34.818200 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:36:34.819870 systemd[1]: Reached target network.target - Network. Apr 30 03:36:34.824757 systemd-networkd[781]: eth1: Link UP Apr 30 03:36:34.824760 systemd-networkd[781]: eth1: Gained carrier Apr 30 03:36:34.824772 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:36:34.840310 ignition[748]: Ignition 2.19.0 Apr 30 03:36:34.840319 ignition[748]: Stage: fetch-offline Apr 30 03:36:34.841956 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 03:36:34.840349 ignition[748]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:36:34.840355 ignition[748]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 03:36:34.840441 ignition[748]: parsed url from cmdline: "" Apr 30 03:36:34.840444 ignition[748]: no config URL provided Apr 30 03:36:34.840448 ignition[748]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 03:36:34.840453 ignition[748]: no config at "/usr/lib/ignition/user.ign" Apr 30 03:36:34.840457 ignition[748]: failed to fetch config: resource requires networking Apr 30 03:36:34.840605 ignition[748]: Ignition finished successfully Apr 30 03:36:34.851251 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
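The fetch-offline trace shows Ignition walking its offline sources in order: a config URL from the kernel command line (empty here), then /usr/lib/ignition/user.ign (absent), before giving up with "failed to fetch config: resource requires networking" so the networked fetch stage can run. A rough sketch of that control flow; the function and exception names are invented for illustration:

    import os

    class NeedsNetworking(Exception):
        """Config can only come from a networked source."""

    def fetch_offline(cmdline_url: str) -> bytes:
        # 1. A config URL on the kernel command line would need the network.
        if cmdline_url:
            raise NeedsNetworking(cmdline_url)
        # 2. A baked-in user config is used as-is ("reading system config
        #    file /usr/lib/ignition/user.ign" in the trace, absent here).
        if os.path.exists("/usr/lib/ignition/user.ign"):
            with open("/usr/lib/ignition/user.ign", "rb") as f:
                return f.read()
        # 3. On a cloud platform the real config sits behind the metadata
        #    service, so the offline stage must defer to the fetch stage.
        raise NeedsNetworking("platform metadata requires networking")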
Apr 30 03:36:34.854895 systemd-networkd[781]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 03:36:34.863217 ignition[789]: Ignition 2.19.0 Apr 30 03:36:34.863226 ignition[789]: Stage: fetch Apr 30 03:36:34.863383 ignition[789]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:36:34.863391 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 03:36:34.863457 ignition[789]: parsed url from cmdline: "" Apr 30 03:36:34.863460 ignition[789]: no config URL provided Apr 30 03:36:34.863463 ignition[789]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 03:36:34.863469 ignition[789]: no config at "/usr/lib/ignition/user.ign" Apr 30 03:36:34.863484 ignition[789]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Apr 30 03:36:34.863599 ignition[789]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 30 03:36:34.876884 systemd-networkd[781]: eth0: DHCPv4 address 37.27.214.59/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 30 03:36:35.064145 ignition[789]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Apr 30 03:36:35.069303 ignition[789]: GET result: OK Apr 30 03:36:35.069436 ignition[789]: parsing config with SHA512: 9c94414091118053ed208a96e61d41b600049fb912df60707cfb8b20a76089bf2a278be9e740119893a7648efbd059441784239aced5776afe4b4649607d3f24 Apr 30 03:36:35.076226 unknown[789]: fetched base config from "system" Apr 30 03:36:35.076244 unknown[789]: fetched base config from "system" Apr 30 03:36:35.077342 ignition[789]: fetch: fetch complete Apr 30 03:36:35.076253 unknown[789]: fetched user config from "hetzner" Apr 30 03:36:35.077353 ignition[789]: fetch: fetch passed Apr 30 03:36:35.080601 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 30 03:36:35.077421 ignition[789]: Ignition finished successfully Apr 30 03:36:35.088109 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 30 03:36:35.112779 ignition[796]: Ignition 2.19.0 Apr 30 03:36:35.112803 ignition[796]: Stage: kargs Apr 30 03:36:35.113320 ignition[796]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:36:35.113344 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 03:36:35.116211 ignition[796]: kargs: kargs passed Apr 30 03:36:35.116309 ignition[796]: Ignition finished successfully Apr 30 03:36:35.120632 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 03:36:35.130139 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 03:36:35.176070 ignition[803]: Ignition 2.19.0 Apr 30 03:36:35.176089 ignition[803]: Stage: disks Apr 30 03:36:35.176362 ignition[803]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:36:35.176379 ignition[803]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 03:36:35.179613 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 03:36:35.177929 ignition[803]: disks: disks passed Apr 30 03:36:35.181749 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 03:36:35.178001 ignition[803]: Ignition finished successfully Apr 30 03:36:35.184648 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 03:36:35.186258 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 03:36:35.188063 systemd[1]: Reached target sysinit.target - System Initialization. 
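Once eth0 has its DHCP lease, attempt #2 against the metadata endpoint succeeds and the config is identified by its SHA512. A minimal retry-and-hash sketch, assuming the endpoint from the log; the retry pacing is a guess, not Ignition's actual schedule:

    import hashlib
    import time
    import urllib.request

    URL = "http://169.254.169.254/hetzner/v1/userdata"  # endpoint from the log

    def fetch_userdata(max_attempts: int = 5, delay: float = 1.0) -> bytes:
        for attempt in range(1, max_attempts + 1):
            try:
                with urllib.request.urlopen(URL, timeout=10) as resp:
                    body = resp.read()
                # Ignition logs the SHA512 of the config it parsed.
                print(f"attempt #{attempt}: OK, sha512={hashlib.sha512(body).hexdigest()}")
                return body
            except OSError as err:  # covers "network is unreachable"
                print(f"attempt #{attempt}: {err}")
                time.sleep(delay)
                delay *= 2          # illustrative backoff, not Ignition's schedule
        raise TimeoutError("no userdata after retries")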
Apr 30 03:36:35.189579 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:36:35.197022 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 03:36:35.217033 systemd-fsck[812]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Apr 30 03:36:35.221009 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 03:36:35.228000 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 03:36:35.314827 kernel: EXT4-fs (sda9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none. Apr 30 03:36:35.314793 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 03:36:35.315669 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 03:36:35.322873 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 03:36:35.324675 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 03:36:35.327995 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 30 03:36:35.329053 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 03:36:35.329081 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 03:36:35.333709 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 03:36:35.336218 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (820) Apr 30 03:36:35.337507 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 30 03:36:35.352432 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:36:35.352466 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:36:35.352488 kernel: BTRFS info (device sda6): using free space tree Apr 30 03:36:35.352509 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 03:36:35.352530 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 03:36:35.358705 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 03:36:35.396070 coreos-metadata[822]: Apr 30 03:36:35.395 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Apr 30 03:36:35.398139 coreos-metadata[822]: Apr 30 03:36:35.397 INFO Fetch successful Apr 30 03:36:35.399881 coreos-metadata[822]: Apr 30 03:36:35.398 INFO wrote hostname ci-4081-3-3-9-916214001e to /sysroot/etc/hostname Apr 30 03:36:35.401832 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 03:36:35.403756 initrd-setup-root[847]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 03:36:35.408562 initrd-setup-root[855]: cut: /sysroot/etc/group: No such file or directory Apr 30 03:36:35.411675 initrd-setup-root[862]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 03:36:35.414524 initrd-setup-root[869]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 03:36:35.497091 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 03:36:35.504002 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 03:36:35.508004 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
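flatcar-metadata-hostname does one visible job here: fetch the hostname from the metadata service and write it into the not-yet-pivoted root at /sysroot/etc/hostname. A sketch under the same assumptions (endpoint and target path are the ones in the log):

    import urllib.request

    ENDPOINT = "http://169.254.169.254/hetzner/v1/metadata/hostname"

    def write_hostname(sysroot: str = "/sysroot") -> str:
        with urllib.request.urlopen(ENDPOINT, timeout=10) as resp:
            hostname = resp.read().decode().strip()
        with open(f"{sysroot}/etc/hostname", "w") as f:
            f.write(hostname + "\n")
        return hostname  # "ci-4081-3-3-9-916214001e" in this boot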
Apr 30 03:36:35.517912 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:36:35.538946 ignition[937]: INFO : Ignition 2.19.0 Apr 30 03:36:35.538946 ignition[937]: INFO : Stage: mount Apr 30 03:36:35.540720 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:36:35.540720 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 03:36:35.542334 ignition[937]: INFO : mount: mount passed Apr 30 03:36:35.542334 ignition[937]: INFO : Ignition finished successfully Apr 30 03:36:35.543422 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 03:36:35.548952 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 03:36:35.550498 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 03:36:35.691434 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 03:36:35.695937 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 03:36:35.706840 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (948) Apr 30 03:36:35.709113 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:36:35.709143 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:36:35.713490 kernel: BTRFS info (device sda6): using free space tree Apr 30 03:36:35.722252 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 03:36:35.722361 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 03:36:35.726963 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 03:36:35.758572 ignition[965]: INFO : Ignition 2.19.0 Apr 30 03:36:35.758572 ignition[965]: INFO : Stage: files Apr 30 03:36:35.760261 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:36:35.760261 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 03:36:35.762262 ignition[965]: DEBUG : files: compiled without relabeling support, skipping Apr 30 03:36:35.762262 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 03:36:35.762262 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 03:36:35.766319 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 03:36:35.767914 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 03:36:35.767914 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 03:36:35.766883 unknown[965]: wrote ssh authorized keys file for user: core Apr 30 03:36:35.770923 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 03:36:35.770923 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Apr 30 03:36:36.022528 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 03:36:36.110025 systemd-networkd[781]: eth0: Gained IPv6LL Apr 30 03:36:36.488143 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 03:36:36.488143 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 30 
03:36:36.492330 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 03:36:36.492330 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 03:36:36.492330 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 03:36:36.492330 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 03:36:36.492330 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 03:36:36.492330 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 03:36:36.492330 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 03:36:36.492330 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 03:36:36.492330 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 03:36:36.492330 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Apr 30 03:36:36.492330 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Apr 30 03:36:36.492330 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Apr 30 03:36:36.492330 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Apr 30 03:36:36.622016 systemd-networkd[781]: eth1: Gained IPv6LL Apr 30 03:36:37.128454 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 30 03:36:37.306670 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Apr 30 03:36:37.306670 ignition[965]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 30 03:36:37.308803 ignition[965]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:36:37.308803 ignition[965]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:36:37.308803 ignition[965]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 30 03:36:37.308803 ignition[965]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 30 03:36:37.308803 ignition[965]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 30 03:36:37.308803 ignition[965]: INFO : files: op(d): op(e): [finished] writing systemd drop-in 
"00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 30 03:36:37.308803 ignition[965]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 30 03:36:37.308803 ignition[965]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Apr 30 03:36:37.308803 ignition[965]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 03:36:37.308803 ignition[965]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 03:36:37.308803 ignition[965]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 03:36:37.308803 ignition[965]: INFO : files: files passed Apr 30 03:36:37.308803 ignition[965]: INFO : Ignition finished successfully Apr 30 03:36:37.309935 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 03:36:37.318661 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 03:36:37.320986 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 30 03:36:37.322897 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 03:36:37.322992 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 30 03:36:37.332296 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:36:37.332296 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:36:37.334638 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:36:37.334309 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 03:36:37.335603 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 03:36:37.341984 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 03:36:37.370348 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 03:36:37.370467 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 03:36:37.371809 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 03:36:37.373018 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 03:36:37.374796 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 03:36:37.382085 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 03:36:37.395229 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 03:36:37.407990 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 03:36:37.418607 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:36:37.419572 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:36:37.421028 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 03:36:37.422305 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 03:36:37.422454 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 03:36:37.423858 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
Apr 30 03:36:37.424805 systemd[1]: Stopped target basic.target - Basic System. Apr 30 03:36:37.426122 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 03:36:37.427349 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 03:36:37.428595 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 03:36:37.429980 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 03:36:37.431352 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 03:36:37.432761 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 03:36:37.434082 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 03:36:37.435388 systemd[1]: Stopped target swap.target - Swaps. Apr 30 03:36:37.436616 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 03:36:37.436757 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 03:36:37.438370 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:36:37.439728 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:36:37.441066 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 03:36:37.441204 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:36:37.442391 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 03:36:37.442530 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 03:36:37.444131 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 03:36:37.444289 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 03:36:37.445875 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 03:36:37.446012 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 03:36:37.448101 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 30 03:36:37.448234 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 03:36:37.456361 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 03:36:37.457079 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 03:36:37.457276 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:36:37.460174 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 03:36:37.462468 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 03:36:37.462610 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:36:37.466784 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 03:36:37.466894 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 03:36:37.472623 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 03:36:37.472695 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Apr 30 03:36:37.477334 ignition[1018]: INFO : Ignition 2.19.0 Apr 30 03:36:37.478134 ignition[1018]: INFO : Stage: umount Apr 30 03:36:37.478864 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:36:37.478864 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 03:36:37.486702 ignition[1018]: INFO : umount: umount passed Apr 30 03:36:37.486702 ignition[1018]: INFO : Ignition finished successfully Apr 30 03:36:37.484333 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 03:36:37.484455 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 03:36:37.485375 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 03:36:37.485432 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 03:36:37.486961 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 03:36:37.486996 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 03:36:37.487752 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 30 03:36:37.487786 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 30 03:36:37.488284 systemd[1]: Stopped target network.target - Network. Apr 30 03:36:37.488679 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 03:36:37.488712 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 03:36:37.490329 systemd[1]: Stopped target paths.target - Path Units. Apr 30 03:36:37.491887 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 03:36:37.493189 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:36:37.493833 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 03:36:37.494224 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 03:36:37.494635 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 03:36:37.494665 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 03:36:37.495153 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 03:36:37.495182 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 03:36:37.496024 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 03:36:37.496069 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 03:36:37.496922 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 03:36:37.496952 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 03:36:37.497933 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 03:36:37.498891 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 03:36:37.500739 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 03:36:37.501173 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 03:36:37.501238 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 03:36:37.502560 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 03:36:37.502616 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 03:36:37.503114 systemd-networkd[781]: eth0: DHCPv6 lease lost Apr 30 03:36:37.509430 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 03:36:37.509513 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Apr 30 03:36:37.509866 systemd-networkd[781]: eth1: DHCPv6 lease lost Apr 30 03:36:37.512141 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 03:36:37.512215 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 03:36:37.513510 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 03:36:37.513544 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:36:37.520877 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 03:36:37.522095 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 03:36:37.522134 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 03:36:37.523094 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 03:36:37.523125 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:36:37.524111 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 03:36:37.524144 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 03:36:37.525373 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 03:36:37.525408 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:36:37.527547 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:36:37.537544 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 03:36:37.537668 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 03:36:37.540345 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 03:36:37.540485 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:36:37.541761 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 03:36:37.541800 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 03:36:37.542747 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 03:36:37.542780 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:36:37.543873 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 03:36:37.543918 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 03:36:37.545414 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 03:36:37.545455 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 03:36:37.546558 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 03:36:37.546600 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:36:37.554196 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 03:36:37.554789 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 03:36:37.554866 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:36:37.555518 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:36:37.555564 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:36:37.559760 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 03:36:37.559879 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 03:36:37.560944 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Apr 30 03:36:37.569979 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 03:36:37.575976 systemd[1]: Switching root. Apr 30 03:36:37.627860 systemd-journald[188]: Received SIGTERM from PID 1 (systemd). Apr 30 03:36:37.627958 systemd-journald[188]: Journal stopped Apr 30 03:36:38.750921 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 03:36:38.750973 kernel: SELinux: policy capability open_perms=1 Apr 30 03:36:38.750983 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 03:36:38.750993 kernel: SELinux: policy capability always_check_network=0 Apr 30 03:36:38.751001 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 03:36:38.751010 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 03:36:38.751023 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 03:36:38.751042 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 03:36:38.751051 kernel: audit: type=1403 audit(1745984197.825:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 03:36:38.751064 systemd[1]: Successfully loaded SELinux policy in 57.479ms. Apr 30 03:36:38.751086 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.973ms. Apr 30 03:36:38.751098 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 03:36:38.751108 systemd[1]: Detected virtualization kvm. Apr 30 03:36:38.751118 systemd[1]: Detected architecture x86-64. Apr 30 03:36:38.751130 systemd[1]: Detected first boot. Apr 30 03:36:38.751146 systemd[1]: Hostname set to <ci-4081-3-3-9-916214001e>. Apr 30 03:36:38.751159 systemd[1]: Initializing machine ID from VM UUID. Apr 30 03:36:38.751173 zram_generator::config[1061]: No configuration found. Apr 30 03:36:38.751188 systemd[1]: Populated /etc with preset unit settings. Apr 30 03:36:38.751201 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 30 03:36:38.751215 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 30 03:36:38.751229 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 30 03:36:38.751242 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 03:36:38.751258 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 03:36:38.751274 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 03:36:38.751287 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 03:36:38.751301 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 03:36:38.751315 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 03:36:38.751328 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 03:36:38.751343 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 03:36:38.751356 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:36:38.751370 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:36:38.751385 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
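The +/- string in the "systemd 255 running" banner is the compile-time feature list. Splitting it makes the build options of this image easy to read; the string below is copied verbatim from the log:

    features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
                "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
                "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
                "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
                "-XKBCOMMON +UTMP -SYSVINIT")
    enabled = sorted(f[1:] for f in features.split() if f.startswith("+"))
    disabled = sorted(f[1:] for f in features.split() if f.startswith("-"))
    print("compiled out:", ", ".join(disabled))  # APPARMOR, GNUTLS, ACL, ...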
Apr 30 03:36:38.751399 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 30 03:36:38.751409 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 30 03:36:38.751419 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 03:36:38.751431 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 30 03:36:38.751444 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:36:38.751457 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 30 03:36:38.751471 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 30 03:36:38.751488 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 30 03:36:38.751502 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 03:36:38.751513 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:36:38.751522 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 03:36:38.751534 systemd[1]: Reached target slices.target - Slice Units. Apr 30 03:36:38.751545 systemd[1]: Reached target swap.target - Swaps. Apr 30 03:36:38.751554 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 03:36:38.751569 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 30 03:36:38.751582 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:36:38.751596 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 03:36:38.751610 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:36:38.751624 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 03:36:38.751637 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 03:36:38.751647 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 03:36:38.751657 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 03:36:38.751667 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:36:38.751679 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 03:36:38.751688 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 03:36:38.751701 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 03:36:38.751714 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 03:36:38.751724 systemd[1]: Reached target machines.target - Containers. Apr 30 03:36:38.751735 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 03:36:38.751745 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:36:38.751755 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 03:36:38.751765 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 03:36:38.751775 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:36:38.751785 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Apr 30 03:36:38.751794 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:36:38.751804 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 03:36:38.751826 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:36:38.751838 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 03:36:38.751848 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 30 03:36:38.751858 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 30 03:36:38.751867 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 30 03:36:38.751877 systemd[1]: Stopped systemd-fsck-usr.service. Apr 30 03:36:38.751887 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 03:36:38.751896 kernel: fuse: init (API version 7.39) Apr 30 03:36:38.751905 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 03:36:38.751915 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 03:36:38.751926 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 03:36:38.751946 kernel: loop: module loaded Apr 30 03:36:38.751955 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 03:36:38.751965 systemd[1]: verity-setup.service: Deactivated successfully. Apr 30 03:36:38.751977 systemd[1]: Stopped verity-setup.service. Apr 30 03:36:38.751988 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:36:38.751997 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 30 03:36:38.752007 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 03:36:38.752018 systemd[1]: Mounted media.mount - External Media Directory. Apr 30 03:36:38.752038 kernel: ACPI: bus type drm_connector registered Apr 30 03:36:38.752050 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 03:36:38.752080 systemd-journald[1151]: Collecting audit messages is disabled. Apr 30 03:36:38.752106 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 30 03:36:38.752116 systemd-journald[1151]: Journal started Apr 30 03:36:38.752137 systemd-journald[1151]: Runtime Journal (/run/log/journal/16d1238d50e347d8ae21c7c772b06be4) is 4.8M, max 38.4M, 33.6M free. Apr 30 03:36:38.439348 systemd[1]: Queued start job for default target multi-user.target. Apr 30 03:36:38.467152 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 30 03:36:38.467724 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 30 03:36:38.755079 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 03:36:38.756098 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 03:36:38.756804 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 03:36:38.757557 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:36:38.758282 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 03:36:38.758439 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 03:36:38.759375 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Apr 30 03:36:38.759588 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:36:38.760557 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 03:36:38.760729 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:36:38.761474 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:36:38.761624 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:36:38.762330 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 03:36:38.762480 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 03:36:38.763213 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:36:38.763407 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:36:38.764270 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 03:36:38.765013 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 03:36:38.765709 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 03:36:38.773453 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 03:36:38.780913 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 03:36:38.784872 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 03:36:38.785554 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 03:36:38.785584 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 03:36:38.787683 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 30 03:36:38.793951 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 03:36:38.799013 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 03:36:38.799988 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:36:38.805641 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 03:36:38.812156 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 03:36:38.812919 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:36:38.814387 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 03:36:38.817150 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:36:38.825079 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:36:38.833703 systemd-journald[1151]: Time spent on flushing to /var/log/journal/16d1238d50e347d8ae21c7c772b06be4 is 42.906ms for 1127 entries. Apr 30 03:36:38.833703 systemd-journald[1151]: System Journal (/var/log/journal/16d1238d50e347d8ae21c7c772b06be4) is 8.0M, max 584.8M, 576.8M free. Apr 30 03:36:38.906527 systemd-journald[1151]: Received client request to flush runtime journal. 
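The journald flush report carries enough data for a quick per-entry estimate: 42.906 ms for 1127 entries works out to roughly 38 µs each.

    flush_ms, entries = 42.906, 1127  # from the journald line above
    print(f"{flush_ms / entries * 1000:.1f} µs per entry")  # about 38.1 µs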
Apr 30 03:36:38.906576 kernel: loop0: detected capacity change from 0 to 140768 Apr 30 03:36:38.906595 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 03:36:38.836283 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 03:36:38.838964 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 03:36:38.844604 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:36:38.846065 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 03:36:38.846731 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 03:36:38.847547 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 03:36:38.858983 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 03:36:38.863062 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 03:36:38.865215 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 03:36:38.874134 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 30 03:36:38.878668 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:36:38.886200 udevadm[1188]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 30 03:36:38.909701 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 03:36:38.922117 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 03:36:38.923650 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 30 03:36:38.924927 kernel: loop1: detected capacity change from 0 to 205544 Apr 30 03:36:38.935439 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 03:36:38.944992 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 03:36:38.967139 kernel: loop2: detected capacity change from 0 to 142488 Apr 30 03:36:38.971141 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Apr 30 03:36:38.971157 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Apr 30 03:36:38.976701 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:36:39.025845 kernel: loop3: detected capacity change from 0 to 8 Apr 30 03:36:39.042151 kernel: loop4: detected capacity change from 0 to 140768 Apr 30 03:36:39.070847 kernel: loop5: detected capacity change from 0 to 205544 Apr 30 03:36:39.094923 kernel: loop6: detected capacity change from 0 to 142488 Apr 30 03:36:39.121845 kernel: loop7: detected capacity change from 0 to 8 Apr 30 03:36:39.122918 (sd-merge)[1207]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Apr 30 03:36:39.123274 (sd-merge)[1207]: Merged extensions into '/usr'. Apr 30 03:36:39.129822 systemd[1]: Reloading requested from client PID 1181 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 03:36:39.129927 systemd[1]: Reloading... Apr 30 03:36:39.194862 zram_generator::config[1232]: No configuration found. Apr 30 03:36:39.316429 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
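The loop0..loop7 capacity changes bracket the sysext merge: (sd-merge) names four extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-hetzner), and the repeated capacities suggest each image is attached more than once between scanning and merging. A small sketch that lists candidate images the way the sysext convention lays them out; the directory set is the conventional one, not read from this log:

    import os

    def candidate_sysext_images() -> list[str]:
        """List .raw images (or symlinks to them) in the conventional
        sysext directories; /etc/extensions/kubernetes.raw here points at
        /opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw."""
        found = []
        for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
            if os.path.isdir(d):
                for name in sorted(os.listdir(d)):
                    if name.endswith(".raw"):
                        found.append(os.path.realpath(os.path.join(d, name)))
        return found

    print(candidate_sysext_images())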
Apr 30 03:36:39.370868 ldconfig[1176]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 03:36:39.372312 systemd[1]: Reloading finished in 241 ms. Apr 30 03:36:39.394301 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 03:36:39.395397 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 03:36:39.405077 systemd[1]: Starting ensure-sysext.service... Apr 30 03:36:39.407521 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 03:36:39.419472 systemd[1]: Reloading requested from client PID 1276 ('systemctl') (unit ensure-sysext.service)... Apr 30 03:36:39.419624 systemd[1]: Reloading... Apr 30 03:36:39.440411 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 03:36:39.441518 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 03:36:39.442258 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 03:36:39.442501 systemd-tmpfiles[1277]: ACLs are not supported, ignoring. Apr 30 03:36:39.442551 systemd-tmpfiles[1277]: ACLs are not supported, ignoring. Apr 30 03:36:39.445760 systemd-tmpfiles[1277]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 03:36:39.446304 systemd-tmpfiles[1277]: Skipping /boot Apr 30 03:36:39.455984 systemd-tmpfiles[1277]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 03:36:39.456103 systemd-tmpfiles[1277]: Skipping /boot Apr 30 03:36:39.524134 zram_generator::config[1301]: No configuration found. Apr 30 03:36:39.605262 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:36:39.656584 systemd[1]: Reloading finished in 236 ms. Apr 30 03:36:39.685198 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:36:39.698125 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:36:39.702018 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 03:36:39.707712 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 03:36:39.717105 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 03:36:39.720463 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 03:36:39.721377 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 03:36:39.729320 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:36:39.729503 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:36:39.734050 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:36:39.737377 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:36:39.740486 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Apr 30 03:36:39.741598 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:36:39.752068 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:36:39.761909 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 03:36:39.763173 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:36:39.767903 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:36:39.768089 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:36:39.768257 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:36:39.768375 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:36:39.770086 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 03:36:39.783727 augenrules[1376]: No rules Apr 30 03:36:39.788246 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 03:36:39.790657 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:36:39.793896 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:36:39.794071 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:36:39.795408 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:36:39.795578 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:36:39.797285 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:36:39.797432 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:36:39.812892 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 03:36:39.817402 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:36:39.817697 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:36:39.820535 systemd-udevd[1368]: Using default interface naming scheme 'v255'. Apr 30 03:36:39.824564 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:36:39.826954 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 03:36:39.834156 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:36:39.837983 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:36:39.838859 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:36:39.838960 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:36:39.839274 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 03:36:39.840941 systemd[1]: Finished ensure-sysext.service. 
Apr 30 03:36:39.842104 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 03:36:39.848513 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:36:39.848714 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:36:39.858959 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 30 03:36:39.865470 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 03:36:39.865592 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:36:39.869504 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:36:39.870892 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:36:39.872121 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:36:39.872768 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:36:39.874726 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:36:39.884991 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:36:39.885885 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:36:39.886893 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:36:39.887219 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 03:36:39.889503 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 03:36:39.947852 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 30 03:36:39.965072 systemd-resolved[1358]: Positive Trust Anchors: Apr 30 03:36:39.966850 systemd-resolved[1358]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:36:39.966883 systemd-resolved[1358]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:36:39.983305 systemd-resolved[1358]: Using system hostname 'ci-4081-3-3-9-916214001e'. Apr 30 03:36:39.984682 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:36:39.986941 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:36:40.001322 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 03:36:40.002548 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 03:36:40.004976 systemd-networkd[1406]: lo: Link UP Apr 30 03:36:40.005187 systemd-networkd[1406]: lo: Gained carrier Apr 30 03:36:40.006934 systemd-timesyncd[1398]: No network connectivity, watching for changes. 
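The '. IN DS 20326 8 2 e06d…' record above is the IANA root-zone DNSSEC trust anchor (KSK-2017), and the negative trust anchors that follow it exempt private and reverse-lookup zones from validation. resolved's runtime state can be inspected without changing anything (the queried domain is illustrative):

```bash
# Read-only view of the resolver state logged above.
resolvectl status              # per-link DNS servers and DNSSEC mode
resolvectl query flatcar.org   # resolve a name through systemd-resolved
```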
Apr 30 03:36:40.007452 systemd-networkd[1406]: Enumeration completed Apr 30 03:36:40.007553 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:36:40.008233 systemd[1]: Reached target network.target - Network. Apr 30 03:36:40.012110 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:36:40.012116 systemd-networkd[1406]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:36:40.012688 systemd-networkd[1406]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:36:40.012691 systemd-networkd[1406]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:36:40.013234 systemd-networkd[1406]: eth0: Link UP Apr 30 03:36:40.013238 systemd-networkd[1406]: eth0: Gained carrier Apr 30 03:36:40.013249 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:36:40.014987 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 03:36:40.016468 systemd-networkd[1406]: eth1: Link UP Apr 30 03:36:40.016515 systemd-networkd[1406]: eth1: Gained carrier Apr 30 03:36:40.016555 systemd-networkd[1406]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:36:40.031097 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:36:40.032226 systemd-networkd[1406]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:36:40.046915 systemd-networkd[1406]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 03:36:40.048123 systemd-timesyncd[1398]: Network configuration changed, trying to establish connection. Apr 30 03:36:40.057861 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 30 03:36:40.060179 systemd-networkd[1406]: eth0: DHCPv4 address 37.27.214.59/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 30 03:36:40.060449 systemd-timesyncd[1398]: Network configuration changed, trying to establish connection. Apr 30 03:36:40.060916 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 03:36:40.060807 systemd-timesyncd[1398]: Network configuration changed, trying to establish connection. Apr 30 03:36:40.081376 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Apr 30 03:36:40.081566 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:36:40.081655 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:36:40.086918 kernel: ACPI: button: Power Button [PWRF] Apr 30 03:36:40.091173 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:36:40.098079 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
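The repeated 'found matching network … based on potentially unpredictable interface name' lines mean zz-default.network matched eth0/eth1 purely by kernel name. A sketch of a more specific unit that pins the uplink by MAC address instead; the address and file name are placeholders, not values from this host:

```bash
# Sketch: match the uplink by MAC rather than by kernel interface name.
sudo tee /etc/systemd/network/10-uplink.network >/dev/null <<'EOF'
[Match]
MACAddress=96:00:00:aa:bb:cc

[Network]
DHCP=ipv4
EOF
sudo systemctl restart systemd-networkd
```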
Apr 30 03:36:40.105836 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Apr 30 03:36:40.107840 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Apr 30 03:36:40.108972 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:36:40.115830 kernel: Console: switching to colour dummy device 80x25 Apr 30 03:36:40.128199 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Apr 30 03:36:40.128270 kernel: [drm] features: -context_init Apr 30 03:36:40.128482 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:36:40.128528 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 03:36:40.128542 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:36:40.129130 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:36:40.129313 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:36:40.129766 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:36:40.130154 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:36:40.135549 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 30 03:36:40.141367 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 30 03:36:40.141596 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 30 03:36:40.141684 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Apr 30 03:36:40.134288 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:36:40.139339 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:36:40.139479 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:36:40.139674 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:36:40.147100 kernel: [drm] number of scanouts: 1 Apr 30 03:36:40.147127 kernel: [drm] number of cap sets: 0 Apr 30 03:36:40.150844 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1419) Apr 30 03:36:40.156832 kernel: EDAC MC: Ver: 3.0.0 Apr 30 03:36:40.159898 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Apr 30 03:36:40.164841 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Apr 30 03:36:40.167296 kernel: Console: switching to colour frame buffer device 160x50 Apr 30 03:36:40.177955 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Apr 30 03:36:40.204999 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:36:40.215469 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 30 03:36:40.217577 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:36:40.217766 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:36:40.230219 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Apr 30 03:36:40.232846 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:36:40.238732 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:36:40.238901 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:36:40.248042 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:36:40.248379 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 03:36:40.313290 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:36:40.356437 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 03:36:40.363020 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 03:36:40.379247 lvm[1462]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:36:40.415148 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 03:36:40.416243 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:36:40.417610 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:36:40.417801 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 03:36:40.417946 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 03:36:40.418250 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 03:36:40.418431 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 03:36:40.418518 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 03:36:40.418589 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 03:36:40.418616 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:36:40.418679 systemd[1]: Reached target timers.target - Timer Units. Apr 30 03:36:40.420713 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 03:36:40.422541 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 03:36:40.437601 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 03:36:40.445054 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 03:36:40.446565 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 03:36:40.449237 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:36:40.451994 lvm[1466]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:36:40.452997 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:36:40.453759 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:36:40.453802 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:36:40.461000 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 03:36:40.469708 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 03:36:40.476019 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
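The lvmetad warnings from lvm2-activation are benign on a host with no LVM volumes; the device-scanning fallback they mention can be reproduced by hand:

```bash
# Reproduce the fallback scan behind the lvmetad warnings above.
sudo pvscan   # scan block devices for LVM physical volumes
sudo vgscan   # scan for volume groups (expected: none on this host)
```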
Apr 30 03:36:40.488008 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 03:36:40.493438 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 03:36:40.497181 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 03:36:40.500153 jq[1472]: false Apr 30 03:36:40.505709 coreos-metadata[1468]: Apr 30 03:36:40.498 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Apr 30 03:36:40.505709 coreos-metadata[1468]: Apr 30 03:36:40.502 INFO Fetch successful Apr 30 03:36:40.505709 coreos-metadata[1468]: Apr 30 03:36:40.502 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Apr 30 03:36:40.505709 coreos-metadata[1468]: Apr 30 03:36:40.503 INFO Fetch successful Apr 30 03:36:40.503331 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 03:36:40.513259 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 03:36:40.518063 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Apr 30 03:36:40.522606 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 03:36:40.526885 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 03:36:40.534313 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 03:36:40.536374 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 03:36:40.538507 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 03:36:40.545986 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 03:36:40.549252 dbus-daemon[1469]: [system] SELinux support is enabled Apr 30 03:36:40.554953 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 03:36:40.556202 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 03:36:40.561174 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 03:36:40.565932 jq[1490]: true Apr 30 03:36:40.566153 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 03:36:40.566884 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 03:36:40.567230 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 03:36:40.567410 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 03:36:40.577314 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 03:36:40.577478 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 03:36:40.593178 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 03:36:40.593208 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 03:36:40.595655 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
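The coreos-metadata fetches above use Hetzner's link-local metadata service; the same endpoints can be queried manually from inside the instance:

```bash
# Query the endpoints coreos-metadata fetched above.
curl -s http://169.254.169.254/hetzner/v1/metadata
curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks
```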
Apr 30 03:36:40.595673 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 03:36:40.603857 update_engine[1485]: I20250430 03:36:40.603163 1485 main.cc:92] Flatcar Update Engine starting Apr 30 03:36:40.612012 extend-filesystems[1473]: Found loop4 Apr 30 03:36:40.612012 extend-filesystems[1473]: Found loop5 Apr 30 03:36:40.612012 extend-filesystems[1473]: Found loop6 Apr 30 03:36:40.612012 extend-filesystems[1473]: Found loop7 Apr 30 03:36:40.612012 extend-filesystems[1473]: Found sda Apr 30 03:36:40.612012 extend-filesystems[1473]: Found sda1 Apr 30 03:36:40.612012 extend-filesystems[1473]: Found sda2 Apr 30 03:36:40.612012 extend-filesystems[1473]: Found sda3 Apr 30 03:36:40.612012 extend-filesystems[1473]: Found usr Apr 30 03:36:40.612012 extend-filesystems[1473]: Found sda4 Apr 30 03:36:40.612012 extend-filesystems[1473]: Found sda6 Apr 30 03:36:40.612012 extend-filesystems[1473]: Found sda7 Apr 30 03:36:40.612012 extend-filesystems[1473]: Found sda9 Apr 30 03:36:40.612012 extend-filesystems[1473]: Checking size of /dev/sda9 Apr 30 03:36:40.699004 extend-filesystems[1473]: Resized partition /dev/sda9 Apr 30 03:36:40.712215 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Apr 30 03:36:40.632972 systemd[1]: Started update-engine.service - Update Engine. Apr 30 03:36:40.712339 update_engine[1485]: I20250430 03:36:40.629619 1485 update_check_scheduler.cc:74] Next update check in 5m36s Apr 30 03:36:40.712366 extend-filesystems[1516]: resize2fs 1.47.1 (20-May-2024) Apr 30 03:36:40.655058 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 03:36:40.722987 tar[1495]: linux-amd64/helm Apr 30 03:36:40.655339 (ntainerd)[1508]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 03:36:40.728604 jq[1496]: true Apr 30 03:36:40.705781 systemd-logind[1480]: New seat seat0. Apr 30 03:36:40.722769 systemd-logind[1480]: Watching system buttons on /dev/input/event2 (Power Button) Apr 30 03:36:40.722792 systemd-logind[1480]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 30 03:36:40.723488 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 03:36:40.744340 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 03:36:40.746429 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 03:36:40.813367 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1413) Apr 30 03:36:40.840421 bash[1538]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:36:40.844299 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 03:36:40.857251 systemd[1]: Starting sshkeys.service... Apr 30 03:36:40.890779 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Apr 30 03:36:40.895082 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 30 03:36:40.906192 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
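The extend-filesystems run above grows /dev/sda9 online from 1617920 to 9393147 4k blocks. A sketch of the equivalent manual steps; device names follow this log, growpart comes from cloud-utils, and the exact first-boot mechanism may differ:

```bash
# Manual equivalent of the filesystem growth logged above (sketch).
sudo growpart /dev/sda 9   # grow partition 9 to fill the disk
sudo resize2fs /dev/sda9   # ext4 resizes online while mounted on /
df -h /                    # confirm the new capacity
```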
Apr 30 03:36:40.913506 locksmithd[1512]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 03:36:40.924033 extend-filesystems[1516]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 30 03:36:40.924033 extend-filesystems[1516]: old_desc_blocks = 1, new_desc_blocks = 5 Apr 30 03:36:40.924033 extend-filesystems[1516]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Apr 30 03:36:40.923880 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 03:36:40.932783 sshd_keygen[1492]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 03:36:40.939102 extend-filesystems[1473]: Resized filesystem in /dev/sda9 Apr 30 03:36:40.939102 extend-filesystems[1473]: Found sr0 Apr 30 03:36:40.924806 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 03:36:40.944480 coreos-metadata[1551]: Apr 30 03:36:40.937 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Apr 30 03:36:40.944480 coreos-metadata[1551]: Apr 30 03:36:40.938 INFO Fetch successful Apr 30 03:36:40.943699 unknown[1551]: wrote ssh authorized keys file for user: core Apr 30 03:36:40.964878 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 03:36:40.977680 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 03:36:40.997579 update-ssh-keys[1564]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:36:40.998551 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 30 03:36:41.003608 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 03:36:41.003759 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 03:36:41.006089 systemd[1]: Finished sshkeys.service. Apr 30 03:36:41.022082 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 03:36:41.040314 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 03:36:41.047656 containerd[1508]: time="2025-04-30T03:36:41.047559512Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 03:36:41.052298 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 03:36:41.055412 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 03:36:41.056049 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 03:36:41.081718 containerd[1508]: time="2025-04-30T03:36:41.081234208Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:36:41.082520 containerd[1508]: time="2025-04-30T03:36:41.082487879Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:36:41.082520 containerd[1508]: time="2025-04-30T03:36:41.082516222Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 03:36:41.082580 containerd[1508]: time="2025-04-30T03:36:41.082529737Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 03:36:41.082680 containerd[1508]: time="2025-04-30T03:36:41.082660934Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Apr 30 03:36:41.082707 containerd[1508]: time="2025-04-30T03:36:41.082681262Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 03:36:41.082798 containerd[1508]: time="2025-04-30T03:36:41.082734131Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:36:41.082798 containerd[1508]: time="2025-04-30T03:36:41.082747907Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:36:41.083122 containerd[1508]: time="2025-04-30T03:36:41.082913247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:36:41.083122 containerd[1508]: time="2025-04-30T03:36:41.082931451Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 03:36:41.083122 containerd[1508]: time="2025-04-30T03:36:41.082942522Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:36:41.083122 containerd[1508]: time="2025-04-30T03:36:41.082950887Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 03:36:41.083122 containerd[1508]: time="2025-04-30T03:36:41.083010880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:36:41.083247 containerd[1508]: time="2025-04-30T03:36:41.083170198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:36:41.083270 containerd[1508]: time="2025-04-30T03:36:41.083256180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:36:41.083291 containerd[1508]: time="2025-04-30T03:36:41.083267922Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 03:36:41.083586 containerd[1508]: time="2025-04-30T03:36:41.083334998Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 03:36:41.083586 containerd[1508]: time="2025-04-30T03:36:41.083371887Z" level=info msg="metadata content store policy set" policy=shared Apr 30 03:36:41.089521 containerd[1508]: time="2025-04-30T03:36:41.089477005Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 03:36:41.089521 containerd[1508]: time="2025-04-30T03:36:41.089518413Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 03:36:41.089521 containerd[1508]: time="2025-04-30T03:36:41.089531167Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 03:36:41.089642 containerd[1508]: time="2025-04-30T03:36:41.089544051Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Apr 30 03:36:41.089642 containerd[1508]: time="2025-04-30T03:36:41.089557506Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 03:36:41.089682 containerd[1508]: time="2025-04-30T03:36:41.089660299Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 03:36:41.090573 containerd[1508]: time="2025-04-30T03:36:41.089897033Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 03:36:41.090573 containerd[1508]: time="2025-04-30T03:36:41.089984828Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 03:36:41.090573 containerd[1508]: time="2025-04-30T03:36:41.089997391Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 03:36:41.090573 containerd[1508]: time="2025-04-30T03:36:41.090008241Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 03:36:41.090573 containerd[1508]: time="2025-04-30T03:36:41.090019713Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 03:36:41.090573 containerd[1508]: time="2025-04-30T03:36:41.090039711Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 03:36:41.090573 containerd[1508]: time="2025-04-30T03:36:41.090049659Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 03:36:41.090573 containerd[1508]: time="2025-04-30T03:36:41.090060970Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 03:36:41.090573 containerd[1508]: time="2025-04-30T03:36:41.090072562Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 03:36:41.090573 containerd[1508]: time="2025-04-30T03:36:41.090083041Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 03:36:41.090573 containerd[1508]: time="2025-04-30T03:36:41.090093832Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 03:36:41.090573 containerd[1508]: time="2025-04-30T03:36:41.090103640Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 03:36:41.090573 containerd[1508]: time="2025-04-30T03:36:41.090121754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 03:36:41.090573 containerd[1508]: time="2025-04-30T03:36:41.090133396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 03:36:41.090827 containerd[1508]: time="2025-04-30T03:36:41.090151470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 03:36:41.090827 containerd[1508]: time="2025-04-30T03:36:41.090166208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 03:36:41.090827 containerd[1508]: time="2025-04-30T03:36:41.090178100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Apr 30 03:36:41.090827 containerd[1508]: time="2025-04-30T03:36:41.090188750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 03:36:41.090827 containerd[1508]: time="2025-04-30T03:36:41.090200232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 03:36:41.090827 containerd[1508]: time="2025-04-30T03:36:41.090210391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 03:36:41.090827 containerd[1508]: time="2025-04-30T03:36:41.090221071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 03:36:41.090827 containerd[1508]: time="2025-04-30T03:36:41.090232632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 03:36:41.090827 containerd[1508]: time="2025-04-30T03:36:41.090244023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 03:36:41.090827 containerd[1508]: time="2025-04-30T03:36:41.090254463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 03:36:41.090827 containerd[1508]: time="2025-04-30T03:36:41.090263811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 03:36:41.090827 containerd[1508]: time="2025-04-30T03:36:41.090276785Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 03:36:41.090827 containerd[1508]: time="2025-04-30T03:36:41.090296191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 03:36:41.090827 containerd[1508]: time="2025-04-30T03:36:41.090306351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 03:36:41.090827 containerd[1508]: time="2025-04-30T03:36:41.090315147Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 03:36:41.091080 containerd[1508]: time="2025-04-30T03:36:41.090360312Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 03:36:41.091080 containerd[1508]: time="2025-04-30T03:36:41.090374378Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 03:36:41.091080 containerd[1508]: time="2025-04-30T03:36:41.090383455Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 03:36:41.091080 containerd[1508]: time="2025-04-30T03:36:41.090393183Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 03:36:41.091080 containerd[1508]: time="2025-04-30T03:36:41.090401148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 03:36:41.091080 containerd[1508]: time="2025-04-30T03:36:41.090457714Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 03:36:41.091080 containerd[1508]: time="2025-04-30T03:36:41.090466872Z" level=info msg="NRI interface is disabled by configuration." 
Apr 30 03:36:41.091080 containerd[1508]: time="2025-04-30T03:36:41.090474856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 03:36:41.091215 containerd[1508]: time="2025-04-30T03:36:41.090696051Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 03:36:41.091215 containerd[1508]: time="2025-04-30T03:36:41.090744762Z" level=info msg="Connect containerd service" Apr 30 03:36:41.091215 containerd[1508]: time="2025-04-30T03:36:41.090766453Z" level=info msg="using legacy CRI server" Apr 30 03:36:41.091215 containerd[1508]: time="2025-04-30T03:36:41.090771463Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 03:36:41.091215 containerd[1508]: time="2025-04-30T03:36:41.090856602Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 03:36:41.091740 containerd[1508]: time="2025-04-30T03:36:41.091392968Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 03:36:41.091740 containerd[1508]: time="2025-04-30T03:36:41.091497204Z" level=info msg="Start subscribing containerd event" Apr 30 03:36:41.091740 containerd[1508]: time="2025-04-30T03:36:41.091538551Z" level=info msg="Start recovering state" Apr 30 03:36:41.091740 containerd[1508]: time="2025-04-30T03:36:41.091581822Z" level=info msg="Start event monitor" Apr 30 03:36:41.091740 containerd[1508]: time="2025-04-30T03:36:41.091594255Z" level=info msg="Start snapshots syncer" Apr 30 03:36:41.091740 containerd[1508]: time="2025-04-30T03:36:41.091601018Z" level=info msg="Start cni network conf syncer for default" Apr 30 03:36:41.091740 containerd[1508]: time="2025-04-30T03:36:41.091607179Z" level=info msg="Start streaming server" Apr 30 03:36:41.092000 containerd[1508]: time="2025-04-30T03:36:41.091932409Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 03:36:41.092000 containerd[1508]: time="2025-04-30T03:36:41.091972455Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 03:36:41.092125 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 03:36:41.092509 containerd[1508]: time="2025-04-30T03:36:41.092294228Z" level=info msg="containerd successfully booted in 0.046216s" Apr 30 03:36:41.166051 systemd-networkd[1406]: eth0: Gained IPv6LL Apr 30 03:36:41.166561 systemd-timesyncd[1398]: Network configuration changed, trying to establish connection. Apr 30 03:36:41.170488 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 03:36:41.173149 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 03:36:41.184411 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:36:41.189151 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 03:36:41.224031 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 03:36:41.350541 tar[1495]: linux-amd64/LICENSE Apr 30 03:36:41.350541 tar[1495]: linux-amd64/README.md Apr 30 03:36:41.359313 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 03:36:41.422463 systemd-networkd[1406]: eth1: Gained IPv6LL Apr 30 03:36:41.422966 systemd-timesyncd[1398]: Network configuration changed, trying to establish connection. Apr 30 03:36:42.251794 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:36:42.256253 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 03:36:42.259274 systemd[1]: Startup finished in 1.499s (kernel) + 6.041s (initrd) + 4.490s (userspace) = 12.031s. Apr 30 03:36:42.262333 (kubelet)[1601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:36:42.958298 kubelet[1601]: E0430 03:36:42.958230 1601 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:36:42.961197 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:36:42.961328 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 30 03:36:42.961686 systemd[1]: kubelet.service: Consumed 1.207s CPU time. Apr 30 03:36:53.212640 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 03:36:53.220096 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:36:53.347088 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:36:53.350362 (kubelet)[1620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:36:53.406145 kubelet[1620]: E0430 03:36:53.406047 1620 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:36:53.410519 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:36:53.410746 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:37:03.661326 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 03:37:03.668186 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:37:03.803801 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:37:03.819207 (kubelet)[1636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:37:03.876376 kubelet[1636]: E0430 03:37:03.876320 1636 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:37:03.879686 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:37:03.879988 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:37:11.700422 systemd-timesyncd[1398]: Contacted time server 167.235.139.237:123 (2.flatcar.pool.ntp.org). Apr 30 03:37:11.700484 systemd-timesyncd[1398]: Initial clock synchronization to Wed 2025-04-30 03:37:12.089391 UTC. Apr 30 03:37:14.130765 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 30 03:37:14.141218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:37:14.285202 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:37:14.288540 (kubelet)[1651]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:37:14.324134 kubelet[1651]: E0430 03:37:14.324032 1651 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:37:14.327165 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:37:14.327303 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:37:24.565775 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
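The kubelet crash loop that dominates the rest of this log is the normal state of a node that has not yet joined a cluster: /var/lib/kubelet/config.yaml is written by kubeadm during init/join, not shipped in the image. For reference, a sketch of that file's shape; every value below is illustrative, not this node's eventual configuration:

```bash
# Shape of the file kubelet is failing to find; kubeadm writes the real one.
sudo mkdir -p /var/lib/kubelet
sudo tee /var/lib/kubelet/config.yaml >/dev/null <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
EOF
```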
Apr 30 03:37:24.571049 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:37:24.674538 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:37:24.678180 (kubelet)[1667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:37:24.712439 kubelet[1667]: E0430 03:37:24.712383 1667 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:37:24.714661 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:37:24.714794 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:37:26.274421 update_engine[1485]: I20250430 03:37:26.274284 1485 update_attempter.cc:509] Updating boot flags... Apr 30 03:37:26.322961 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1684) Apr 30 03:37:26.377686 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1688) Apr 30 03:37:34.816044 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 30 03:37:34.824138 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:37:34.919757 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:37:34.924631 (kubelet)[1701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:37:34.968651 kubelet[1701]: E0430 03:37:34.968532 1701 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:37:34.971229 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:37:34.971423 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:37:45.065555 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 30 03:37:45.071126 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:37:45.172481 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:37:45.175518 (kubelet)[1717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:37:45.206165 kubelet[1717]: E0430 03:37:45.206093 1717 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:37:45.208732 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:37:45.208999 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:37:55.315489 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. 
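The restart counter advancing roughly every ten seconds reflects the unit's Restart=/RestartSec= settings plus the time each kubelet attempt takes to fail. Both can be read without touching the unit:

```bash
# Read-only: the settings behind the ~10 s restart cadence above.
systemctl show kubelet -p Restart -p RestartUSec -p NRestarts
journalctl -u kubelet -n 20 --no-pager   # most recent failure output
```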
Apr 30 03:37:55.322089 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:37:55.436873 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:37:55.451064 (kubelet)[1732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:37:55.495777 kubelet[1732]: E0430 03:37:55.495697 1732 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:37:55.499050 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:37:55.499307 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:38:05.565972 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Apr 30 03:38:05.572103 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:38:05.665148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:38:05.668472 (kubelet)[1748]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:38:05.696361 kubelet[1748]: E0430 03:38:05.696286 1748 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:38:05.699650 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:38:05.699771 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:38:15.815286 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Apr 30 03:38:15.823401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:38:15.904648 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:38:15.908184 (kubelet)[1764]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:38:15.947861 kubelet[1764]: E0430 03:38:15.947773 1764 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:38:15.950726 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:38:15.950892 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:38:26.066045 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Apr 30 03:38:26.079083 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:38:26.202250 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 03:38:26.205636 (kubelet)[1779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:38:26.240825 kubelet[1779]: E0430 03:38:26.240744 1779 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:38:26.243862 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:38:26.244060 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:38:30.866108 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 03:38:30.873117 systemd[1]: Started sshd@0-37.27.214.59:22-139.178.68.195:56326.service - OpenSSH per-connection server daemon (139.178.68.195:56326). Apr 30 03:38:31.849307 sshd[1787]: Accepted publickey for core from 139.178.68.195 port 56326 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:38:31.851705 sshd[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:38:31.863114 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 03:38:31.868379 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 03:38:31.874599 systemd-logind[1480]: New session 1 of user core. Apr 30 03:38:31.889631 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 03:38:31.895124 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 03:38:31.898379 (systemd)[1791]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 03:38:31.998421 systemd[1791]: Queued start job for default target default.target. Apr 30 03:38:32.012740 systemd[1791]: Created slice app.slice - User Application Slice. Apr 30 03:38:32.012765 systemd[1791]: Reached target paths.target - Paths. Apr 30 03:38:32.012870 systemd[1791]: Reached target timers.target - Timers. Apr 30 03:38:32.014042 systemd[1791]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 03:38:32.029535 systemd[1791]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 03:38:32.029708 systemd[1791]: Reached target sockets.target - Sockets. Apr 30 03:38:32.029738 systemd[1791]: Reached target basic.target - Basic System. Apr 30 03:38:32.029845 systemd[1791]: Reached target default.target - Main User Target. Apr 30 03:38:32.029892 systemd[1791]: Startup finished in 126ms. Apr 30 03:38:32.030045 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 03:38:32.041031 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 03:38:32.735415 systemd[1]: Started sshd@1-37.27.214.59:22-139.178.68.195:56332.service - OpenSSH per-connection server daemon (139.178.68.195:56332). Apr 30 03:38:33.711414 sshd[1802]: Accepted publickey for core from 139.178.68.195 port 56332 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:38:33.713490 sshd[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:38:33.720939 systemd-logind[1480]: New session 2 of user core. Apr 30 03:38:33.727285 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 30 03:38:34.390995 sshd[1802]: pam_unix(sshd:session): session closed for user core Apr 30 03:38:34.396468 systemd[1]: sshd@1-37.27.214.59:22-139.178.68.195:56332.service: Deactivated successfully. Apr 30 03:38:34.399028 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 03:38:34.400676 systemd-logind[1480]: Session 2 logged out. Waiting for processes to exit. Apr 30 03:38:34.402767 systemd-logind[1480]: Removed session 2. Apr 30 03:38:34.563293 systemd[1]: Started sshd@2-37.27.214.59:22-139.178.68.195:56338.service - OpenSSH per-connection server daemon (139.178.68.195:56338). Apr 30 03:38:35.538739 sshd[1809]: Accepted publickey for core from 139.178.68.195 port 56338 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:38:35.540682 sshd[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:38:35.544526 systemd-logind[1480]: New session 3 of user core. Apr 30 03:38:35.554393 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 03:38:36.211440 sshd[1809]: pam_unix(sshd:session): session closed for user core Apr 30 03:38:36.215431 systemd[1]: sshd@2-37.27.214.59:22-139.178.68.195:56338.service: Deactivated successfully. Apr 30 03:38:36.217454 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 03:38:36.218484 systemd-logind[1480]: Session 3 logged out. Waiting for processes to exit. Apr 30 03:38:36.219934 systemd-logind[1480]: Removed session 3. Apr 30 03:38:36.315716 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Apr 30 03:38:36.323084 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:38:36.385018 systemd[1]: Started sshd@3-37.27.214.59:22-139.178.68.195:36604.service - OpenSSH per-connection server daemon (139.178.68.195:36604). Apr 30 03:38:36.459577 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:38:36.463510 (kubelet)[1826]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:38:36.495515 kubelet[1826]: E0430 03:38:36.495427 1826 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:38:36.498871 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:38:36.499006 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:38:37.375293 sshd[1819]: Accepted publickey for core from 139.178.68.195 port 36604 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:38:37.377543 sshd[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:38:37.385163 systemd-logind[1480]: New session 4 of user core. Apr 30 03:38:37.393082 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 03:38:38.054767 sshd[1819]: pam_unix(sshd:session): session closed for user core Apr 30 03:38:38.058719 systemd[1]: sshd@3-37.27.214.59:22-139.178.68.195:36604.service: Deactivated successfully. Apr 30 03:38:38.061009 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 03:38:38.061946 systemd-logind[1480]: Session 4 logged out. Waiting for processes to exit. Apr 30 03:38:38.063587 systemd-logind[1480]: Removed session 4. 
Apr 30 03:38:38.229642 systemd[1]: Started sshd@4-37.27.214.59:22-139.178.68.195:36614.service - OpenSSH per-connection server daemon (139.178.68.195:36614). Apr 30 03:38:39.210455 sshd[1838]: Accepted publickey for core from 139.178.68.195 port 36614 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:38:39.212207 sshd[1838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:38:39.217357 systemd-logind[1480]: New session 5 of user core. Apr 30 03:38:39.224322 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 03:38:39.745290 sudo[1841]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 03:38:39.745617 sudo[1841]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:38:39.763532 sudo[1841]: pam_unix(sudo:session): session closed for user root Apr 30 03:38:39.923107 sshd[1838]: pam_unix(sshd:session): session closed for user core Apr 30 03:38:39.929118 systemd-logind[1480]: Session 5 logged out. Waiting for processes to exit. Apr 30 03:38:39.930123 systemd[1]: sshd@4-37.27.214.59:22-139.178.68.195:36614.service: Deactivated successfully. Apr 30 03:38:39.932958 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 03:38:39.934465 systemd-logind[1480]: Removed session 5. Apr 30 03:38:40.100536 systemd[1]: Started sshd@5-37.27.214.59:22-139.178.68.195:36616.service - OpenSSH per-connection server daemon (139.178.68.195:36616). Apr 30 03:38:41.070293 sshd[1846]: Accepted publickey for core from 139.178.68.195 port 36616 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:38:41.072503 sshd[1846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:38:41.078902 systemd-logind[1480]: New session 6 of user core. Apr 30 03:38:41.086033 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 03:38:41.592593 sudo[1850]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 03:38:41.593088 sudo[1850]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:38:41.598785 sudo[1850]: pam_unix(sudo:session): session closed for user root Apr 30 03:38:41.607440 sudo[1849]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 30 03:38:41.607930 sudo[1849]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:38:41.631795 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 30 03:38:41.633899 auditctl[1853]: No rules Apr 30 03:38:41.634424 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 03:38:41.634804 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 30 03:38:41.642483 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:38:41.687690 augenrules[1871]: No rules Apr 30 03:38:41.689006 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:38:41.691314 sudo[1849]: pam_unix(sudo:session): session closed for user root Apr 30 03:38:41.849992 sshd[1846]: pam_unix(sshd:session): session closed for user core Apr 30 03:38:41.854073 systemd[1]: sshd@5-37.27.214.59:22-139.178.68.195:36616.service: Deactivated successfully. Apr 30 03:38:41.855736 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 03:38:41.858506 systemd-logind[1480]: Session 6 logged out. Waiting for processes to exit. 
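The sudo session above removes the default audit rule sets and reloads the audit subsystem; the "No rules" messages from auditctl and augenrules are the expected outcome, since augenrules merges whatever remains under /etc/audit/rules.d/ and loads the result. The restart is roughly equivalent to (illustrative, under the assumption that audit-rules.service flushes on stop and reloads on start):

    auditctl -D           # flush the currently loaded kernel audit rules -> "No rules"
    augenrules --load     # recompile /etc/audit/rules.d/*.rules and load the (now empty) set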
Apr 30 03:38:41.860149 systemd-logind[1480]: Removed session 6. Apr 30 03:38:42.026191 systemd[1]: Started sshd@6-37.27.214.59:22-139.178.68.195:36624.service - OpenSSH per-connection server daemon (139.178.68.195:36624). Apr 30 03:38:42.992943 sshd[1879]: Accepted publickey for core from 139.178.68.195 port 36624 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:38:42.994919 sshd[1879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:38:43.000404 systemd-logind[1480]: New session 7 of user core. Apr 30 03:38:43.011124 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 03:38:43.515202 sudo[1882]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 03:38:43.515684 sudo[1882]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:38:43.896064 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 03:38:43.898017 (dockerd)[1898]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 03:38:44.279381 dockerd[1898]: time="2025-04-30T03:38:44.279272312Z" level=info msg="Starting up" Apr 30 03:38:44.443760 dockerd[1898]: time="2025-04-30T03:38:44.443684251Z" level=info msg="Loading containers: start." Apr 30 03:38:44.580876 kernel: Initializing XFRM netlink socket Apr 30 03:38:44.682673 systemd-networkd[1406]: docker0: Link UP Apr 30 03:38:44.697627 dockerd[1898]: time="2025-04-30T03:38:44.697545969Z" level=info msg="Loading containers: done." Apr 30 03:38:44.718192 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2990699138-merged.mount: Deactivated successfully. Apr 30 03:38:44.719976 dockerd[1898]: time="2025-04-30T03:38:44.719922433Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 03:38:44.720183 dockerd[1898]: time="2025-04-30T03:38:44.720048425Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 30 03:38:44.720183 dockerd[1898]: time="2025-04-30T03:38:44.720155631Z" level=info msg="Daemon has completed initialization" Apr 30 03:38:44.759627 dockerd[1898]: time="2025-04-30T03:38:44.759473272Z" level=info msg="API listen on /run/docker.sock" Apr 30 03:38:44.760000 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 03:38:46.296511 containerd[1508]: time="2025-04-30T03:38:46.296458896Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" Apr 30 03:38:46.565926 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Apr 30 03:38:46.572318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:38:46.711018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 03:38:46.714196 (kubelet)[2044]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:38:46.762472 kubelet[2044]: E0430 03:38:46.761629 2044 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:38:46.765220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:38:46.765454 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:38:46.953418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount392017647.mount: Deactivated successfully. Apr 30 03:38:48.925977 containerd[1508]: time="2025-04-30T03:38:48.925890494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:48.927419 containerd[1508]: time="2025-04-30T03:38:48.927389209Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27961081" Apr 30 03:38:48.928389 containerd[1508]: time="2025-04-30T03:38:48.928344509Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:48.930596 containerd[1508]: time="2025-04-30T03:38:48.930565404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:48.931658 containerd[1508]: time="2025-04-30T03:38:48.931525704Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 2.63501308s" Apr 30 03:38:48.931658 containerd[1508]: time="2025-04-30T03:38:48.931550714Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" Apr 30 03:38:48.933078 containerd[1508]: time="2025-04-30T03:38:48.933056614Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" Apr 30 03:38:50.948117 containerd[1508]: time="2025-04-30T03:38:50.948062397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:50.949024 containerd[1508]: time="2025-04-30T03:38:50.948989909Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713798" Apr 30 03:38:50.949893 containerd[1508]: time="2025-04-30T03:38:50.949845673Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:50.952242 containerd[1508]: time="2025-04-30T03:38:50.952209015Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:50.953330 containerd[1508]: time="2025-04-30T03:38:50.953116049Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 2.020035791s" Apr 30 03:38:50.953330 containerd[1508]: time="2025-04-30T03:38:50.953141685Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" Apr 30 03:38:50.953899 containerd[1508]: time="2025-04-30T03:38:50.953879451Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" Apr 30 03:38:52.493734 containerd[1508]: time="2025-04-30T03:38:52.493664040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:52.494683 containerd[1508]: time="2025-04-30T03:38:52.494649335Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780408" Apr 30 03:38:52.495962 containerd[1508]: time="2025-04-30T03:38:52.495929837Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:52.498256 containerd[1508]: time="2025-04-30T03:38:52.498221839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:52.499011 containerd[1508]: time="2025-04-30T03:38:52.498992502Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.545090702s" Apr 30 03:38:52.499145 containerd[1508]: time="2025-04-30T03:38:52.499066584Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" Apr 30 03:38:52.499994 containerd[1508]: time="2025-04-30T03:38:52.499823482Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" Apr 30 03:38:53.560527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4199337853.mount: Deactivated successfully. 
Apr 30 03:38:53.870102 containerd[1508]: time="2025-04-30T03:38:53.870025493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:53.870989 containerd[1508]: time="2025-04-30T03:38:53.870940027Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354653" Apr 30 03:38:53.871903 containerd[1508]: time="2025-04-30T03:38:53.871856634Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:53.873783 containerd[1508]: time="2025-04-30T03:38:53.873742885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:53.874614 containerd[1508]: time="2025-04-30T03:38:53.874194643Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 1.374347718s" Apr 30 03:38:53.874614 containerd[1508]: time="2025-04-30T03:38:53.874236587Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" Apr 30 03:38:53.874803 containerd[1508]: time="2025-04-30T03:38:53.874778997Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 30 03:38:54.393509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4047262695.mount: Deactivated successfully. 
Apr 30 03:38:55.195020 containerd[1508]: time="2025-04-30T03:38:55.194940569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:55.196174 containerd[1508]: time="2025-04-30T03:38:55.196139925Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185843" Apr 30 03:38:55.197396 containerd[1508]: time="2025-04-30T03:38:55.197357364Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:55.200379 containerd[1508]: time="2025-04-30T03:38:55.200316474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:55.201550 containerd[1508]: time="2025-04-30T03:38:55.201409960Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.326599487s" Apr 30 03:38:55.201550 containerd[1508]: time="2025-04-30T03:38:55.201443030Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Apr 30 03:38:55.202022 containerd[1508]: time="2025-04-30T03:38:55.201993179Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 30 03:38:55.668858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3608072893.mount: Deactivated successfully. 
Apr 30 03:38:55.678143 containerd[1508]: time="2025-04-30T03:38:55.678074931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:55.679563 containerd[1508]: time="2025-04-30T03:38:55.679494753Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Apr 30 03:38:55.681345 containerd[1508]: time="2025-04-30T03:38:55.681265086Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:55.691426 containerd[1508]: time="2025-04-30T03:38:55.691292144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:55.693055 containerd[1508]: time="2025-04-30T03:38:55.692791279Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 490.757608ms" Apr 30 03:38:55.693055 containerd[1508]: time="2025-04-30T03:38:55.692919039Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 30 03:38:55.693845 containerd[1508]: time="2025-04-30T03:38:55.693617074Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Apr 30 03:38:56.239986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount989759806.mount: Deactivated successfully. Apr 30 03:38:56.815927 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Apr 30 03:38:56.826193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:38:56.952441 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:38:56.956253 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:38:56.992852 kubelet[2229]: E0430 03:38:56.992501 2229 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:38:56.994006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:38:56.994123 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 30 03:38:57.665961 containerd[1508]: time="2025-04-30T03:38:57.665903047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:57.667300 containerd[1508]: time="2025-04-30T03:38:57.667251465Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780083" Apr 30 03:38:57.668907 containerd[1508]: time="2025-04-30T03:38:57.668867247Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:57.672399 containerd[1508]: time="2025-04-30T03:38:57.672346718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:38:57.673658 containerd[1508]: time="2025-04-30T03:38:57.673477392Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 1.979810458s" Apr 30 03:38:57.673658 containerd[1508]: time="2025-04-30T03:38:57.673513528Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Apr 30 03:39:00.888334 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:39:00.895931 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:39:00.925327 systemd[1]: Reloading requested from client PID 2265 ('systemctl') (unit session-7.scope)... Apr 30 03:39:00.925341 systemd[1]: Reloading... Apr 30 03:39:01.034639 zram_generator::config[2305]: No configuration found. Apr 30 03:39:01.134898 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:39:01.228518 systemd[1]: Reloading finished in 302 ms. Apr 30 03:39:01.279657 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 30 03:39:01.279740 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 30 03:39:01.280217 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:39:01.285699 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:39:01.401452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:39:01.407156 (kubelet)[2360]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:39:01.444084 kubelet[2360]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:39:01.444084 kubelet[2360]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
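The pull sequence that completes here (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, etcd) is the control-plane image set that kubeadm would pre-pull for this release; the same set can be fetched by hand with the standard command:

    kubeadm config images pull --kubernetes-version v1.31.8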
Apr 30 03:39:01.444084 kubelet[2360]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:39:01.444084 kubelet[2360]: I0430 03:39:01.443290 2360 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:39:02.149911 kubelet[2360]: I0430 03:39:02.149801 2360 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Apr 30 03:39:02.149911 kubelet[2360]: I0430 03:39:02.149867 2360 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:39:02.150839 kubelet[2360]: I0430 03:39:02.150389 2360 server.go:929] "Client rotation is on, will bootstrap in background" Apr 30 03:39:02.186071 kubelet[2360]: E0430 03:39:02.186008 2360 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://37.27.214.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 37.27.214.59:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:39:02.186908 kubelet[2360]: I0430 03:39:02.186875 2360 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:39:02.199973 kubelet[2360]: E0430 03:39:02.199909 2360 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 03:39:02.199973 kubelet[2360]: I0430 03:39:02.199958 2360 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 03:39:02.206240 kubelet[2360]: I0430 03:39:02.206196 2360 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 03:39:02.206419 kubelet[2360]: I0430 03:39:02.206327 2360 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Apr 30 03:39:02.206486 kubelet[2360]: I0430 03:39:02.206447 2360 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:39:02.206771 kubelet[2360]: I0430 03:39:02.206487 2360 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-9-916214001e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 03:39:02.206771 kubelet[2360]: I0430 03:39:02.206759 2360 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 03:39:02.206771 kubelet[2360]: I0430 03:39:02.206771 2360 container_manager_linux.go:300] "Creating device plugin manager" Apr 30 03:39:02.207056 kubelet[2360]: I0430 03:39:02.206937 2360 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:39:02.209993 kubelet[2360]: I0430 03:39:02.209674 2360 kubelet.go:408] "Attempting to sync node with API server" Apr 30 03:39:02.209993 kubelet[2360]: I0430 03:39:02.209712 2360 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:39:02.209993 kubelet[2360]: I0430 03:39:02.209748 2360 kubelet.go:314] "Adding apiserver pod source" Apr 30 03:39:02.209993 kubelet[2360]: I0430 03:39:02.209767 2360 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:39:02.220733 kubelet[2360]: W0430 03:39:02.220490 2360 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://37.27.214.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-9-916214001e&limit=500&resourceVersion=0": dial tcp 37.27.214.59:6443: connect: connection refused Apr 30 03:39:02.220733 kubelet[2360]: E0430 03:39:02.220546 2360 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://37.27.214.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-9-916214001e&limit=500&resourceVersion=0\": dial tcp 37.27.214.59:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:39:02.220944 kubelet[2360]: W0430 03:39:02.220831 2360 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://37.27.214.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 37.27.214.59:6443: connect: connection refused Apr 30 03:39:02.220944 kubelet[2360]: E0430 03:39:02.220858 2360 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://37.27.214.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 37.27.214.59:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:39:02.220944 kubelet[2360]: I0430 03:39:02.220942 2360 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:39:02.224174 kubelet[2360]: I0430 03:39:02.224000 2360 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:39:02.225403 kubelet[2360]: W0430 03:39:02.225186 2360 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 30 03:39:02.228853 kubelet[2360]: I0430 03:39:02.228696 2360 server.go:1269] "Started kubelet" Apr 30 03:39:02.229864 kubelet[2360]: I0430 03:39:02.229340 2360 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:39:02.231820 kubelet[2360]: I0430 03:39:02.231637 2360 server.go:460] "Adding debug handlers to kubelet server" Apr 30 03:39:02.235114 kubelet[2360]: I0430 03:39:02.235059 2360 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:39:02.235446 kubelet[2360]: I0430 03:39:02.235435 2360 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:39:02.236092 kubelet[2360]: I0430 03:39:02.236057 2360 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:39:02.242756 kubelet[2360]: E0430 03:39:02.238149 2360 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://37.27.214.59:6443/api/v1/namespaces/default/events\": dial tcp 37.27.214.59:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-9-916214001e.183afb8235ef1ca8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-9-916214001e,UID:ci-4081-3-3-9-916214001e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-9-916214001e,},FirstTimestamp:2025-04-30 03:39:02.228671656 +0000 UTC m=+0.819042588,LastTimestamp:2025-04-30 03:39:02.228671656 +0000 UTC m=+0.819042588,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-9-916214001e,}" Apr 30 03:39:02.243368 kubelet[2360]: I0430 03:39:02.243319 2360 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 03:39:02.244570 kubelet[2360]: I0430 03:39:02.244553 2360 
volume_manager.go:289] "Starting Kubelet Volume Manager" Apr 30 03:39:02.245000 kubelet[2360]: E0430 03:39:02.244979 2360 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-9-916214001e\" not found" Apr 30 03:39:02.247769 kubelet[2360]: I0430 03:39:02.247731 2360 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 30 03:39:02.247997 kubelet[2360]: I0430 03:39:02.247982 2360 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:39:02.248508 kubelet[2360]: W0430 03:39:02.248462 2360 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://37.27.214.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 37.27.214.59:6443: connect: connection refused Apr 30 03:39:02.248649 kubelet[2360]: E0430 03:39:02.248615 2360 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://37.27.214.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 37.27.214.59:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:39:02.249073 kubelet[2360]: I0430 03:39:02.249051 2360 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:39:02.249242 kubelet[2360]: I0430 03:39:02.249224 2360 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:39:02.249798 kubelet[2360]: E0430 03:39:02.249770 2360 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.214.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-9-916214001e?timeout=10s\": dial tcp 37.27.214.59:6443: connect: connection refused" interval="200ms" Apr 30 03:39:02.251228 kubelet[2360]: I0430 03:39:02.251210 2360 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:39:02.265783 kubelet[2360]: I0430 03:39:02.265726 2360 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:39:02.267146 kubelet[2360]: I0430 03:39:02.267119 2360 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 03:39:02.267223 kubelet[2360]: I0430 03:39:02.267160 2360 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:39:02.267223 kubelet[2360]: I0430 03:39:02.267189 2360 kubelet.go:2321] "Starting kubelet main sync loop" Apr 30 03:39:02.267293 kubelet[2360]: E0430 03:39:02.267235 2360 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:39:02.277735 kubelet[2360]: W0430 03:39:02.277669 2360 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://37.27.214.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 37.27.214.59:6443: connect: connection refused Apr 30 03:39:02.277929 kubelet[2360]: E0430 03:39:02.277743 2360 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://37.27.214.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 37.27.214.59:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:39:02.280752 kubelet[2360]: E0430 03:39:02.280468 2360 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:39:02.287697 kubelet[2360]: I0430 03:39:02.287395 2360 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:39:02.287697 kubelet[2360]: I0430 03:39:02.287413 2360 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:39:02.287697 kubelet[2360]: I0430 03:39:02.287434 2360 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:39:02.289787 kubelet[2360]: I0430 03:39:02.289683 2360 policy_none.go:49] "None policy: Start" Apr 30 03:39:02.290944 kubelet[2360]: I0430 03:39:02.290904 2360 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:39:02.291039 kubelet[2360]: I0430 03:39:02.290963 2360 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:39:02.298481 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 03:39:02.309540 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 30 03:39:02.324027 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 03:39:02.326504 kubelet[2360]: I0430 03:39:02.326479 2360 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:39:02.327632 kubelet[2360]: I0430 03:39:02.326863 2360 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 03:39:02.327632 kubelet[2360]: I0430 03:39:02.326882 2360 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:39:02.327632 kubelet[2360]: I0430 03:39:02.327178 2360 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:39:02.329544 kubelet[2360]: E0430 03:39:02.329414 2360 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-9-916214001e\" not found" Apr 30 03:39:02.394460 systemd[1]: Created slice kubepods-burstable-pod67830cb85a2ba31982b07d85ed4f003c.slice - libcontainer container kubepods-burstable-pod67830cb85a2ba31982b07d85ed4f003c.slice. 
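Each static pod gets its own systemd slice here because the kubelet is using the systemd cgroup driver ("CgroupDriver":"systemd" in the NodeConfig dump above): the slice name embeds the QoS class and the pod UID, so kubepods-burstable-pod67830....slice lands under a cgroup path of roughly:

    /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod<uid>.slice/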
Apr 30 03:39:02.427018 systemd[1]: Created slice kubepods-burstable-podb746aa544ab32b8564b62874e1a1c705.slice - libcontainer container kubepods-burstable-podb746aa544ab32b8564b62874e1a1c705.slice. Apr 30 03:39:02.432288 kubelet[2360]: I0430 03:39:02.432234 2360 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-9-916214001e" Apr 30 03:39:02.433574 kubelet[2360]: E0430 03:39:02.432726 2360 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://37.27.214.59:6443/api/v1/nodes\": dial tcp 37.27.214.59:6443: connect: connection refused" node="ci-4081-3-3-9-916214001e" Apr 30 03:39:02.448883 kubelet[2360]: I0430 03:39:02.448750 2360 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9f03e3aef9c3a5d5a53006a620b7e92-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-9-916214001e\" (UID: \"a9f03e3aef9c3a5d5a53006a620b7e92\") " pod="kube-system/kube-scheduler-ci-4081-3-3-9-916214001e" Apr 30 03:39:02.448883 kubelet[2360]: I0430 03:39:02.448832 2360 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b746aa544ab32b8564b62874e1a1c705-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-9-916214001e\" (UID: \"b746aa544ab32b8564b62874e1a1c705\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-9-916214001e" Apr 30 03:39:02.448883 kubelet[2360]: I0430 03:39:02.448867 2360 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b746aa544ab32b8564b62874e1a1c705-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-9-916214001e\" (UID: \"b746aa544ab32b8564b62874e1a1c705\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-9-916214001e" Apr 30 03:39:02.449653 kubelet[2360]: I0430 03:39:02.448895 2360 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/67830cb85a2ba31982b07d85ed4f003c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-9-916214001e\" (UID: \"67830cb85a2ba31982b07d85ed4f003c\") " pod="kube-system/kube-apiserver-ci-4081-3-3-9-916214001e" Apr 30 03:39:02.449653 kubelet[2360]: I0430 03:39:02.448917 2360 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b746aa544ab32b8564b62874e1a1c705-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-9-916214001e\" (UID: \"b746aa544ab32b8564b62874e1a1c705\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-9-916214001e" Apr 30 03:39:02.449653 kubelet[2360]: I0430 03:39:02.448939 2360 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b746aa544ab32b8564b62874e1a1c705-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-9-916214001e\" (UID: \"b746aa544ab32b8564b62874e1a1c705\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-9-916214001e" Apr 30 03:39:02.449653 kubelet[2360]: I0430 03:39:02.448960 2360 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b746aa544ab32b8564b62874e1a1c705-kubeconfig\") pod 
\"kube-controller-manager-ci-4081-3-3-9-916214001e\" (UID: \"b746aa544ab32b8564b62874e1a1c705\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-9-916214001e" Apr 30 03:39:02.449653 kubelet[2360]: I0430 03:39:02.448982 2360 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/67830cb85a2ba31982b07d85ed4f003c-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-9-916214001e\" (UID: \"67830cb85a2ba31982b07d85ed4f003c\") " pod="kube-system/kube-apiserver-ci-4081-3-3-9-916214001e" Apr 30 03:39:02.449846 kubelet[2360]: I0430 03:39:02.449015 2360 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/67830cb85a2ba31982b07d85ed4f003c-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-9-916214001e\" (UID: \"67830cb85a2ba31982b07d85ed4f003c\") " pod="kube-system/kube-apiserver-ci-4081-3-3-9-916214001e" Apr 30 03:39:02.450488 kubelet[2360]: E0430 03:39:02.450432 2360 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.214.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-9-916214001e?timeout=10s\": dial tcp 37.27.214.59:6443: connect: connection refused" interval="400ms" Apr 30 03:39:02.454498 systemd[1]: Created slice kubepods-burstable-poda9f03e3aef9c3a5d5a53006a620b7e92.slice - libcontainer container kubepods-burstable-poda9f03e3aef9c3a5d5a53006a620b7e92.slice. Apr 30 03:39:02.636141 kubelet[2360]: I0430 03:39:02.635974 2360 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-9-916214001e" Apr 30 03:39:02.636607 kubelet[2360]: E0430 03:39:02.636468 2360 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://37.27.214.59:6443/api/v1/nodes\": dial tcp 37.27.214.59:6443: connect: connection refused" node="ci-4081-3-3-9-916214001e" Apr 30 03:39:02.725557 containerd[1508]: time="2025-04-30T03:39:02.725243440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-9-916214001e,Uid:67830cb85a2ba31982b07d85ed4f003c,Namespace:kube-system,Attempt:0,}" Apr 30 03:39:02.739499 containerd[1508]: time="2025-04-30T03:39:02.739185408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-9-916214001e,Uid:b746aa544ab32b8564b62874e1a1c705,Namespace:kube-system,Attempt:0,}" Apr 30 03:39:02.758275 containerd[1508]: time="2025-04-30T03:39:02.758018002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-9-916214001e,Uid:a9f03e3aef9c3a5d5a53006a620b7e92,Namespace:kube-system,Attempt:0,}" Apr 30 03:39:02.851868 kubelet[2360]: E0430 03:39:02.851735 2360 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.214.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-9-916214001e?timeout=10s\": dial tcp 37.27.214.59:6443: connect: connection refused" interval="800ms" Apr 30 03:39:03.039681 kubelet[2360]: I0430 03:39:03.039568 2360 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-9-916214001e" Apr 30 03:39:03.040014 kubelet[2360]: E0430 03:39:03.039910 2360 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://37.27.214.59:6443/api/v1/nodes\": dial tcp 37.27.214.59:6443: connect: connection refused" node="ci-4081-3-3-9-916214001e" Apr 30 03:39:03.185366 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount520636059.mount: Deactivated successfully. Apr 30 03:39:03.196087 containerd[1508]: time="2025-04-30T03:39:03.195989118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:39:03.198222 containerd[1508]: time="2025-04-30T03:39:03.197942013Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:39:03.198385 containerd[1508]: time="2025-04-30T03:39:03.198320747Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:39:03.200293 containerd[1508]: time="2025-04-30T03:39:03.200241131Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:39:03.201605 containerd[1508]: time="2025-04-30T03:39:03.201574348Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:39:03.201960 containerd[1508]: time="2025-04-30T03:39:03.201933177Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Apr 30 03:39:03.202592 containerd[1508]: time="2025-04-30T03:39:03.202220514Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:39:03.206264 containerd[1508]: time="2025-04-30T03:39:03.206199727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:39:03.207651 containerd[1508]: time="2025-04-30T03:39:03.207226051Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 467.950215ms" Apr 30 03:39:03.211072 containerd[1508]: time="2025-04-30T03:39:03.210923505Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 452.83843ms" Apr 30 03:39:03.212108 containerd[1508]: time="2025-04-30T03:39:03.212049472Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 486.708894ms" Apr 30 03:39:03.251460 kubelet[2360]: W0430 03:39:03.251336 2360 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://37.27.214.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 37.27.214.59:6443: connect: connection refused Apr 30 03:39:03.251460 kubelet[2360]: E0430 03:39:03.251415 2360 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://37.27.214.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 37.27.214.59:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:39:03.366853 containerd[1508]: time="2025-04-30T03:39:03.366514867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:39:03.369847 containerd[1508]: time="2025-04-30T03:39:03.368291067Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:39:03.369847 containerd[1508]: time="2025-04-30T03:39:03.368851024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:39:03.369847 containerd[1508]: time="2025-04-30T03:39:03.368869158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:03.369847 containerd[1508]: time="2025-04-30T03:39:03.368933505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:03.369847 containerd[1508]: time="2025-04-30T03:39:03.368505520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:39:03.369847 containerd[1508]: time="2025-04-30T03:39:03.368523052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:03.369847 containerd[1508]: time="2025-04-30T03:39:03.368604011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:03.375621 containerd[1508]: time="2025-04-30T03:39:03.375175481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:39:03.375621 containerd[1508]: time="2025-04-30T03:39:03.375219381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:39:03.375621 containerd[1508]: time="2025-04-30T03:39:03.375232336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:03.375621 containerd[1508]: time="2025-04-30T03:39:03.375295902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:03.402358 systemd[1]: Started cri-containerd-ffdb066a230693bd0788daab9b8a080a85c811afea9071febd5403cccf33d62e.scope - libcontainer container ffdb066a230693bd0788daab9b8a080a85c811afea9071febd5403cccf33d62e. Apr 30 03:39:03.406965 systemd[1]: Started cri-containerd-1978e7ce3042b8588e76911aac12d975b2bbad9e58e9d8c66a9635ec9db04578.scope - libcontainer container 1978e7ce3042b8588e76911aac12d975b2bbad9e58e9d8c66a9635ec9db04578. 
Apr 30 03:39:03.424000 systemd[1]: Started cri-containerd-90aa3755e08297251359aa303b205c9be9257798f293df4b159295d1c7b43b58.scope - libcontainer container 90aa3755e08297251359aa303b205c9be9257798f293df4b159295d1c7b43b58. Apr 30 03:39:03.449860 kubelet[2360]: W0430 03:39:03.448988 2360 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://37.27.214.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 37.27.214.59:6443: connect: connection refused Apr 30 03:39:03.449860 kubelet[2360]: E0430 03:39:03.449050 2360 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://37.27.214.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 37.27.214.59:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:39:03.476470 containerd[1508]: time="2025-04-30T03:39:03.476267385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-9-916214001e,Uid:67830cb85a2ba31982b07d85ed4f003c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffdb066a230693bd0788daab9b8a080a85c811afea9071febd5403cccf33d62e\"" Apr 30 03:39:03.478061 kubelet[2360]: W0430 03:39:03.477805 2360 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://37.27.214.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 37.27.214.59:6443: connect: connection refused Apr 30 03:39:03.478061 kubelet[2360]: E0430 03:39:03.478019 2360 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://37.27.214.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 37.27.214.59:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:39:03.485097 containerd[1508]: time="2025-04-30T03:39:03.485060461Z" level=info msg="CreateContainer within sandbox \"ffdb066a230693bd0788daab9b8a080a85c811afea9071febd5403cccf33d62e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 03:39:03.494985 containerd[1508]: time="2025-04-30T03:39:03.494864724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-9-916214001e,Uid:b746aa544ab32b8564b62874e1a1c705,Namespace:kube-system,Attempt:0,} returns sandbox id \"1978e7ce3042b8588e76911aac12d975b2bbad9e58e9d8c66a9635ec9db04578\"" Apr 30 03:39:03.498032 containerd[1508]: time="2025-04-30T03:39:03.497928356Z" level=info msg="CreateContainer within sandbox \"1978e7ce3042b8588e76911aac12d975b2bbad9e58e9d8c66a9635ec9db04578\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 03:39:03.500329 containerd[1508]: time="2025-04-30T03:39:03.500282046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-9-916214001e,Uid:a9f03e3aef9c3a5d5a53006a620b7e92,Namespace:kube-system,Attempt:0,} returns sandbox id \"90aa3755e08297251359aa303b205c9be9257798f293df4b159295d1c7b43b58\"" Apr 30 03:39:03.503096 containerd[1508]: time="2025-04-30T03:39:03.503051519Z" level=info msg="CreateContainer within sandbox \"90aa3755e08297251359aa303b205c9be9257798f293df4b159295d1c7b43b58\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 03:39:03.512788 containerd[1508]: time="2025-04-30T03:39:03.512570689Z" level=info msg="CreateContainer within 
sandbox \"ffdb066a230693bd0788daab9b8a080a85c811afea9071febd5403cccf33d62e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cbcdb10ca514b5b73f9f450c0dd549bb59d0b09e953dbf235deee1147aab8064\"" Apr 30 03:39:03.513539 containerd[1508]: time="2025-04-30T03:39:03.513276113Z" level=info msg="StartContainer for \"cbcdb10ca514b5b73f9f450c0dd549bb59d0b09e953dbf235deee1147aab8064\"" Apr 30 03:39:03.519452 containerd[1508]: time="2025-04-30T03:39:03.519375997Z" level=info msg="CreateContainer within sandbox \"1978e7ce3042b8588e76911aac12d975b2bbad9e58e9d8c66a9635ec9db04578\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b03130e4419dcba60d4d1d037c16eee2c565fd80eec6d27a4308799c38fc3a59\"" Apr 30 03:39:03.520430 containerd[1508]: time="2025-04-30T03:39:03.520399937Z" level=info msg="StartContainer for \"b03130e4419dcba60d4d1d037c16eee2c565fd80eec6d27a4308799c38fc3a59\"" Apr 30 03:39:03.528526 containerd[1508]: time="2025-04-30T03:39:03.528479995Z" level=info msg="CreateContainer within sandbox \"90aa3755e08297251359aa303b205c9be9257798f293df4b159295d1c7b43b58\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cf2a07f15c1984b8e5752ef81db5d32d417f9ae460342a98838e32fe2506d1dc\"" Apr 30 03:39:03.530442 containerd[1508]: time="2025-04-30T03:39:03.530402223Z" level=info msg="StartContainer for \"cf2a07f15c1984b8e5752ef81db5d32d417f9ae460342a98838e32fe2506d1dc\"" Apr 30 03:39:03.549114 systemd[1]: Started cri-containerd-cbcdb10ca514b5b73f9f450c0dd549bb59d0b09e953dbf235deee1147aab8064.scope - libcontainer container cbcdb10ca514b5b73f9f450c0dd549bb59d0b09e953dbf235deee1147aab8064. Apr 30 03:39:03.566019 systemd[1]: Started cri-containerd-b03130e4419dcba60d4d1d037c16eee2c565fd80eec6d27a4308799c38fc3a59.scope - libcontainer container b03130e4419dcba60d4d1d037c16eee2c565fd80eec6d27a4308799c38fc3a59. Apr 30 03:39:03.574034 systemd[1]: Started cri-containerd-cf2a07f15c1984b8e5752ef81db5d32d417f9ae460342a98838e32fe2506d1dc.scope - libcontainer container cf2a07f15c1984b8e5752ef81db5d32d417f9ae460342a98838e32fe2506d1dc. 
Apr 30 03:39:03.609487 containerd[1508]: time="2025-04-30T03:39:03.609112702Z" level=info msg="StartContainer for \"cbcdb10ca514b5b73f9f450c0dd549bb59d0b09e953dbf235deee1147aab8064\" returns successfully"
Apr 30 03:39:03.639092 containerd[1508]: time="2025-04-30T03:39:03.638132560Z" level=info msg="StartContainer for \"b03130e4419dcba60d4d1d037c16eee2c565fd80eec6d27a4308799c38fc3a59\" returns successfully"
Apr 30 03:39:03.640433 containerd[1508]: time="2025-04-30T03:39:03.640384903Z" level=info msg="StartContainer for \"cf2a07f15c1984b8e5752ef81db5d32d417f9ae460342a98838e32fe2506d1dc\" returns successfully"
Apr 30 03:39:03.653297 kubelet[2360]: E0430 03:39:03.653235 2360 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.214.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-9-916214001e?timeout=10s\": dial tcp 37.27.214.59:6443: connect: connection refused" interval="1.6s"
Apr 30 03:39:03.753267 kubelet[2360]: W0430 03:39:03.753192 2360 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://37.27.214.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-9-916214001e&limit=500&resourceVersion=0": dial tcp 37.27.214.59:6443: connect: connection refused
Apr 30 03:39:03.753392 kubelet[2360]: E0430 03:39:03.753281 2360 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://37.27.214.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-9-916214001e&limit=500&resourceVersion=0\": dial tcp 37.27.214.59:6443: connect: connection refused" logger="UnhandledError"
Apr 30 03:39:03.841902 kubelet[2360]: I0430 03:39:03.841788 2360 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-9-916214001e"
Apr 30 03:39:03.844836 kubelet[2360]: E0430 03:39:03.842185 2360 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://37.27.214.59:6443/api/v1/nodes\": dial tcp 37.27.214.59:6443: connect: connection refused" node="ci-4081-3-3-9-916214001e"
Apr 30 03:39:05.338542 kubelet[2360]: E0430 03:39:05.338479 2360 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-3-9-916214001e\" not found" node="ci-4081-3-3-9-916214001e"
Apr 30 03:39:05.447014 kubelet[2360]: I0430 03:39:05.446958 2360 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-9-916214001e"
Apr 30 03:39:05.462455 kubelet[2360]: I0430 03:39:05.461932 2360 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-3-9-916214001e"
Apr 30 03:39:05.462455 kubelet[2360]: E0430 03:39:05.461990 2360 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-3-9-916214001e\": node \"ci-4081-3-3-9-916214001e\" not found"
Apr 30 03:39:05.471975 kubelet[2360]: E0430 03:39:05.471936 2360 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-9-916214001e\" not found"
Apr 30 03:39:05.572411 kubelet[2360]: E0430 03:39:05.572359 2360 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-9-916214001e\" not found"
Apr 30 03:39:05.673349 kubelet[2360]: E0430 03:39:05.673158 2360 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-9-916214001e\" not found"
Apr 30 03:39:05.774335 kubelet[2360]: E0430 03:39:05.774264 2360 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-9-916214001e\" not found"
Apr 30 03:39:05.874718 kubelet[2360]: E0430 03:39:05.874626 2360 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-9-916214001e\" not found"
Apr 30 03:39:05.975729 kubelet[2360]: E0430 03:39:05.975521 2360 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-9-916214001e\" not found"
Apr 30 03:39:06.076360 kubelet[2360]: E0430 03:39:06.076270 2360 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-9-916214001e\" not found"
Apr 30 03:39:06.176938 kubelet[2360]: E0430 03:39:06.176879 2360 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-9-916214001e\" not found"
Apr 30 03:39:06.277971 kubelet[2360]: E0430 03:39:06.277761 2360 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-9-916214001e\" not found"
Apr 30 03:39:06.378421 kubelet[2360]: E0430 03:39:06.378339 2360 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-9-916214001e\" not found"
Apr 30 03:39:06.479299 kubelet[2360]: E0430 03:39:06.479240 2360 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-9-916214001e\" not found"
Apr 30 03:39:06.579966 kubelet[2360]: E0430 03:39:06.579917 2360 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-9-916214001e\" not found"
Apr 30 03:39:06.680513 kubelet[2360]: E0430 03:39:06.680451 2360 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-9-916214001e\" not found"
Apr 30 03:39:07.224494 kubelet[2360]: I0430 03:39:07.224447 2360 apiserver.go:52] "Watching apiserver"
Apr 30 03:39:07.249067 kubelet[2360]: I0430 03:39:07.248975 2360 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 30 03:39:07.392988 systemd[1]: Reloading requested from client PID 2633 ('systemctl') (unit session-7.scope)...
Apr 30 03:39:07.393014 systemd[1]: Reloading...
Apr 30 03:39:07.498878 zram_generator::config[2670]: No configuration found.
Apr 30 03:39:07.612790 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:39:07.706273 systemd[1]: Reloading finished in 312 ms.
Apr 30 03:39:07.747592 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:39:07.761484 systemd[1]: kubelet.service: Deactivated successfully.
Apr 30 03:39:07.761667 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:39:07.761714 systemd[1]: kubelet.service: Consumed 1.231s CPU time, 114.1M memory peak, 0B memory swap peak.
Apr 30 03:39:07.769069 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:39:07.884409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:39:07.890623 (kubelet)[2724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 03:39:07.994361 kubelet[2724]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 03:39:07.997096 kubelet[2724]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Apr 30 03:39:07.997096 kubelet[2724]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 03:39:07.997096 kubelet[2724]: I0430 03:39:07.994772 2724 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 03:39:08.002096 kubelet[2724]: I0430 03:39:08.002045 2724 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Apr 30 03:39:08.002096 kubelet[2724]: I0430 03:39:08.002091 2724 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 30 03:39:08.002639 kubelet[2724]: I0430 03:39:08.002449 2724 server.go:929] "Client rotation is on, will bootstrap in background"
Apr 30 03:39:08.006446 kubelet[2724]: I0430 03:39:08.006366 2724 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Apr 30 03:39:08.012103 kubelet[2724]: I0430 03:39:08.012047 2724 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 03:39:08.020413 kubelet[2724]: E0430 03:39:08.020364 2724 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 30 03:39:08.020413 kubelet[2724]: I0430 03:39:08.020406 2724 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 30 03:39:08.025872 kubelet[2724]: I0430 03:39:08.025537 2724 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 03:39:08.026615 kubelet[2724]: I0430 03:39:08.026577 2724 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Apr 30 03:39:08.026750 kubelet[2724]: I0430 03:39:08.026707 2724 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 30 03:39:08.027702 kubelet[2724]: I0430 03:39:08.026748 2724 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-9-916214001e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 30 03:39:08.027702 kubelet[2724]: I0430 03:39:08.027418 2724 topology_manager.go:138] "Creating topology manager with none policy"
Apr 30 03:39:08.027702 kubelet[2724]: I0430 03:39:08.027432 2724 container_manager_linux.go:300] "Creating device plugin manager"
Apr 30 03:39:08.027702 kubelet[2724]: I0430 03:39:08.027475 2724 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 03:39:08.027702 kubelet[2724]: I0430 03:39:08.027591 2724 kubelet.go:408] "Attempting to sync node with API server"
Apr 30 03:39:08.028046 kubelet[2724]: I0430 03:39:08.027605 2724 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 30 03:39:08.028541 kubelet[2724]: I0430 03:39:08.028511 2724 kubelet.go:314] "Adding apiserver pod source"
Apr 30 03:39:08.028603 kubelet[2724]: I0430 03:39:08.028543 2724 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 30 03:39:08.032413 kubelet[2724]: I0430 03:39:08.032377 2724 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 30 03:39:08.034026 kubelet[2724]: I0430 03:39:08.033231 2724 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 30 03:39:08.034653 kubelet[2724]: I0430 03:39:08.034638 2724 server.go:1269] "Started kubelet"
Apr 30 03:39:08.036693 kubelet[2724]: I0430 03:39:08.036678 2724 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 30 03:39:08.046240 kubelet[2724]: I0430 03:39:08.046185 2724 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Apr 30 03:39:08.051136 kubelet[2724]: I0430 03:39:08.051106 2724 server.go:460] "Adding debug handlers to kubelet server"
Apr 30 03:39:08.052825 kubelet[2724]: I0430 03:39:08.052314 2724 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 30 03:39:08.053227 kubelet[2724]: I0430 03:39:08.053210 2724 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 30 03:39:08.055875 kubelet[2724]: I0430 03:39:08.053928 2724 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 30 03:39:08.058857 kubelet[2724]: E0430 03:39:08.057234 2724 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-9-916214001e\" not found"
Apr 30 03:39:08.058857 kubelet[2724]: I0430 03:39:08.057306 2724 volume_manager.go:289] "Starting Kubelet Volume Manager"
Apr 30 03:39:08.058857 kubelet[2724]: I0430 03:39:08.057489 2724 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 30 03:39:08.058857 kubelet[2724]: I0430 03:39:08.057652 2724 reconciler.go:26] "Reconciler: start to sync state"
Apr 30 03:39:08.058857 kubelet[2724]: I0430 03:39:08.058635 2724 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 30 03:39:08.064776 kubelet[2724]: I0430 03:39:08.062307 2724 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Apr 30 03:39:08.066368 kubelet[2724]: E0430 03:39:08.066329 2724 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 30 03:39:08.067634 kubelet[2724]: I0430 03:39:08.067344 2724 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Apr 30 03:39:08.067634 kubelet[2724]: I0430 03:39:08.067367 2724 status_manager.go:217] "Starting to sync pod status with apiserver"
Apr 30 03:39:08.067634 kubelet[2724]: I0430 03:39:08.067381 2724 kubelet.go:2321] "Starting kubelet main sync loop"
Apr 30 03:39:08.067634 kubelet[2724]: E0430 03:39:08.067419 2724 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 30 03:39:08.076212 kubelet[2724]: I0430 03:39:08.076130 2724 factory.go:221] Registration of the containerd container factory successfully
Apr 30 03:39:08.076365 kubelet[2724]: I0430 03:39:08.076347 2724 factory.go:221] Registration of the systemd container factory successfully
Apr 30 03:39:08.147124 kubelet[2724]: I0430 03:39:08.146923 2724 cpu_manager.go:214] "Starting CPU manager" policy="none"
Apr 30 03:39:08.147124 kubelet[2724]: I0430 03:39:08.147013 2724 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Apr 30 03:39:08.147124 kubelet[2724]: I0430 03:39:08.147080 2724 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 03:39:08.149364 kubelet[2724]: I0430 03:39:08.148040 2724 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 30 03:39:08.149364 kubelet[2724]: I0430 03:39:08.148082 2724 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 30 03:39:08.149364 kubelet[2724]: I0430 03:39:08.148109 2724 policy_none.go:49] "None policy: Start"
Apr 30 03:39:08.150117 kubelet[2724]: I0430 03:39:08.149626 2724 memory_manager.go:170] "Starting memorymanager" policy="None"
Apr 30 03:39:08.150117 kubelet[2724]: I0430 03:39:08.149659 2724 state_mem.go:35] "Initializing new in-memory state store"
Apr 30 03:39:08.150117 kubelet[2724]: I0430 03:39:08.149961 2724 state_mem.go:75] "Updated machine memory state"
Apr 30 03:39:08.159207 kubelet[2724]: I0430 03:39:08.159171 2724 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Apr 30 03:39:08.159996 kubelet[2724]: I0430 03:39:08.159542 2724 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 30 03:39:08.160359 kubelet[2724]: I0430 03:39:08.160243 2724 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 30 03:39:08.161228 kubelet[2724]: I0430 03:39:08.160906 2724 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 30 03:39:08.183563 kubelet[2724]: E0430 03:39:08.183407 2724 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-3-9-916214001e\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-3-9-916214001e"
Apr 30 03:39:08.259657 kubelet[2724]: I0430 03:39:08.259544 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/67830cb85a2ba31982b07d85ed4f003c-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-9-916214001e\" (UID: \"67830cb85a2ba31982b07d85ed4f003c\") " pod="kube-system/kube-apiserver-ci-4081-3-3-9-916214001e"
Apr 30 03:39:08.259657 kubelet[2724]: I0430 03:39:08.259592 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b746aa544ab32b8564b62874e1a1c705-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-9-916214001e\" (UID: \"b746aa544ab32b8564b62874e1a1c705\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-9-916214001e"
Apr 30 03:39:08.259657 kubelet[2724]: I0430 03:39:08.259616 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b746aa544ab32b8564b62874e1a1c705-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-9-916214001e\" (UID: \"b746aa544ab32b8564b62874e1a1c705\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-9-916214001e"
Apr 30 03:39:08.259657 kubelet[2724]: I0430 03:39:08.259642 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/67830cb85a2ba31982b07d85ed4f003c-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-9-916214001e\" (UID: \"67830cb85a2ba31982b07d85ed4f003c\") " pod="kube-system/kube-apiserver-ci-4081-3-3-9-916214001e"
Apr 30 03:39:08.259657 kubelet[2724]: I0430 03:39:08.259661 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/67830cb85a2ba31982b07d85ed4f003c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-9-916214001e\" (UID: \"67830cb85a2ba31982b07d85ed4f003c\") " pod="kube-system/kube-apiserver-ci-4081-3-3-9-916214001e"
Apr 30 03:39:08.259992 kubelet[2724]: I0430 03:39:08.259678 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b746aa544ab32b8564b62874e1a1c705-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-9-916214001e\" (UID: \"b746aa544ab32b8564b62874e1a1c705\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-9-916214001e"
Apr 30 03:39:08.259992 kubelet[2724]: I0430 03:39:08.259695 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b746aa544ab32b8564b62874e1a1c705-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-9-916214001e\" (UID: \"b746aa544ab32b8564b62874e1a1c705\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-9-916214001e"
Apr 30 03:39:08.259992 kubelet[2724]: I0430 03:39:08.259719 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b746aa544ab32b8564b62874e1a1c705-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-9-916214001e\" (UID: \"b746aa544ab32b8564b62874e1a1c705\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-9-916214001e"
Apr 30 03:39:08.259992 kubelet[2724]: I0430 03:39:08.259740 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9f03e3aef9c3a5d5a53006a620b7e92-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-9-916214001e\" (UID: \"a9f03e3aef9c3a5d5a53006a620b7e92\") " pod="kube-system/kube-scheduler-ci-4081-3-3-9-916214001e"
Apr 30 03:39:08.273384 kubelet[2724]: I0430 03:39:08.273339 2724 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-9-916214001e"
Apr 30 03:39:08.283425 kubelet[2724]: I0430 03:39:08.283380 2724 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-3-9-916214001e"
Apr 30 03:39:08.283912 kubelet[2724]: I0430 03:39:08.283504 2724 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-3-9-916214001e"
Apr 30 03:39:09.030318 kubelet[2724]: I0430 03:39:09.029858 2724 apiserver.go:52] "Watching apiserver"
Apr 30 03:39:09.057704 kubelet[2724]: I0430 03:39:09.057630 2724 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 30 03:39:09.136871 kubelet[2724]: E0430 03:39:09.136829 2724 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-3-9-916214001e\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-3-9-916214001e"
Apr 30 03:39:09.164846 kubelet[2724]: I0430 03:39:09.164024 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-9-916214001e" podStartSLOduration=1.164004933 podStartE2EDuration="1.164004933s" podCreationTimestamp="2025-04-30 03:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:39:09.156176892 +0000 UTC m=+1.260537952" watchObservedRunningTime="2025-04-30 03:39:09.164004933 +0000 UTC m=+1.268365992"
Apr 30 03:39:09.174367 kubelet[2724]: I0430 03:39:09.174307 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-9-916214001e" podStartSLOduration=1.174287173 podStartE2EDuration="1.174287173s" podCreationTimestamp="2025-04-30 03:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:39:09.164382445 +0000 UTC m=+1.268743504" watchObservedRunningTime="2025-04-30 03:39:09.174287173 +0000 UTC m=+1.278648222"
Apr 30 03:39:09.189306 kubelet[2724]: I0430 03:39:09.188933 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-9-916214001e" podStartSLOduration=2.188909254 podStartE2EDuration="2.188909254s" podCreationTimestamp="2025-04-30 03:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:39:09.174579656 +0000 UTC m=+1.278940715" watchObservedRunningTime="2025-04-30 03:39:09.188909254 +0000 UTC m=+1.293270334"
Apr 30 03:39:12.751692 kubelet[2724]: I0430 03:39:12.751648 2724 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 30 03:39:12.752435 kubelet[2724]: I0430 03:39:12.752247 2724 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 30 03:39:12.752472 containerd[1508]: time="2025-04-30T03:39:12.752024867Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 30 03:39:13.329326 systemd[1]: Created slice kubepods-besteffort-podf4fbdc9e_6942_4a57_960b_69d5003e17b3.slice - libcontainer container kubepods-besteffort-podf4fbdc9e_6942_4a57_960b_69d5003e17b3.slice.
Apr 30 03:39:13.393407 kubelet[2724]: I0430 03:39:13.393031 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f4fbdc9e-6942-4a57-960b-69d5003e17b3-kube-proxy\") pod \"kube-proxy-b6mlx\" (UID: \"f4fbdc9e-6942-4a57-960b-69d5003e17b3\") " pod="kube-system/kube-proxy-b6mlx"
Apr 30 03:39:13.393407 kubelet[2724]: I0430 03:39:13.393097 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4fbdc9e-6942-4a57-960b-69d5003e17b3-lib-modules\") pod \"kube-proxy-b6mlx\" (UID: \"f4fbdc9e-6942-4a57-960b-69d5003e17b3\") " pod="kube-system/kube-proxy-b6mlx"
Apr 30 03:39:13.393407 kubelet[2724]: I0430 03:39:13.393118 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjdhm\" (UniqueName: \"kubernetes.io/projected/f4fbdc9e-6942-4a57-960b-69d5003e17b3-kube-api-access-wjdhm\") pod \"kube-proxy-b6mlx\" (UID: \"f4fbdc9e-6942-4a57-960b-69d5003e17b3\") " pod="kube-system/kube-proxy-b6mlx"
Apr 30 03:39:13.393407 kubelet[2724]: I0430 03:39:13.393379 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4fbdc9e-6942-4a57-960b-69d5003e17b3-xtables-lock\") pod \"kube-proxy-b6mlx\" (UID: \"f4fbdc9e-6942-4a57-960b-69d5003e17b3\") " pod="kube-system/kube-proxy-b6mlx"
Apr 30 03:39:13.404023 sudo[1882]: pam_unix(sudo:session): session closed for user root
Apr 30 03:39:13.506728 kubelet[2724]: E0430 03:39:13.506639 2724 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Apr 30 03:39:13.506728 kubelet[2724]: E0430 03:39:13.506695 2724 projected.go:194] Error preparing data for projected volume kube-api-access-wjdhm for pod kube-system/kube-proxy-b6mlx: configmap "kube-root-ca.crt" not found
Apr 30 03:39:13.506728 kubelet[2724]: E0430 03:39:13.506807 2724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f4fbdc9e-6942-4a57-960b-69d5003e17b3-kube-api-access-wjdhm podName:f4fbdc9e-6942-4a57-960b-69d5003e17b3 nodeName:}" failed. No retries permitted until 2025-04-30 03:39:14.006767069 +0000 UTC m=+6.111128148 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wjdhm" (UniqueName: "kubernetes.io/projected/f4fbdc9e-6942-4a57-960b-69d5003e17b3-kube-api-access-wjdhm") pod "kube-proxy-b6mlx" (UID: "f4fbdc9e-6942-4a57-960b-69d5003e17b3") : configmap "kube-root-ca.crt" not found
Apr 30 03:39:13.563362 sshd[1879]: pam_unix(sshd:session): session closed for user core
Apr 30 03:39:13.568750 systemd[1]: sshd@6-37.27.214.59:22-139.178.68.195:36624.service: Deactivated successfully.
Apr 30 03:39:13.572163 systemd[1]: session-7.scope: Deactivated successfully.
Apr 30 03:39:13.572447 systemd[1]: session-7.scope: Consumed 5.271s CPU time, 147.0M memory peak, 0B memory swap peak.
Apr 30 03:39:13.573714 systemd-logind[1480]: Session 7 logged out. Waiting for processes to exit.
Apr 30 03:39:13.575197 systemd-logind[1480]: Removed session 7.
Apr 30 03:39:13.656780 systemd[1]: Created slice kubepods-besteffort-pod02b9b27d_5e65_4f84_9a41_66774b06e58d.slice - libcontainer container kubepods-besteffort-pod02b9b27d_5e65_4f84_9a41_66774b06e58d.slice.
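[Editor's note] The MountVolume.SetUp failure above is transient: the kube-root-ca.crt ConfigMap simply has not been published in the namespace yet, so the operation executor schedules a retry ("No retries permitted until ... (durationBeforeRetry 500ms)") with a delay that grows on repeated failures. A hedged sketch of that doubling-backoff pattern follows; the initial delay and cap here are illustrative, not the kubelet's exact constants.

package main

import (
	"fmt"
	"time"
)

// retryWithBackoff retries op, doubling the wait after every failure up to a
// cap, mirroring the "durationBeforeRetry 500ms" behaviour in the log above.
func retryWithBackoff(op func() error, initial, max time.Duration, attempts int) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed (%v); no retries permitted for %s\n", i+1, err, delay)
		time.Sleep(delay)
		if delay *= 2; delay > max {
			delay = max
		}
	}
	return err
}

func main() {
	calls := 0
	// Simulate the ConfigMap appearing after a few attempts.
	_ = retryWithBackoff(func() error {
		calls++
		if calls < 4 {
			return fmt.Errorf("configmap %q not found", "kube-root-ca.crt")
		}
		return nil
	}, 500*time.Millisecond, 2*time.Minute, 10)
}
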
Apr 30 03:39:13.695727 kubelet[2724]: I0430 03:39:13.695622 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxr4h\" (UniqueName: \"kubernetes.io/projected/02b9b27d-5e65-4f84-9a41-66774b06e58d-kube-api-access-qxr4h\") pod \"tigera-operator-6f6897fdc5-lkrnh\" (UID: \"02b9b27d-5e65-4f84-9a41-66774b06e58d\") " pod="tigera-operator/tigera-operator-6f6897fdc5-lkrnh"
Apr 30 03:39:13.695727 kubelet[2724]: I0430 03:39:13.695716 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/02b9b27d-5e65-4f84-9a41-66774b06e58d-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-lkrnh\" (UID: \"02b9b27d-5e65-4f84-9a41-66774b06e58d\") " pod="tigera-operator/tigera-operator-6f6897fdc5-lkrnh"
Apr 30 03:39:13.962326 containerd[1508]: time="2025-04-30T03:39:13.961969236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-lkrnh,Uid:02b9b27d-5e65-4f84-9a41-66774b06e58d,Namespace:tigera-operator,Attempt:0,}"
Apr 30 03:39:13.995796 containerd[1508]: time="2025-04-30T03:39:13.995606735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:39:13.995796 containerd[1508]: time="2025-04-30T03:39:13.995724025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:39:13.995796 containerd[1508]: time="2025-04-30T03:39:13.995748030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:39:13.996228 containerd[1508]: time="2025-04-30T03:39:13.995931713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:39:14.024266 systemd[1]: Started cri-containerd-39e83a759c895a08c1ed72c3e27a41f7260a3e25bb6aee390754d8847316e8b6.scope - libcontainer container 39e83a759c895a08c1ed72c3e27a41f7260a3e25bb6aee390754d8847316e8b6.
Apr 30 03:39:14.079631 containerd[1508]: time="2025-04-30T03:39:14.079560469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-lkrnh,Uid:02b9b27d-5e65-4f84-9a41-66774b06e58d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"39e83a759c895a08c1ed72c3e27a41f7260a3e25bb6aee390754d8847316e8b6\""
Apr 30 03:39:14.082648 containerd[1508]: time="2025-04-30T03:39:14.082036877Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
Apr 30 03:39:14.241475 containerd[1508]: time="2025-04-30T03:39:14.241267263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b6mlx,Uid:f4fbdc9e-6942-4a57-960b-69d5003e17b3,Namespace:kube-system,Attempt:0,}"
Apr 30 03:39:14.276085 containerd[1508]: time="2025-04-30T03:39:14.275457938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:39:14.277368 containerd[1508]: time="2025-04-30T03:39:14.277233043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:39:14.277368 containerd[1508]: time="2025-04-30T03:39:14.277283808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:39:14.277606 containerd[1508]: time="2025-04-30T03:39:14.277450840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:39:14.302052 systemd[1]: Started cri-containerd-dcede17c022a8b409112e21c38768c7dc5a775d7e439f2dd0161a0decba9ee74.scope - libcontainer container dcede17c022a8b409112e21c38768c7dc5a775d7e439f2dd0161a0decba9ee74.
Apr 30 03:39:14.340956 containerd[1508]: time="2025-04-30T03:39:14.340870957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b6mlx,Uid:f4fbdc9e-6942-4a57-960b-69d5003e17b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcede17c022a8b409112e21c38768c7dc5a775d7e439f2dd0161a0decba9ee74\""
Apr 30 03:39:14.345060 containerd[1508]: time="2025-04-30T03:39:14.344857164Z" level=info msg="CreateContainer within sandbox \"dcede17c022a8b409112e21c38768c7dc5a775d7e439f2dd0161a0decba9ee74\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 30 03:39:14.367368 containerd[1508]: time="2025-04-30T03:39:14.367312376Z" level=info msg="CreateContainer within sandbox \"dcede17c022a8b409112e21c38768c7dc5a775d7e439f2dd0161a0decba9ee74\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6f31ed93596cf518399fe69540248e9f63b291c8718a9dba34d13aa1bda1fa65\""
Apr 30 03:39:14.368913 containerd[1508]: time="2025-04-30T03:39:14.368180211Z" level=info msg="StartContainer for \"6f31ed93596cf518399fe69540248e9f63b291c8718a9dba34d13aa1bda1fa65\""
Apr 30 03:39:14.398985 systemd[1]: Started cri-containerd-6f31ed93596cf518399fe69540248e9f63b291c8718a9dba34d13aa1bda1fa65.scope - libcontainer container 6f31ed93596cf518399fe69540248e9f63b291c8718a9dba34d13aa1bda1fa65.
Apr 30 03:39:14.426689 containerd[1508]: time="2025-04-30T03:39:14.426236872Z" level=info msg="StartContainer for \"6f31ed93596cf518399fe69540248e9f63b291c8718a9dba34d13aa1bda1fa65\" returns successfully"
Apr 30 03:39:17.242317 kubelet[2724]: I0430 03:39:17.241659 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b6mlx" podStartSLOduration=4.241627041 podStartE2EDuration="4.241627041s" podCreationTimestamp="2025-04-30 03:39:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:39:15.174044613 +0000 UTC m=+7.278405682" watchObservedRunningTime="2025-04-30 03:39:17.241627041 +0000 UTC m=+9.345988140"
Apr 30 03:39:19.913475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2220474760.mount: Deactivated successfully.
Apr 30 03:39:20.303582 containerd[1508]: time="2025-04-30T03:39:20.303453837Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:39:20.304493 containerd[1508]: time="2025-04-30T03:39:20.304454378Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662"
Apr 30 03:39:20.305394 containerd[1508]: time="2025-04-30T03:39:20.305357974Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:39:20.307278 containerd[1508]: time="2025-04-30T03:39:20.307256400Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:39:20.308207 containerd[1508]: time="2025-04-30T03:39:20.307793404Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 6.225685743s"
Apr 30 03:39:20.308207 containerd[1508]: time="2025-04-30T03:39:20.307838579Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\""
Apr 30 03:39:20.309346 containerd[1508]: time="2025-04-30T03:39:20.309326279Z" level=info msg="CreateContainer within sandbox \"39e83a759c895a08c1ed72c3e27a41f7260a3e25bb6aee390754d8847316e8b6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 30 03:39:20.327455 containerd[1508]: time="2025-04-30T03:39:20.325160559Z" level=info msg="CreateContainer within sandbox \"39e83a759c895a08c1ed72c3e27a41f7260a3e25bb6aee390754d8847316e8b6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"fea9cf58420734365a43a1aed495d6c33bf5e1ea65d7426af4bcd332b951c3a3\""
Apr 30 03:39:20.327611 containerd[1508]: time="2025-04-30T03:39:20.327567305Z" level=info msg="StartContainer for \"fea9cf58420734365a43a1aed495d6c33bf5e1ea65d7426af4bcd332b951c3a3\""
Apr 30 03:39:20.357999 systemd[1]: Started cri-containerd-fea9cf58420734365a43a1aed495d6c33bf5e1ea65d7426af4bcd332b951c3a3.scope - libcontainer container fea9cf58420734365a43a1aed495d6c33bf5e1ea65d7426af4bcd332b951c3a3.
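[Editor's note] For scale, the pull statistics above are self-consistent: containerd reports 22002662 bytes read for the quay.io/tigera/operator:v1.36.7 image (compressed digest size 21998657) over 6.225685743s, which works out to roughly 3.4 MiB/s from the registry. A trivial check:

package main

import "fmt"

func main() {
	const bytesRead = 22002662.0 // "bytes read" reported by containerd above
	const seconds = 6.225685743  // pull duration from the "Pulled image" line
	fmt.Printf("%.2f MiB/s\n", bytesRead/seconds/(1<<20))
	// prints ≈ 3.37 MiB/s
}
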
Apr 30 03:39:20.380195 containerd[1508]: time="2025-04-30T03:39:20.380077032Z" level=info msg="StartContainer for \"fea9cf58420734365a43a1aed495d6c33bf5e1ea65d7426af4bcd332b951c3a3\" returns successfully"
Apr 30 03:39:21.165014 kubelet[2724]: I0430 03:39:21.164794 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-lkrnh" podStartSLOduration=1.937717413 podStartE2EDuration="8.164774976s" podCreationTimestamp="2025-04-30 03:39:13 +0000 UTC" firstStartedPulling="2025-04-30 03:39:14.081433238 +0000 UTC m=+6.185794297" lastFinishedPulling="2025-04-30 03:39:20.308490802 +0000 UTC m=+12.412851860" observedRunningTime="2025-04-30 03:39:21.16470833 +0000 UTC m=+13.269069399" watchObservedRunningTime="2025-04-30 03:39:21.164774976 +0000 UTC m=+13.269136285"
Apr 30 03:39:23.586689 systemd[1]: Created slice kubepods-besteffort-podccd4cf20_8ce3_4e99_9ad5_b987c991c3f5.slice - libcontainer container kubepods-besteffort-podccd4cf20_8ce3_4e99_9ad5_b987c991c3f5.slice.
Apr 30 03:39:23.664170 kubelet[2724]: I0430 03:39:23.663799 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qvq7\" (UniqueName: \"kubernetes.io/projected/ccd4cf20-8ce3-4e99-9ad5-b987c991c3f5-kube-api-access-2qvq7\") pod \"calico-typha-c77f87b8b-r5c62\" (UID: \"ccd4cf20-8ce3-4e99-9ad5-b987c991c3f5\") " pod="calico-system/calico-typha-c77f87b8b-r5c62"
Apr 30 03:39:23.664901 kubelet[2724]: I0430 03:39:23.664875 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ccd4cf20-8ce3-4e99-9ad5-b987c991c3f5-typha-certs\") pod \"calico-typha-c77f87b8b-r5c62\" (UID: \"ccd4cf20-8ce3-4e99-9ad5-b987c991c3f5\") " pod="calico-system/calico-typha-c77f87b8b-r5c62"
Apr 30 03:39:23.665056 kubelet[2724]: I0430 03:39:23.665046 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccd4cf20-8ce3-4e99-9ad5-b987c991c3f5-tigera-ca-bundle\") pod \"calico-typha-c77f87b8b-r5c62\" (UID: \"ccd4cf20-8ce3-4e99-9ad5-b987c991c3f5\") " pod="calico-system/calico-typha-c77f87b8b-r5c62"
Apr 30 03:39:23.691733 systemd[1]: Created slice kubepods-besteffort-pod646601a6_f327_4fe9_a753_d53f46443cfd.slice - libcontainer container kubepods-besteffort-pod646601a6_f327_4fe9_a753_d53f46443cfd.slice.
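[Editor's note] The tigera-operator startup-latency line above is internally consistent: podStartSLOduration excludes image-pull time, i.e. it equals podStartE2EDuration minus (lastFinishedPulling − firstStartedPulling). Using the monotonic m=+ offsets from the log: 8.164774976s − (12.412851860 − 6.185794297)s = 8.164774976s − 6.227057563s = 1.937717413s, matching the reported value. The same subtraction in Go:

package main

import "fmt"

func main() {
	// Monotonic offsets (m=+...) from the pod_startup_latency_tracker line above.
	e2e := 8.164774976                 // podStartE2EDuration, seconds
	pull := 12.412851860 - 6.185794297 // lastFinishedPulling - firstStartedPulling
	fmt.Printf("SLO duration: %.9fs\n", e2e-pull) // ≈ 1.937717413s
}
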
Apr 30 03:39:23.766021 kubelet[2724]: I0430 03:39:23.765983 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/646601a6-f327-4fe9-a753-d53f46443cfd-lib-modules\") pod \"calico-node-5p2ph\" (UID: \"646601a6-f327-4fe9-a753-d53f46443cfd\") " pod="calico-system/calico-node-5p2ph" Apr 30 03:39:23.766237 kubelet[2724]: I0430 03:39:23.766227 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/646601a6-f327-4fe9-a753-d53f46443cfd-xtables-lock\") pod \"calico-node-5p2ph\" (UID: \"646601a6-f327-4fe9-a753-d53f46443cfd\") " pod="calico-system/calico-node-5p2ph" Apr 30 03:39:23.766334 kubelet[2724]: I0430 03:39:23.766325 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/646601a6-f327-4fe9-a753-d53f46443cfd-var-lib-calico\") pod \"calico-node-5p2ph\" (UID: \"646601a6-f327-4fe9-a753-d53f46443cfd\") " pod="calico-system/calico-node-5p2ph" Apr 30 03:39:23.766480 kubelet[2724]: I0430 03:39:23.766426 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j94sk\" (UniqueName: \"kubernetes.io/projected/646601a6-f327-4fe9-a753-d53f46443cfd-kube-api-access-j94sk\") pod \"calico-node-5p2ph\" (UID: \"646601a6-f327-4fe9-a753-d53f46443cfd\") " pod="calico-system/calico-node-5p2ph" Apr 30 03:39:23.766480 kubelet[2724]: I0430 03:39:23.766457 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/646601a6-f327-4fe9-a753-d53f46443cfd-cni-bin-dir\") pod \"calico-node-5p2ph\" (UID: \"646601a6-f327-4fe9-a753-d53f46443cfd\") " pod="calico-system/calico-node-5p2ph" Apr 30 03:39:23.767261 kubelet[2724]: I0430 03:39:23.766868 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/646601a6-f327-4fe9-a753-d53f46443cfd-node-certs\") pod \"calico-node-5p2ph\" (UID: \"646601a6-f327-4fe9-a753-d53f46443cfd\") " pod="calico-system/calico-node-5p2ph" Apr 30 03:39:23.767261 kubelet[2724]: I0430 03:39:23.766904 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/646601a6-f327-4fe9-a753-d53f46443cfd-cni-net-dir\") pod \"calico-node-5p2ph\" (UID: \"646601a6-f327-4fe9-a753-d53f46443cfd\") " pod="calico-system/calico-node-5p2ph" Apr 30 03:39:23.767261 kubelet[2724]: I0430 03:39:23.766925 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/646601a6-f327-4fe9-a753-d53f46443cfd-flexvol-driver-host\") pod \"calico-node-5p2ph\" (UID: \"646601a6-f327-4fe9-a753-d53f46443cfd\") " pod="calico-system/calico-node-5p2ph" Apr 30 03:39:23.767261 kubelet[2724]: I0430 03:39:23.766947 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/646601a6-f327-4fe9-a753-d53f46443cfd-tigera-ca-bundle\") pod \"calico-node-5p2ph\" (UID: \"646601a6-f327-4fe9-a753-d53f46443cfd\") " pod="calico-system/calico-node-5p2ph" Apr 30 03:39:23.767261 kubelet[2724]: I0430 03:39:23.766961 2724 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/646601a6-f327-4fe9-a753-d53f46443cfd-var-run-calico\") pod \"calico-node-5p2ph\" (UID: \"646601a6-f327-4fe9-a753-d53f46443cfd\") " pod="calico-system/calico-node-5p2ph" Apr 30 03:39:23.767385 kubelet[2724]: I0430 03:39:23.766975 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/646601a6-f327-4fe9-a753-d53f46443cfd-cni-log-dir\") pod \"calico-node-5p2ph\" (UID: \"646601a6-f327-4fe9-a753-d53f46443cfd\") " pod="calico-system/calico-node-5p2ph" Apr 30 03:39:23.767385 kubelet[2724]: I0430 03:39:23.766988 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/646601a6-f327-4fe9-a753-d53f46443cfd-policysync\") pod \"calico-node-5p2ph\" (UID: \"646601a6-f327-4fe9-a753-d53f46443cfd\") " pod="calico-system/calico-node-5p2ph" Apr 30 03:39:23.820423 kubelet[2724]: E0430 03:39:23.820361 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-djr25" podUID="740e49cc-cf63-4a6e-96f3-c534b1ee3390" Apr 30 03:39:23.868210 kubelet[2724]: I0430 03:39:23.867353 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/740e49cc-cf63-4a6e-96f3-c534b1ee3390-socket-dir\") pod \"csi-node-driver-djr25\" (UID: \"740e49cc-cf63-4a6e-96f3-c534b1ee3390\") " pod="calico-system/csi-node-driver-djr25" Apr 30 03:39:23.868210 kubelet[2724]: I0430 03:39:23.867409 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/740e49cc-cf63-4a6e-96f3-c534b1ee3390-varrun\") pod \"csi-node-driver-djr25\" (UID: \"740e49cc-cf63-4a6e-96f3-c534b1ee3390\") " pod="calico-system/csi-node-driver-djr25" Apr 30 03:39:23.868210 kubelet[2724]: I0430 03:39:23.867461 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/740e49cc-cf63-4a6e-96f3-c534b1ee3390-kubelet-dir\") pod \"csi-node-driver-djr25\" (UID: \"740e49cc-cf63-4a6e-96f3-c534b1ee3390\") " pod="calico-system/csi-node-driver-djr25" Apr 30 03:39:23.868210 kubelet[2724]: I0430 03:39:23.867483 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99sg8\" (UniqueName: \"kubernetes.io/projected/740e49cc-cf63-4a6e-96f3-c534b1ee3390-kube-api-access-99sg8\") pod \"csi-node-driver-djr25\" (UID: \"740e49cc-cf63-4a6e-96f3-c534b1ee3390\") " pod="calico-system/csi-node-driver-djr25" Apr 30 03:39:23.868210 kubelet[2724]: I0430 03:39:23.867524 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/740e49cc-cf63-4a6e-96f3-c534b1ee3390-registration-dir\") pod \"csi-node-driver-djr25\" (UID: \"740e49cc-cf63-4a6e-96f3-c534b1ee3390\") " pod="calico-system/csi-node-driver-djr25" Apr 30 03:39:23.875952 kubelet[2724]: E0430 03:39:23.875411 2724 driver-call.go:262] Failed to unmarshal output for command: 
init, output: "", error: unexpected end of JSON input Apr 30 03:39:23.876979 kubelet[2724]: W0430 03:39:23.876926 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:39:23.876979 kubelet[2724]: E0430 03:39:23.876952 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:39:23.884907 kubelet[2724]: E0430 03:39:23.884869 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:39:23.884907 kubelet[2724]: W0430 03:39:23.884890 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:39:23.884907 kubelet[2724]: E0430 03:39:23.884908 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:39:23.909348 containerd[1508]: time="2025-04-30T03:39:23.909289900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c77f87b8b-r5c62,Uid:ccd4cf20-8ce3-4e99-9ad5-b987c991c3f5,Namespace:calico-system,Attempt:0,}" Apr 30 03:39:23.937819 containerd[1508]: time="2025-04-30T03:39:23.936899380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:39:23.937819 containerd[1508]: time="2025-04-30T03:39:23.937427020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:39:23.937819 containerd[1508]: time="2025-04-30T03:39:23.937440776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:23.937819 containerd[1508]: time="2025-04-30T03:39:23.937600348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:23.955109 systemd[1]: Started cri-containerd-f3c348ed55d97929e9277afd1ff47f2b0e5afa9906bc2c0e558ba585a4fc9933.scope - libcontainer container f3c348ed55d97929e9277afd1ff47f2b0e5afa9906bc2c0e558ba585a4fc9933. Apr 30 03:39:23.969002 kubelet[2724]: E0430 03:39:23.968953 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:39:23.969002 kubelet[2724]: W0430 03:39:23.968988 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:39:23.969145 kubelet[2724]: E0430 03:39:23.969012 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:39:23.969274 kubelet[2724]: E0430 03:39:23.969235 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:39:23.969274 kubelet[2724]: W0430 03:39:23.969252 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:39:23.969711 kubelet[2724]: E0430 03:39:23.969273 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:39:23.970166 kubelet[2724]: E0430 03:39:23.970142 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:39:23.970166 kubelet[2724]: W0430 03:39:23.970161 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:39:23.970238 kubelet[2724]: E0430 03:39:23.970176 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:39:23.970955 kubelet[2724]: E0430 03:39:23.970928 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:39:23.970955 kubelet[2724]: W0430 03:39:23.970944 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:39:23.971017 kubelet[2724]: E0430 03:39:23.970965 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:39:23.971576 kubelet[2724]: E0430 03:39:23.971555 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:39:23.971576 kubelet[2724]: W0430 03:39:23.971571 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:39:23.971795 kubelet[2724]: E0430 03:39:23.971774 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:39:23.972474 kubelet[2724]: E0430 03:39:23.972453 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:39:23.972474 kubelet[2724]: W0430 03:39:23.972468 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:39:23.972633 kubelet[2724]: E0430 03:39:23.972612 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:39:23.972717 kubelet[2724]: E0430 03:39:23.972650 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:39:23.972717 kubelet[2724]: W0430 03:39:23.972685 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:39:23.973165 kubelet[2724]: E0430 03:39:23.973143 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
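The burst above is one message triplet repeated on every plugin-probe pass: driver-call.go execs the driver binary with the single argument init, the binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is missing, so stdout comes back empty, and unmarshalling empty input is exactly what makes Go's encoding/json return "unexpected end of JSON input", which plugins.go then wraps. As a hedged sketch of the calling convention only (this is not Calico's actual uds node-agent driver, and the type below is illustrative), a minimal driver installed at that path would answer init with a JSON status object on stdout:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON shape the kubelet expects on stdout after
// every FlexVolume invocation; the type name is ours, the field names
// follow the FlexVolume convention.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
}

func main() {
	if len(os.Args) < 2 {
		// Even a refusal must be valid JSON, or the caller hits the
		// same unmarshal error seen in the log.
		reply(driverStatus{Status: "Failure", Message: "no command given"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// init reports whether the driver implements attach/detach.
		reply(driverStatus{Status: "Success",
			Capabilities: map[string]bool{"attach": false}})
	default:
		reply(driverStatus{Status: "Not supported"})
	}
}

Any valid status object, even {"status": "Not supported"}, would stop the unmarshal failures; the repetition in the log simply reflects the kubelet re-probing its exec directory while the binary stays absent.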
Apr 30 03:39:23.995247 containerd[1508]: time="2025-04-30T03:39:23.995016885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5p2ph,Uid:646601a6-f327-4fe9-a753-d53f46443cfd,Namespace:calico-system,Attempt:0,}" Apr 30 03:39:24.018919 containerd[1508]: time="2025-04-30T03:39:24.018747150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c77f87b8b-r5c62,Uid:ccd4cf20-8ce3-4e99-9ad5-b987c991c3f5,Namespace:calico-system,Attempt:0,} returns sandbox id \"f3c348ed55d97929e9277afd1ff47f2b0e5afa9906bc2c0e558ba585a4fc9933\"" Apr 30 03:39:24.021717 containerd[1508]: time="2025-04-30T03:39:24.021681158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" Apr 30 03:39:24.051961 containerd[1508]: time="2025-04-30T03:39:24.051617595Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:39:24.051961 containerd[1508]: time="2025-04-30T03:39:24.051679602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:39:24.051961 containerd[1508]: time="2025-04-30T03:39:24.051694270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:24.051961 containerd[1508]: time="2025-04-30T03:39:24.051781495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:24.071050 systemd[1]: Started cri-containerd-cfb8bd4c13f0038d392d02e3e5a142a984041c8bff8406f9f726ff3ac97bad78.scope - libcontainer container cfb8bd4c13f0038d392d02e3e5a142a984041c8bff8406f9f726ff3ac97bad78.
Apr 30 03:39:24.100511 containerd[1508]: time="2025-04-30T03:39:24.100459881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5p2ph,Uid:646601a6-f327-4fe9-a753-d53f46443cfd,Namespace:calico-system,Attempt:0,} returns sandbox id \"cfb8bd4c13f0038d392d02e3e5a142a984041c8bff8406f9f726ff3ac97bad78\"" Apr 30 03:39:26.020489 containerd[1508]: time="2025-04-30T03:39:26.020440175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:26.021717 containerd[1508]: time="2025-04-30T03:39:26.021683581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" Apr 30 03:39:26.022968 containerd[1508]: time="2025-04-30T03:39:26.022929921Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:26.025075 containerd[1508]: time="2025-04-30T03:39:26.025058911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:26.025625 containerd[1508]: time="2025-04-30T03:39:26.025504528Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.003689466s" Apr 30 03:39:26.025625 containerd[1508]: time="2025-04-30T03:39:26.025532942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" Apr 30 03:39:26.035512 containerd[1508]: time="2025-04-30T03:39:26.034933802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" Apr 30 03:39:26.049172 containerd[1508]: time="2025-04-30T03:39:26.049141995Z" level=info msg="CreateContainer within sandbox \"f3c348ed55d97929e9277afd1ff47f2b0e5afa9906bc2c0e558ba585a4fc9933\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 30 03:39:26.063622 containerd[1508]: time="2025-04-30T03:39:26.063585045Z" level=info msg="CreateContainer within sandbox \"f3c348ed55d97929e9277afd1ff47f2b0e5afa9906bc2c0e558ba585a4fc9933\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"27aa0ad66793a31e79141457815d01df9078af4deabd61c22acfd6e148427c20\"" Apr 30 03:39:26.067314 containerd[1508]: time="2025-04-30T03:39:26.067112655Z" level=info msg="StartContainer for \"27aa0ad66793a31e79141457815d01df9078af4deabd61c22acfd6e148427c20\"" Apr 30 03:39:26.082615 kubelet[2724]: E0430 03:39:26.082066 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-djr25" podUID="740e49cc-cf63-4a6e-96f3-c534b1ee3390" Apr 30 03:39:26.105999 systemd[1]: Started cri-containerd-27aa0ad66793a31e79141457815d01df9078af4deabd61c22acfd6e148427c20.scope - libcontainer container 27aa0ad66793a31e79141457815d01df9078af4deabd61c22acfd6e148427c20. 
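For a sense of scale when reading these pull records: the typha pull above reports bytes read=30426870 in its stop-pulling event and a wall time of 2.003689466s in its Pulled event. Assuming the two events describe the same pull (containerd logs them separately, so treating them as one transfer is an approximation), that works out to roughly 14.5 MiB/s of compressed registry traffic:

package main

import "fmt"

func main() {
	// Figures copied from the typha pull events above.
	const bytesRead = 30426870  // "stop pulling image ...: bytes read"
	const seconds = 2.003689466 // "Pulled image ... in 2.003689466s"
	fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1<<20))
}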
Apr 30 03:39:26.142986 containerd[1508]: time="2025-04-30T03:39:26.142909706Z" level=info msg="StartContainer for \"27aa0ad66793a31e79141457815d01df9078af4deabd61c22acfd6e148427c20\" returns successfully" Apr 30 03:39:26.222222 kubelet[2724]: I0430 03:39:26.221894 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-c77f87b8b-r5c62" podStartSLOduration=1.208274122 podStartE2EDuration="3.221873491s" podCreationTimestamp="2025-04-30 03:39:23 +0000 UTC" firstStartedPulling="2025-04-30 03:39:24.02102305 +0000 UTC m=+16.125384109" lastFinishedPulling="2025-04-30 03:39:26.03462242 +0000 UTC m=+18.138983478" observedRunningTime="2025-04-30 03:39:26.219881452 +0000 UTC m=+18.324242531" watchObservedRunningTime="2025-04-30 03:39:26.221873491 +0000 UTC m=+18.326234560" Apr 30 03:39:26.276531 kubelet[2724]: E0430 03:39:26.276355 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:39:26.276531 kubelet[2724]: W0430 03:39:26.276431 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:39:26.276737 kubelet[2724]: E0430 03:39:26.276666 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
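The pod_startup_latency_tracker entry above carries its own arithmetic: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), since pod startup SLO measurements exclude time spent pulling images. A sketch reproducing it from the monotonic m=+ offsets quoted in the entry:

package main

import "fmt"

func main() {
	// Monotonic "m=+" offsets quoted in the tracker entry above.
	firstStartedPulling := 16.125384109
	lastFinishedPulling := 18.138983478
	e2e := 3.221873491 // podStartE2EDuration (running - creation)

	pull := lastFinishedPulling - firstStartedPulling
	slo := e2e - pull // SLO duration excludes time spent pulling images

	fmt.Printf("pull window %.9fs, SLO duration %.9fs\n", pull, slo)
	// Prints: pull window 2.013599369s, SLO duration 1.208274122s,
	// matching the podStartSLOduration the kubelet logged.
}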
Apr 30 03:39:27.188248 kubelet[2724]: I0430 03:39:27.188183 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:39:27.287755 kubelet[2724]: E0430 03:39:27.287689 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:39:27.288189 kubelet[2724]: W0430 03:39:27.287873 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:39:27.288189 kubelet[2724]: E0430 03:39:27.287909 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:39:27.569306 containerd[1508]: time="2025-04-30T03:39:27.569264223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:27.570022 containerd[1508]: time="2025-04-30T03:39:27.569983923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" Apr 30 03:39:27.571645 containerd[1508]: time="2025-04-30T03:39:27.570951284Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:27.572597 containerd[1508]: time="2025-04-30T03:39:27.572579845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:27.573443 containerd[1508]: time="2025-04-30T03:39:27.573424044Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.538465404s" Apr 30 03:39:27.573510 containerd[1508]: time="2025-04-30T03:39:27.573498906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" Apr 30 03:39:27.609279 containerd[1508]: time="2025-04-30T03:39:27.608962474Z" level=info msg="CreateContainer within sandbox \"cfb8bd4c13f0038d392d02e3e5a142a984041c8bff8406f9f726ff3ac97bad78\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 03:39:27.624633 containerd[1508]: time="2025-04-30T03:39:27.624537306Z" level=info msg="CreateContainer within sandbox
\"cfb8bd4c13f0038d392d02e3e5a142a984041c8bff8406f9f726ff3ac97bad78\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"56a50bd2e7c162b013829269ddd6f832c03354b87b43bfe55642fc02f54eeb78\"" Apr 30 03:39:27.628307 containerd[1508]: time="2025-04-30T03:39:27.628278209Z" level=info msg="StartContainer for \"56a50bd2e7c162b013829269ddd6f832c03354b87b43bfe55642fc02f54eeb78\"" Apr 30 03:39:27.665112 systemd[1]: Started cri-containerd-56a50bd2e7c162b013829269ddd6f832c03354b87b43bfe55642fc02f54eeb78.scope - libcontainer container 56a50bd2e7c162b013829269ddd6f832c03354b87b43bfe55642fc02f54eeb78. Apr 30 03:39:27.706563 containerd[1508]: time="2025-04-30T03:39:27.705921523Z" level=info msg="StartContainer for \"56a50bd2e7c162b013829269ddd6f832c03354b87b43bfe55642fc02f54eeb78\" returns successfully" Apr 30 03:39:27.723637 systemd[1]: cri-containerd-56a50bd2e7c162b013829269ddd6f832c03354b87b43bfe55642fc02f54eeb78.scope: Deactivated successfully. Apr 30 03:39:27.753958 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56a50bd2e7c162b013829269ddd6f832c03354b87b43bfe55642fc02f54eeb78-rootfs.mount: Deactivated successfully. Apr 30 03:39:27.808399 containerd[1508]: time="2025-04-30T03:39:27.808345936Z" level=info msg="shim disconnected" id=56a50bd2e7c162b013829269ddd6f832c03354b87b43bfe55642fc02f54eeb78 namespace=k8s.io Apr 30 03:39:27.808746 containerd[1508]: time="2025-04-30T03:39:27.808616442Z" level=warning msg="cleaning up after shim disconnected" id=56a50bd2e7c162b013829269ddd6f832c03354b87b43bfe55642fc02f54eeb78 namespace=k8s.io Apr 30 03:39:27.808746 containerd[1508]: time="2025-04-30T03:39:27.808626120Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:39:28.069963 kubelet[2724]: E0430 03:39:28.068842 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-djr25" podUID="740e49cc-cf63-4a6e-96f3-c534b1ee3390" Apr 30 03:39:28.194228 containerd[1508]: time="2025-04-30T03:39:28.194168340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 03:39:30.068161 kubelet[2724]: E0430 03:39:30.068086 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-djr25" podUID="740e49cc-cf63-4a6e-96f3-c534b1ee3390" Apr 30 03:39:31.220977 containerd[1508]: time="2025-04-30T03:39:31.220895755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:31.222456 containerd[1508]: time="2025-04-30T03:39:31.222399209Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" Apr 30 03:39:31.224835 containerd[1508]: time="2025-04-30T03:39:31.223719693Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:31.226780 containerd[1508]: time="2025-04-30T03:39:31.226734935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Apr 30 03:39:31.227483 containerd[1508]: time="2025-04-30T03:39:31.227457666Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 3.033241315s" Apr 30 03:39:31.227578 containerd[1508]: time="2025-04-30T03:39:31.227561715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" Apr 30 03:39:31.229784 containerd[1508]: time="2025-04-30T03:39:31.229753803Z" level=info msg="CreateContainer within sandbox \"cfb8bd4c13f0038d392d02e3e5a142a984041c8bff8406f9f726ff3ac97bad78\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 03:39:31.250566 containerd[1508]: time="2025-04-30T03:39:31.250505451Z" level=info msg="CreateContainer within sandbox \"cfb8bd4c13f0038d392d02e3e5a142a984041c8bff8406f9f726ff3ac97bad78\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2347eb2890a29714b6127b1d74bc107cbfc8806e4f3073dc9013274b4ac16a2c\"" Apr 30 03:39:31.251311 containerd[1508]: time="2025-04-30T03:39:31.251291644Z" level=info msg="StartContainer for \"2347eb2890a29714b6127b1d74bc107cbfc8806e4f3073dc9013274b4ac16a2c\"" Apr 30 03:39:31.348025 systemd[1]: Started cri-containerd-2347eb2890a29714b6127b1d74bc107cbfc8806e4f3073dc9013274b4ac16a2c.scope - libcontainer container 2347eb2890a29714b6127b1d74bc107cbfc8806e4f3073dc9013274b4ac16a2c. Apr 30 03:39:31.375478 containerd[1508]: time="2025-04-30T03:39:31.375037028Z" level=info msg="StartContainer for \"2347eb2890a29714b6127b1d74bc107cbfc8806e4f3073dc9013274b4ac16a2c\" returns successfully" Apr 30 03:39:31.856454 systemd[1]: cri-containerd-2347eb2890a29714b6127b1d74bc107cbfc8806e4f3073dc9013274b4ac16a2c.scope: Deactivated successfully. Apr 30 03:39:31.890166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2347eb2890a29714b6127b1d74bc107cbfc8806e4f3073dc9013274b4ac16a2c-rootfs.mount: Deactivated successfully. 
Apr 30 03:39:31.911217 kubelet[2724]: I0430 03:39:31.910656 2724 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Apr 30 03:39:31.914604 containerd[1508]: time="2025-04-30T03:39:31.914143157Z" level=info msg="shim disconnected" id=2347eb2890a29714b6127b1d74bc107cbfc8806e4f3073dc9013274b4ac16a2c namespace=k8s.io Apr 30 03:39:31.914604 containerd[1508]: time="2025-04-30T03:39:31.914209904Z" level=warning msg="cleaning up after shim disconnected" id=2347eb2890a29714b6127b1d74bc107cbfc8806e4f3073dc9013274b4ac16a2c namespace=k8s.io Apr 30 03:39:31.914604 containerd[1508]: time="2025-04-30T03:39:31.914217379Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:39:31.949718 kubelet[2724]: I0430 03:39:31.948672 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/007e272b-6dc6-4988-a54b-c1109f76b258-config-volume\") pod \"coredns-6f6b679f8f-pwlb4\" (UID: \"007e272b-6dc6-4988-a54b-c1109f76b258\") " pod="kube-system/coredns-6f6b679f8f-pwlb4" Apr 30 03:39:31.949718 kubelet[2724]: I0430 03:39:31.948756 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft5rc\" (UniqueName: \"kubernetes.io/projected/007e272b-6dc6-4988-a54b-c1109f76b258-kube-api-access-ft5rc\") pod \"coredns-6f6b679f8f-pwlb4\" (UID: \"007e272b-6dc6-4988-a54b-c1109f76b258\") " pod="kube-system/coredns-6f6b679f8f-pwlb4" Apr 30 03:39:31.949718 kubelet[2724]: I0430 03:39:31.948774 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5cbd4a0f-2d60-4cb4-95f0-be6674a91f19-tigera-ca-bundle\") pod \"calico-kube-controllers-5c4fd78ccc-2pshn\" (UID: \"5cbd4a0f-2d60-4cb4-95f0-be6674a91f19\") " pod="calico-system/calico-kube-controllers-5c4fd78ccc-2pshn" Apr 30 03:39:31.949718 kubelet[2724]: I0430 03:39:31.948789 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbcfm\" (UniqueName: \"kubernetes.io/projected/5cbd4a0f-2d60-4cb4-95f0-be6674a91f19-kube-api-access-qbcfm\") pod \"calico-kube-controllers-5c4fd78ccc-2pshn\" (UID: \"5cbd4a0f-2d60-4cb4-95f0-be6674a91f19\") " pod="calico-system/calico-kube-controllers-5c4fd78ccc-2pshn" Apr 30 03:39:31.959228 systemd[1]: Created slice kubepods-burstable-pod007e272b_6dc6_4988_a54b_c1109f76b258.slice - libcontainer container kubepods-burstable-pod007e272b_6dc6_4988_a54b_c1109f76b258.slice. Apr 30 03:39:31.970145 systemd[1]: Created slice kubepods-besteffort-pod5cbd4a0f_2d60_4cb4_95f0_be6674a91f19.slice - libcontainer container kubepods-besteffort-pod5cbd4a0f_2d60_4cb4_95f0_be6674a91f19.slice. Apr 30 03:39:31.979178 systemd[1]: Created slice kubepods-burstable-pod76c02922_bd45_4555_87c6_4dd092116a95.slice - libcontainer container kubepods-burstable-pod76c02922_bd45_4555_87c6_4dd092116a95.slice. Apr 30 03:39:31.986520 systemd[1]: Created slice kubepods-besteffort-podda5d34c4_8951_46b2_a56c_5044ac1ce046.slice - libcontainer container kubepods-besteffort-podda5d34c4_8951_46b2_a56c_5044ac1ce046.slice. Apr 30 03:39:31.992421 systemd[1]: Created slice kubepods-besteffort-pod919a621f_1bcc_46da_9865_aeb53a85cd70.slice - libcontainer container kubepods-besteffort-pod919a621f_1bcc_46da_9865_aeb53a85cd70.slice. 
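The reconciler_common lines above show the unique volume names kubelet tracks while verifying controller attachment: for configmap and projected volumes the UniqueName visibly joins the plugin name, the pod UID, and the volume name. A small sketch of that naming scheme, using an illustrative helper rather than kubelet's actual API:

```go
// Hedged sketch of the UniqueName values in the reconciler lines above:
// <plugin>/<podUID>-<volumeName>. The helper is illustrative only.
package main

import "fmt"

func uniqueVolumeName(plugin, podUID, volume string) string {
	return fmt.Sprintf("%s/%s-%s", plugin, podUID, volume)
}

func main() {
	// Reproduces "kubernetes.io/configmap/007e272b-...-config-volume" from the log.
	fmt.Println(uniqueVolumeName(
		"kubernetes.io/configmap",
		"007e272b-6dc6-4988-a54b-c1109f76b258",
		"config-volume",
	))
}
```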
Apr 30 03:39:32.050243 kubelet[2724]: I0430 03:39:32.049228 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h95ck\" (UniqueName: \"kubernetes.io/projected/da5d34c4-8951-46b2-a56c-5044ac1ce046-kube-api-access-h95ck\") pod \"calico-apiserver-7665d48578-fgcbp\" (UID: \"da5d34c4-8951-46b2-a56c-5044ac1ce046\") " pod="calico-apiserver/calico-apiserver-7665d48578-fgcbp" Apr 30 03:39:32.050243 kubelet[2724]: I0430 03:39:32.049304 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/919a621f-1bcc-46da-9865-aeb53a85cd70-calico-apiserver-certs\") pod \"calico-apiserver-7665d48578-kr6rd\" (UID: \"919a621f-1bcc-46da-9865-aeb53a85cd70\") " pod="calico-apiserver/calico-apiserver-7665d48578-kr6rd" Apr 30 03:39:32.050243 kubelet[2724]: I0430 03:39:32.049329 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggf5k\" (UniqueName: \"kubernetes.io/projected/919a621f-1bcc-46da-9865-aeb53a85cd70-kube-api-access-ggf5k\") pod \"calico-apiserver-7665d48578-kr6rd\" (UID: \"919a621f-1bcc-46da-9865-aeb53a85cd70\") " pod="calico-apiserver/calico-apiserver-7665d48578-kr6rd" Apr 30 03:39:32.050243 kubelet[2724]: I0430 03:39:32.049369 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpz6f\" (UniqueName: \"kubernetes.io/projected/76c02922-bd45-4555-87c6-4dd092116a95-kube-api-access-hpz6f\") pod \"coredns-6f6b679f8f-9gr6g\" (UID: \"76c02922-bd45-4555-87c6-4dd092116a95\") " pod="kube-system/coredns-6f6b679f8f-9gr6g" Apr 30 03:39:32.050243 kubelet[2724]: I0430 03:39:32.049442 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76c02922-bd45-4555-87c6-4dd092116a95-config-volume\") pod \"coredns-6f6b679f8f-9gr6g\" (UID: \"76c02922-bd45-4555-87c6-4dd092116a95\") " pod="kube-system/coredns-6f6b679f8f-9gr6g" Apr 30 03:39:32.051315 kubelet[2724]: I0430 03:39:32.049469 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/da5d34c4-8951-46b2-a56c-5044ac1ce046-calico-apiserver-certs\") pod \"calico-apiserver-7665d48578-fgcbp\" (UID: \"da5d34c4-8951-46b2-a56c-5044ac1ce046\") " pod="calico-apiserver/calico-apiserver-7665d48578-fgcbp" Apr 30 03:39:32.079158 systemd[1]: Created slice kubepods-besteffort-pod740e49cc_cf63_4a6e_96f3_c534b1ee3390.slice - libcontainer container kubepods-besteffort-pod740e49cc_cf63_4a6e_96f3_c534b1ee3390.slice. 
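The "Created slice" entries show the systemd cgroup driver's naming scheme: the pod's QoS class selects the kubepods-burstable or kubepods-besteffort parent, and the pod UID is escaped for systemd by replacing dashes with underscores. A minimal sketch of that mapping, assuming only the two QoS classes that appear in this log:

```go
// Sketch of the cgroup slice naming visible in the "Created slice" lines:
// kubepods-<qosClass>-pod<uid-with-dashes-as-underscores>.slice.
// The helper is illustrative, not kubelet's real code.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qosClass, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	// Matches "kubepods-besteffort-pod740e49cc_cf63_4a6e_96f3_c534b1ee3390.slice"
	// created for the csi-node-driver pod above.
	fmt.Println(podSliceName("besteffort", "740e49cc-cf63-4a6e-96f3-c534b1ee3390"))
}
```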
Apr 30 03:39:32.082315 containerd[1508]: time="2025-04-30T03:39:32.082261826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-djr25,Uid:740e49cc-cf63-4a6e-96f3-c534b1ee3390,Namespace:calico-system,Attempt:0,}" Apr 30 03:39:32.214305 containerd[1508]: time="2025-04-30T03:39:32.211589540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 03:39:32.295510 containerd[1508]: time="2025-04-30T03:39:32.295479217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7665d48578-kr6rd,Uid:919a621f-1bcc-46da-9865-aeb53a85cd70,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:39:32.296901 containerd[1508]: time="2025-04-30T03:39:32.296389709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pwlb4,Uid:007e272b-6dc6-4988-a54b-c1109f76b258,Namespace:kube-system,Attempt:0,}" Apr 30 03:39:32.297224 containerd[1508]: time="2025-04-30T03:39:32.296423363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c4fd78ccc-2pshn,Uid:5cbd4a0f-2d60-4cb4-95f0-be6674a91f19,Namespace:calico-system,Attempt:0,}" Apr 30 03:39:32.297379 containerd[1508]: time="2025-04-30T03:39:32.296450906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9gr6g,Uid:76c02922-bd45-4555-87c6-4dd092116a95,Namespace:kube-system,Attempt:0,}" Apr 30 03:39:32.297696 containerd[1508]: time="2025-04-30T03:39:32.296476835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7665d48578-fgcbp,Uid:da5d34c4-8951-46b2-a56c-5044ac1ce046,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:39:32.359937 containerd[1508]: time="2025-04-30T03:39:32.359859498Z" level=error msg="Failed to destroy network for sandbox \"c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:32.368927 containerd[1508]: time="2025-04-30T03:39:32.366940111Z" level=error msg="encountered an error cleaning up failed sandbox \"c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:32.368927 containerd[1508]: time="2025-04-30T03:39:32.367073206Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-djr25,Uid:740e49cc-cf63-4a6e-96f3-c534b1ee3390,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:32.373309 kubelet[2724]: E0430 03:39:32.369771 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:32.373532 kubelet[2724]: E0430 03:39:32.373509 2724 kuberuntime_sandbox.go:72] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-djr25" Apr 30 03:39:32.373710 kubelet[2724]: E0430 03:39:32.373694 2724 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-djr25" Apr 30 03:39:32.374289 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d-shm.mount: Deactivated successfully. Apr 30 03:39:32.379244 kubelet[2724]: E0430 03:39:32.376872 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-djr25_calico-system(740e49cc-cf63-4a6e-96f3-c534b1ee3390)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-djr25_calico-system(740e49cc-cf63-4a6e-96f3-c534b1ee3390)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-djr25" podUID="740e49cc-cf63-4a6e-96f3-c534b1ee3390" Apr 30 03:39:32.526335 containerd[1508]: time="2025-04-30T03:39:32.526199797Z" level=error msg="Failed to destroy network for sandbox \"c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:32.526668 containerd[1508]: time="2025-04-30T03:39:32.526485825Z" level=error msg="encountered an error cleaning up failed sandbox \"c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:32.526668 containerd[1508]: time="2025-04-30T03:39:32.526535700Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pwlb4,Uid:007e272b-6dc6-4988-a54b-c1109f76b258,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:32.526940 kubelet[2724]: E0430 03:39:32.526762 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:32.526940 kubelet[2724]: E0430 03:39:32.526885 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-pwlb4" Apr 30 03:39:32.526940 kubelet[2724]: E0430 03:39:32.526907 2724 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-pwlb4" Apr 30 03:39:32.527042 kubelet[2724]: E0430 03:39:32.526951 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-pwlb4_kube-system(007e272b-6dc6-4988-a54b-c1109f76b258)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-pwlb4_kube-system(007e272b-6dc6-4988-a54b-c1109f76b258)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-pwlb4" podUID="007e272b-6dc6-4988-a54b-c1109f76b258" Apr 30 03:39:32.531392 containerd[1508]: time="2025-04-30T03:39:32.531340872Z" level=error msg="Failed to destroy network for sandbox \"9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:32.531622 containerd[1508]: time="2025-04-30T03:39:32.531606058Z" level=error msg="encountered an error cleaning up failed sandbox \"9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:32.531671 containerd[1508]: time="2025-04-30T03:39:32.531643060Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9gr6g,Uid:76c02922-bd45-4555-87c6-4dd092116a95,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:32.531927 kubelet[2724]: E0430 03:39:32.531878 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:32.532036 kubelet[2724]: E0430 03:39:32.531941 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-9gr6g" Apr 30 03:39:32.532036 kubelet[2724]: E0430 03:39:32.531960 2724 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-9gr6g" Apr 30 03:39:32.532036 kubelet[2724]: E0430 03:39:32.532005 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-9gr6g_kube-system(76c02922-bd45-4555-87c6-4dd092116a95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-9gr6g_kube-system(76c02922-bd45-4555-87c6-4dd092116a95)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-9gr6g" podUID="76c02922-bd45-4555-87c6-4dd092116a95" Apr 30 03:39:32.539061 containerd[1508]: time="2025-04-30T03:39:32.538889660Z" level=error msg="Failed to destroy network for sandbox \"b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:32.539278 containerd[1508]: time="2025-04-30T03:39:32.539207808Z" level=error msg="encountered an error cleaning up failed sandbox \"b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:32.539278 containerd[1508]: time="2025-04-30T03:39:32.539258736Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c4fd78ccc-2pshn,Uid:5cbd4a0f-2d60-4cb4-95f0-be6674a91f19,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:32.539496 kubelet[2724]: E0430 03:39:32.539416 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:32.539496 kubelet[2724]: E0430 03:39:32.539465 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c4fd78ccc-2pshn" Apr 30 03:39:32.539496 kubelet[2724]: E0430 03:39:32.539481 2724 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c4fd78ccc-2pshn" Apr 30 03:39:32.540312 kubelet[2724]: E0430 03:39:32.539517 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5c4fd78ccc-2pshn_calico-system(5cbd4a0f-2d60-4cb4-95f0-be6674a91f19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5c4fd78ccc-2pshn_calico-system(5cbd4a0f-2d60-4cb4-95f0-be6674a91f19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c4fd78ccc-2pshn" podUID="5cbd4a0f-2d60-4cb4-95f0-be6674a91f19" Apr 30 03:39:32.548223 containerd[1508]: time="2025-04-30T03:39:32.548104946Z" level=error msg="Failed to destroy network for sandbox \"ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:32.549783 containerd[1508]: time="2025-04-30T03:39:32.549759521Z" level=error msg="encountered an error cleaning up failed sandbox \"ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:32.550015 containerd[1508]: time="2025-04-30T03:39:32.549875272Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7665d48578-kr6rd,Uid:919a621f-1bcc-46da-9865-aeb53a85cd70,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Apr 30 03:39:32.551051 kubelet[2724]: E0430 03:39:32.551022 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:32.551181 kubelet[2724]: E0430 03:39:32.551139 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7665d48578-kr6rd" Apr 30 03:39:32.551181 kubelet[2724]: E0430 03:39:32.551158 2724 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7665d48578-kr6rd" Apr 30 03:39:32.552085 kubelet[2724]: E0430 03:39:32.551264 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7665d48578-kr6rd_calico-apiserver(919a621f-1bcc-46da-9865-aeb53a85cd70)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7665d48578-kr6rd_calico-apiserver(919a621f-1bcc-46da-9865-aeb53a85cd70)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7665d48578-kr6rd" podUID="919a621f-1bcc-46da-9865-aeb53a85cd70" Apr 30 03:39:32.558801 containerd[1508]: time="2025-04-30T03:39:32.558741580Z" level=error msg="Failed to destroy network for sandbox \"0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:32.559114 containerd[1508]: time="2025-04-30T03:39:32.559084356Z" level=error msg="encountered an error cleaning up failed sandbox \"0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:32.559155 containerd[1508]: time="2025-04-30T03:39:32.559139622Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7665d48578-fgcbp,Uid:da5d34c4-8951-46b2-a56c-5044ac1ce046,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:32.559437 kubelet[2724]: E0430 03:39:32.559397 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:32.559489 kubelet[2724]: E0430 03:39:32.559455 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7665d48578-fgcbp" Apr 30 03:39:32.559489 kubelet[2724]: E0430 03:39:32.559472 2724 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7665d48578-fgcbp" Apr 30 03:39:32.559556 kubelet[2724]: E0430 03:39:32.559536 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7665d48578-fgcbp_calico-apiserver(da5d34c4-8951-46b2-a56c-5044ac1ce046)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7665d48578-fgcbp_calico-apiserver(da5d34c4-8951-46b2-a56c-5044ac1ce046)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7665d48578-fgcbp" podUID="da5d34c4-8951-46b2-a56c-5044ac1ce046" Apr 30 03:39:33.214210 kubelet[2724]: I0430 03:39:33.214148 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" Apr 30 03:39:33.238793 containerd[1508]: time="2025-04-30T03:39:33.238317759Z" level=info msg="StopPodSandbox for \"c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d\"" Apr 30 03:39:33.242120 kubelet[2724]: I0430 03:39:33.241270 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" Apr 30 03:39:33.243461 containerd[1508]: time="2025-04-30T03:39:33.242633240Z" level=info msg="StopPodSandbox for \"ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb\"" Apr 30 03:39:33.243461 containerd[1508]: time="2025-04-30T03:39:33.243032766Z" level=info msg="Ensure that sandbox c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d in task-service has been cleanup successfully" Apr 30 03:39:33.243461 containerd[1508]: time="2025-04-30T03:39:33.243211387Z" 
level=info msg="Ensure that sandbox ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb in task-service has been cleanup successfully" Apr 30 03:39:33.254526 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440-shm.mount: Deactivated successfully. Apr 30 03:39:33.254879 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb-shm.mount: Deactivated successfully. Apr 30 03:39:33.255133 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0-shm.mount: Deactivated successfully. Apr 30 03:39:33.263340 kubelet[2724]: I0430 03:39:33.263230 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" Apr 30 03:39:33.267021 containerd[1508]: time="2025-04-30T03:39:33.266440896Z" level=info msg="StopPodSandbox for \"c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0\"" Apr 30 03:39:33.269066 containerd[1508]: time="2025-04-30T03:39:33.268905864Z" level=info msg="Ensure that sandbox c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0 in task-service has been cleanup successfully" Apr 30 03:39:33.274576 kubelet[2724]: I0430 03:39:33.273985 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" Apr 30 03:39:33.284510 containerd[1508]: time="2025-04-30T03:39:33.284460811Z" level=info msg="StopPodSandbox for \"0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f\"" Apr 30 03:39:33.284950 containerd[1508]: time="2025-04-30T03:39:33.284919419Z" level=info msg="Ensure that sandbox 0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f in task-service has been cleanup successfully" Apr 30 03:39:33.300667 kubelet[2724]: I0430 03:39:33.300627 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" Apr 30 03:39:33.303836 containerd[1508]: time="2025-04-30T03:39:33.303785004Z" level=info msg="StopPodSandbox for \"9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb\"" Apr 30 03:39:33.304184 containerd[1508]: time="2025-04-30T03:39:33.304067325Z" level=info msg="Ensure that sandbox 9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb in task-service has been cleanup successfully" Apr 30 03:39:33.307675 kubelet[2724]: I0430 03:39:33.306393 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" Apr 30 03:39:33.307746 containerd[1508]: time="2025-04-30T03:39:33.307147911Z" level=info msg="StopPodSandbox for \"b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440\"" Apr 30 03:39:33.310028 containerd[1508]: time="2025-04-30T03:39:33.309985213Z" level=info msg="Ensure that sandbox b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440 in task-service has been cleanup successfully" Apr 30 03:39:33.371125 containerd[1508]: time="2025-04-30T03:39:33.371072663Z" level=error msg="StopPodSandbox for \"c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d\" failed" error="failed to destroy network for sandbox \"c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:33.371530 kubelet[2724]: E0430 03:39:33.371501 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" Apr 30 03:39:33.371687 kubelet[2724]: E0430 03:39:33.371629 2724 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d"} Apr 30 03:39:33.371839 kubelet[2724]: E0430 03:39:33.371765 2724 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"740e49cc-cf63-4a6e-96f3-c534b1ee3390\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:39:33.371839 kubelet[2724]: E0430 03:39:33.371791 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"740e49cc-cf63-4a6e-96f3-c534b1ee3390\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-djr25" podUID="740e49cc-cf63-4a6e-96f3-c534b1ee3390" Apr 30 03:39:33.378149 containerd[1508]: time="2025-04-30T03:39:33.378107009Z" level=error msg="StopPodSandbox for \"c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0\" failed" error="failed to destroy network for sandbox \"c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:33.378552 kubelet[2724]: E0430 03:39:33.378512 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" Apr 30 03:39:33.378658 kubelet[2724]: E0430 03:39:33.378642 2724 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0"} Apr 30 03:39:33.378788 kubelet[2724]: E0430 03:39:33.378729 2724 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"007e272b-6dc6-4988-a54b-c1109f76b258\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:39:33.378788 kubelet[2724]: E0430 03:39:33.378753 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"007e272b-6dc6-4988-a54b-c1109f76b258\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-pwlb4" podUID="007e272b-6dc6-4988-a54b-c1109f76b258" Apr 30 03:39:33.381079 containerd[1508]: time="2025-04-30T03:39:33.381042940Z" level=error msg="StopPodSandbox for \"ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb\" failed" error="failed to destroy network for sandbox \"ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:33.381271 kubelet[2724]: E0430 03:39:33.381251 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" Apr 30 03:39:33.381349 kubelet[2724]: E0430 03:39:33.381339 2724 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb"} Apr 30 03:39:33.381414 kubelet[2724]: E0430 03:39:33.381402 2724 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"919a621f-1bcc-46da-9865-aeb53a85cd70\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:39:33.381950 kubelet[2724]: E0430 03:39:33.381503 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"919a621f-1bcc-46da-9865-aeb53a85cd70\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7665d48578-kr6rd" podUID="919a621f-1bcc-46da-9865-aeb53a85cd70" Apr 30 03:39:33.390625 containerd[1508]: time="2025-04-30T03:39:33.390566711Z" level=error 
msg="StopPodSandbox for \"0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f\" failed" error="failed to destroy network for sandbox \"0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:33.391137 kubelet[2724]: E0430 03:39:33.390940 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" Apr 30 03:39:33.391137 kubelet[2724]: E0430 03:39:33.391024 2724 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f"} Apr 30 03:39:33.391137 kubelet[2724]: E0430 03:39:33.391078 2724 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da5d34c4-8951-46b2-a56c-5044ac1ce046\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:39:33.391137 kubelet[2724]: E0430 03:39:33.391102 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da5d34c4-8951-46b2-a56c-5044ac1ce046\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7665d48578-fgcbp" podUID="da5d34c4-8951-46b2-a56c-5044ac1ce046" Apr 30 03:39:33.395757 containerd[1508]: time="2025-04-30T03:39:33.395715347Z" level=error msg="StopPodSandbox for \"9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb\" failed" error="failed to destroy network for sandbox \"9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:33.395887 containerd[1508]: time="2025-04-30T03:39:33.395842640Z" level=error msg="StopPodSandbox for \"b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440\" failed" error="failed to destroy network for sandbox \"b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:39:33.396165 kubelet[2724]: E0430 03:39:33.395990 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" Apr 30 03:39:33.396165 kubelet[2724]: E0430 03:39:33.396028 2724 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" Apr 30 03:39:33.396165 kubelet[2724]: E0430 03:39:33.396068 2724 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb"} Apr 30 03:39:33.396165 kubelet[2724]: E0430 03:39:33.396085 2724 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440"} Apr 30 03:39:33.396165 kubelet[2724]: E0430 03:39:33.396095 2724 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"76c02922-bd45-4555-87c6-4dd092116a95\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:39:33.396326 kubelet[2724]: E0430 03:39:33.396118 2724 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5cbd4a0f-2d60-4cb4-95f0-be6674a91f19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:39:33.396326 kubelet[2724]: E0430 03:39:33.396129 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"76c02922-bd45-4555-87c6-4dd092116a95\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-9gr6g" podUID="76c02922-bd45-4555-87c6-4dd092116a95" Apr 30 03:39:33.396326 kubelet[2724]: E0430 03:39:33.396140 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5cbd4a0f-2d60-4cb4-95f0-be6674a91f19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c4fd78ccc-2pshn" podUID="5cbd4a0f-2d60-4cb4-95f0-be6674a91f19" Apr 30 03:39:33.870937 kubelet[2724]: I0430 03:39:33.870710 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:39:36.379706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2840468253.mount: Deactivated successfully. Apr 30 03:39:36.479179 containerd[1508]: time="2025-04-30T03:39:36.473615602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:36.482208 containerd[1508]: time="2025-04-30T03:39:36.474669393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" Apr 30 03:39:36.491171 containerd[1508]: time="2025-04-30T03:39:36.490513625Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:36.493730 containerd[1508]: time="2025-04-30T03:39:36.493673458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:36.497963 containerd[1508]: time="2025-04-30T03:39:36.497904657Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 4.282312726s" Apr 30 03:39:36.497963 containerd[1508]: time="2025-04-30T03:39:36.497948512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" Apr 30 03:39:36.614300 containerd[1508]: time="2025-04-30T03:39:36.614217692Z" level=info msg="CreateContainer within sandbox \"cfb8bd4c13f0038d392d02e3e5a142a984041c8bff8406f9f726ff3ac97bad78\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 03:39:36.862105 containerd[1508]: time="2025-04-30T03:39:36.862004620Z" level=info msg="CreateContainer within sandbox \"cfb8bd4c13f0038d392d02e3e5a142a984041c8bff8406f9f726ff3ac97bad78\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5965386147ed504c78fe191b242bc5380b8b03d163aaa8d16eba7da0270a6f93\"" Apr 30 03:39:36.877648 containerd[1508]: time="2025-04-30T03:39:36.876900663Z" level=info msg="StartContainer for \"5965386147ed504c78fe191b242bc5380b8b03d163aaa8d16eba7da0270a6f93\"" Apr 30 03:39:37.023983 systemd[1]: Started cri-containerd-5965386147ed504c78fe191b242bc5380b8b03d163aaa8d16eba7da0270a6f93.scope - libcontainer container 5965386147ed504c78fe191b242bc5380b8b03d163aaa8d16eba7da0270a6f93. Apr 30 03:39:37.064274 containerd[1508]: time="2025-04-30T03:39:37.063750764Z" level=info msg="StartContainer for \"5965386147ed504c78fe191b242bc5380b8b03d163aaa8d16eba7da0270a6f93\" returns successfully" Apr 30 03:39:37.165300 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 03:39:37.166548 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
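Every failed RunPodSandbox/StopPodSandbox call above shares one root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico/node writes only once it is running, and until the calico-node container started here that file did not exist. A simplified reproduction of the precondition check follows; the path and error text come from the log, while the logic is a stand-in for the plugin's, not its real code:

```go
// Sketch of the nodename precondition every failed (add)/(delete) call
// above trips over. calico/node writes this file after startup; until
// then the CNI plugin cannot resolve which Calico node it is running on.
package main

import (
	"fmt"
	"os"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"
	name, err := os.ReadFile(nodenameFile)
	if err != nil {
		if os.IsNotExist(err) {
			// Same failure mode as the sandbox errors in the log.
			fmt.Printf("stat %s: no such file or directory: "+
				"check that the calico/node container is running and has mounted /var/lib/calico/\n",
				nodenameFile)
			return
		}
		fmt.Println("read error:", err)
		return
	}
	fmt.Println("node name:", string(name))
}
```

With calico-node now running, the vxlan.calico interface comes up in the lines that follow and the queued sandboxes can finally be networked and torn down cleanly.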
Apr 30 03:39:38.359243 kubelet[2724]: I0430 03:39:38.359175 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:39:38.805867 kernel: bpftool[3966]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 03:39:39.094004 systemd-networkd[1406]: vxlan.calico: Link UP Apr 30 03:39:39.094014 systemd-networkd[1406]: vxlan.calico: Gained carrier Apr 30 03:39:40.238784 systemd-networkd[1406]: vxlan.calico: Gained IPv6LL Apr 30 03:39:47.069806 containerd[1508]: time="2025-04-30T03:39:47.069584089Z" level=info msg="StopPodSandbox for \"9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb\"" Apr 30 03:39:47.071794 containerd[1508]: time="2025-04-30T03:39:47.070294312Z" level=info msg="StopPodSandbox for \"0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f\"" Apr 30 03:39:47.076246 containerd[1508]: time="2025-04-30T03:39:47.075453500Z" level=info msg="StopPodSandbox for \"c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d\"" Apr 30 03:39:47.233893 kubelet[2724]: I0430 03:39:47.230999 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5p2ph" podStartSLOduration=11.800624948 podStartE2EDuration="24.213607522s" podCreationTimestamp="2025-04-30 03:39:23 +0000 UTC" firstStartedPulling="2025-04-30 03:39:24.101574426 +0000 UTC m=+16.205935485" lastFinishedPulling="2025-04-30 03:39:36.514557001 +0000 UTC m=+28.618918059" observedRunningTime="2025-04-30 03:39:37.421973146 +0000 UTC m=+29.526334204" watchObservedRunningTime="2025-04-30 03:39:47.213607522 +0000 UTC m=+39.317968571" Apr 30 03:39:47.424166 containerd[1508]: 2025-04-30 03:39:47.220 [INFO][4080] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" Apr 30 03:39:47.424166 containerd[1508]: 2025-04-30 03:39:47.220 [INFO][4080] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" iface="eth0" netns="/var/run/netns/cni-2e2c0b25-c304-e30a-2c9a-42fab2d57ec4" Apr 30 03:39:47.424166 containerd[1508]: 2025-04-30 03:39:47.221 [INFO][4080] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" iface="eth0" netns="/var/run/netns/cni-2e2c0b25-c304-e30a-2c9a-42fab2d57ec4" Apr 30 03:39:47.424166 containerd[1508]: 2025-04-30 03:39:47.221 [INFO][4080] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" iface="eth0" netns="/var/run/netns/cni-2e2c0b25-c304-e30a-2c9a-42fab2d57ec4" Apr 30 03:39:47.424166 containerd[1508]: 2025-04-30 03:39:47.221 [INFO][4080] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" Apr 30 03:39:47.424166 containerd[1508]: 2025-04-30 03:39:47.221 [INFO][4080] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" Apr 30 03:39:47.424166 containerd[1508]: 2025-04-30 03:39:47.405 [INFO][4111] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" HandleID="k8s-pod-network.0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" Workload="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--fgcbp-eth0" Apr 30 03:39:47.424166 containerd[1508]: 2025-04-30 03:39:47.406 [INFO][4111] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:47.424166 containerd[1508]: 2025-04-30 03:39:47.406 [INFO][4111] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:39:47.424166 containerd[1508]: 2025-04-30 03:39:47.416 [WARNING][4111] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" HandleID="k8s-pod-network.0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" Workload="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--fgcbp-eth0" Apr 30 03:39:47.424166 containerd[1508]: 2025-04-30 03:39:47.416 [INFO][4111] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" HandleID="k8s-pod-network.0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" Workload="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--fgcbp-eth0" Apr 30 03:39:47.424166 containerd[1508]: 2025-04-30 03:39:47.418 [INFO][4111] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:39:47.424166 containerd[1508]: 2025-04-30 03:39:47.420 [INFO][4080] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" Apr 30 03:39:47.429304 containerd[1508]: time="2025-04-30T03:39:47.429254381Z" level=info msg="TearDown network for sandbox \"0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f\" successfully" Apr 30 03:39:47.429553 containerd[1508]: time="2025-04-30T03:39:47.429480598Z" level=info msg="StopPodSandbox for \"0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f\" returns successfully" Apr 30 03:39:47.431305 systemd[1]: run-netns-cni\x2d2e2c0b25\x2dc304\x2de30a\x2d2c9a\x2d42fab2d57ec4.mount: Deactivated successfully. Apr 30 03:39:47.452424 containerd[1508]: 2025-04-30 03:39:47.217 [INFO][4079] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" Apr 30 03:39:47.452424 containerd[1508]: 2025-04-30 03:39:47.218 [INFO][4079] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" iface="eth0" netns="/var/run/netns/cni-8f57a017-ab27-47dc-3be4-c2e34536915c" Apr 30 03:39:47.452424 containerd[1508]: 2025-04-30 03:39:47.218 [INFO][4079] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" iface="eth0" netns="/var/run/netns/cni-8f57a017-ab27-47dc-3be4-c2e34536915c" Apr 30 03:39:47.452424 containerd[1508]: 2025-04-30 03:39:47.219 [INFO][4079] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" iface="eth0" netns="/var/run/netns/cni-8f57a017-ab27-47dc-3be4-c2e34536915c" Apr 30 03:39:47.452424 containerd[1508]: 2025-04-30 03:39:47.219 [INFO][4079] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" Apr 30 03:39:47.452424 containerd[1508]: 2025-04-30 03:39:47.219 [INFO][4079] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" Apr 30 03:39:47.452424 containerd[1508]: 2025-04-30 03:39:47.404 [INFO][4108] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" HandleID="k8s-pod-network.c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" Workload="ci--4081--3--3--9--916214001e-k8s-csi--node--driver--djr25-eth0" Apr 30 03:39:47.452424 containerd[1508]: 2025-04-30 03:39:47.406 [INFO][4108] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:47.452424 containerd[1508]: 2025-04-30 03:39:47.418 [INFO][4108] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:39:47.452424 containerd[1508]: 2025-04-30 03:39:47.432 [WARNING][4108] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" HandleID="k8s-pod-network.c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" Workload="ci--4081--3--3--9--916214001e-k8s-csi--node--driver--djr25-eth0" Apr 30 03:39:47.452424 containerd[1508]: 2025-04-30 03:39:47.432 [INFO][4108] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" HandleID="k8s-pod-network.c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" Workload="ci--4081--3--3--9--916214001e-k8s-csi--node--driver--djr25-eth0" Apr 30 03:39:47.452424 containerd[1508]: 2025-04-30 03:39:47.436 [INFO][4108] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:39:47.452424 containerd[1508]: 2025-04-30 03:39:47.443 [INFO][4079] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" Apr 30 03:39:47.457237 containerd[1508]: time="2025-04-30T03:39:47.456666926Z" level=info msg="TearDown network for sandbox \"c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d\" successfully" Apr 30 03:39:47.457237 containerd[1508]: time="2025-04-30T03:39:47.456690611Z" level=info msg="StopPodSandbox for \"c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d\" returns successfully" Apr 30 03:39:47.457237 containerd[1508]: time="2025-04-30T03:39:47.456928411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7665d48578-fgcbp,Uid:da5d34c4-8951-46b2-a56c-5044ac1ce046,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:39:47.457056 systemd[1]: run-netns-cni\x2d8f57a017\x2dab27\x2d47dc\x2d3be4\x2dc2e34536915c.mount: Deactivated successfully. Apr 30 03:39:47.459110 containerd[1508]: time="2025-04-30T03:39:47.458726115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-djr25,Uid:740e49cc-cf63-4a6e-96f3-c534b1ee3390,Namespace:calico-system,Attempt:1,}" Apr 30 03:39:47.461478 containerd[1508]: 2025-04-30 03:39:47.215 [INFO][4095] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" Apr 30 03:39:47.461478 containerd[1508]: 2025-04-30 03:39:47.216 [INFO][4095] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" iface="eth0" netns="/var/run/netns/cni-9de4e15e-196d-43a3-8a54-5126e3dc66b7" Apr 30 03:39:47.461478 containerd[1508]: 2025-04-30 03:39:47.217 [INFO][4095] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" iface="eth0" netns="/var/run/netns/cni-9de4e15e-196d-43a3-8a54-5126e3dc66b7" Apr 30 03:39:47.461478 containerd[1508]: 2025-04-30 03:39:47.219 [INFO][4095] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" iface="eth0" netns="/var/run/netns/cni-9de4e15e-196d-43a3-8a54-5126e3dc66b7" Apr 30 03:39:47.461478 containerd[1508]: 2025-04-30 03:39:47.219 [INFO][4095] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" Apr 30 03:39:47.461478 containerd[1508]: 2025-04-30 03:39:47.219 [INFO][4095] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" Apr 30 03:39:47.461478 containerd[1508]: 2025-04-30 03:39:47.405 [INFO][4109] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" HandleID="k8s-pod-network.9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" Workload="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--9gr6g-eth0" Apr 30 03:39:47.461478 containerd[1508]: 2025-04-30 03:39:47.407 [INFO][4109] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:47.461478 containerd[1508]: 2025-04-30 03:39:47.436 [INFO][4109] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:39:47.461478 containerd[1508]: 2025-04-30 03:39:47.446 [WARNING][4109] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" HandleID="k8s-pod-network.9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" Workload="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--9gr6g-eth0" Apr 30 03:39:47.461478 containerd[1508]: 2025-04-30 03:39:47.447 [INFO][4109] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" HandleID="k8s-pod-network.9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" Workload="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--9gr6g-eth0" Apr 30 03:39:47.461478 containerd[1508]: 2025-04-30 03:39:47.450 [INFO][4109] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:39:47.461478 containerd[1508]: 2025-04-30 03:39:47.456 [INFO][4095] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" Apr 30 03:39:47.462402 containerd[1508]: time="2025-04-30T03:39:47.461912111Z" level=info msg="TearDown network for sandbox \"9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb\" successfully" Apr 30 03:39:47.462402 containerd[1508]: time="2025-04-30T03:39:47.461931127Z" level=info msg="StopPodSandbox for \"9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb\" returns successfully" Apr 30 03:39:47.464910 containerd[1508]: time="2025-04-30T03:39:47.464582080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9gr6g,Uid:76c02922-bd45-4555-87c6-4dd092116a95,Namespace:kube-system,Attempt:1,}" Apr 30 03:39:47.465066 systemd[1]: run-netns-cni\x2d9de4e15e\x2d196d\x2d43a3\x2d8a54\x2d5126e3dc66b7.mount: Deactivated successfully. Apr 30 03:39:47.649273 systemd-networkd[1406]: cali72fda76899f: Link UP Apr 30 03:39:47.649441 systemd-networkd[1406]: cali72fda76899f: Gained carrier Apr 30 03:39:47.664681 containerd[1508]: 2025-04-30 03:39:47.557 [INFO][4137] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--9--916214001e-k8s-csi--node--driver--djr25-eth0 csi-node-driver- calico-system 740e49cc-cf63-4a6e-96f3-c534b1ee3390 734 0 2025-04-30 03:39:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-3-9-916214001e csi-node-driver-djr25 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali72fda76899f [] []}} ContainerID="0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0" Namespace="calico-system" Pod="csi-node-driver-djr25" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-csi--node--driver--djr25-" Apr 30 03:39:47.664681 containerd[1508]: 2025-04-30 03:39:47.557 [INFO][4137] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0" Namespace="calico-system" Pod="csi-node-driver-djr25" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-csi--node--driver--djr25-eth0" Apr 30 03:39:47.664681 containerd[1508]: 2025-04-30 03:39:47.595 [INFO][4169] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0" 
HandleID="k8s-pod-network.0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0" Workload="ci--4081--3--3--9--916214001e-k8s-csi--node--driver--djr25-eth0" Apr 30 03:39:47.664681 containerd[1508]: 2025-04-30 03:39:47.607 [INFO][4169] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0" HandleID="k8s-pod-network.0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0" Workload="ci--4081--3--3--9--916214001e-k8s-csi--node--driver--djr25-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051cd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-9-916214001e", "pod":"csi-node-driver-djr25", "timestamp":"2025-04-30 03:39:47.595919301 +0000 UTC"}, Hostname:"ci-4081-3-3-9-916214001e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:39:47.664681 containerd[1508]: 2025-04-30 03:39:47.607 [INFO][4169] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:47.664681 containerd[1508]: 2025-04-30 03:39:47.608 [INFO][4169] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:39:47.664681 containerd[1508]: 2025-04-30 03:39:47.608 [INFO][4169] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-9-916214001e' Apr 30 03:39:47.664681 containerd[1508]: 2025-04-30 03:39:47.615 [INFO][4169] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0" host="ci-4081-3-3-9-916214001e" Apr 30 03:39:47.664681 containerd[1508]: 2025-04-30 03:39:47.623 [INFO][4169] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-9-916214001e" Apr 30 03:39:47.664681 containerd[1508]: 2025-04-30 03:39:47.630 [INFO][4169] ipam/ipam.go 489: Trying affinity for 192.168.19.64/26 host="ci-4081-3-3-9-916214001e" Apr 30 03:39:47.664681 containerd[1508]: 2025-04-30 03:39:47.632 [INFO][4169] ipam/ipam.go 155: Attempting to load block cidr=192.168.19.64/26 host="ci-4081-3-3-9-916214001e" Apr 30 03:39:47.664681 containerd[1508]: 2025-04-30 03:39:47.634 [INFO][4169] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.64/26 host="ci-4081-3-3-9-916214001e" Apr 30 03:39:47.664681 containerd[1508]: 2025-04-30 03:39:47.634 [INFO][4169] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.64/26 handle="k8s-pod-network.0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0" host="ci-4081-3-3-9-916214001e" Apr 30 03:39:47.664681 containerd[1508]: 2025-04-30 03:39:47.635 [INFO][4169] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0 Apr 30 03:39:47.664681 containerd[1508]: 2025-04-30 03:39:47.639 [INFO][4169] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.19.64/26 handle="k8s-pod-network.0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0" host="ci-4081-3-3-9-916214001e" Apr 30 03:39:47.664681 containerd[1508]: 2025-04-30 03:39:47.643 [INFO][4169] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.19.65/26] block=192.168.19.64/26 handle="k8s-pod-network.0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0" host="ci-4081-3-3-9-916214001e" Apr 30 03:39:47.664681 containerd[1508]: 2025-04-30 03:39:47.643 
[INFO][4169] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.65/26] handle="k8s-pod-network.0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0" host="ci-4081-3-3-9-916214001e" Apr 30 03:39:47.664681 containerd[1508]: 2025-04-30 03:39:47.644 [INFO][4169] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:39:47.664681 containerd[1508]: 2025-04-30 03:39:47.644 [INFO][4169] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.65/26] IPv6=[] ContainerID="0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0" HandleID="k8s-pod-network.0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0" Workload="ci--4081--3--3--9--916214001e-k8s-csi--node--driver--djr25-eth0" Apr 30 03:39:47.667418 containerd[1508]: 2025-04-30 03:39:47.646 [INFO][4137] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0" Namespace="calico-system" Pod="csi-node-driver-djr25" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-csi--node--driver--djr25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--916214001e-k8s-csi--node--driver--djr25-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"740e49cc-cf63-4a6e-96f3-c534b1ee3390", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 39, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-916214001e", ContainerID:"", Pod:"csi-node-driver-djr25", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali72fda76899f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:47.667418 containerd[1508]: 2025-04-30 03:39:47.646 [INFO][4137] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.19.65/32] ContainerID="0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0" Namespace="calico-system" Pod="csi-node-driver-djr25" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-csi--node--driver--djr25-eth0" Apr 30 03:39:47.667418 containerd[1508]: 2025-04-30 03:39:47.646 [INFO][4137] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali72fda76899f ContainerID="0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0" Namespace="calico-system" Pod="csi-node-driver-djr25" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-csi--node--driver--djr25-eth0" Apr 30 03:39:47.667418 containerd[1508]: 2025-04-30 03:39:47.650 [INFO][4137] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0" Namespace="calico-system" 
Pod="csi-node-driver-djr25" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-csi--node--driver--djr25-eth0" Apr 30 03:39:47.667418 containerd[1508]: 2025-04-30 03:39:47.650 [INFO][4137] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0" Namespace="calico-system" Pod="csi-node-driver-djr25" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-csi--node--driver--djr25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--916214001e-k8s-csi--node--driver--djr25-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"740e49cc-cf63-4a6e-96f3-c534b1ee3390", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 39, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-916214001e", ContainerID:"0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0", Pod:"csi-node-driver-djr25", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali72fda76899f", MAC:"16:e4:bb:62:0b:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:47.667418 containerd[1508]: 2025-04-30 03:39:47.661 [INFO][4137] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0" Namespace="calico-system" Pod="csi-node-driver-djr25" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-csi--node--driver--djr25-eth0" Apr 30 03:39:47.735666 containerd[1508]: time="2025-04-30T03:39:47.734177836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:39:47.735666 containerd[1508]: time="2025-04-30T03:39:47.735057876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:39:47.735666 containerd[1508]: time="2025-04-30T03:39:47.735578201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:47.737308 containerd[1508]: time="2025-04-30T03:39:47.736970491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:47.760006 systemd-networkd[1406]: cali5f3b6903513: Link UP Apr 30 03:39:47.761375 systemd-networkd[1406]: cali5f3b6903513: Gained carrier Apr 30 03:39:47.777635 containerd[1508]: 2025-04-30 03:39:47.544 [INFO][4147] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--9gr6g-eth0 coredns-6f6b679f8f- kube-system 76c02922-bd45-4555-87c6-4dd092116a95 733 0 2025-04-30 03:39:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-9-916214001e coredns-6f6b679f8f-9gr6g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5f3b6903513 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="011dc79c5b8eae9d1336571bab261d11b995ef7d8331eb0c9ef9ed7e6f3f1103" Namespace="kube-system" Pod="coredns-6f6b679f8f-9gr6g" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--9gr6g-" Apr 30 03:39:47.777635 containerd[1508]: 2025-04-30 03:39:47.545 [INFO][4147] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="011dc79c5b8eae9d1336571bab261d11b995ef7d8331eb0c9ef9ed7e6f3f1103" Namespace="kube-system" Pod="coredns-6f6b679f8f-9gr6g" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--9gr6g-eth0" Apr 30 03:39:47.777635 containerd[1508]: 2025-04-30 03:39:47.610 [INFO][4164] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="011dc79c5b8eae9d1336571bab261d11b995ef7d8331eb0c9ef9ed7e6f3f1103" HandleID="k8s-pod-network.011dc79c5b8eae9d1336571bab261d11b995ef7d8331eb0c9ef9ed7e6f3f1103" Workload="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--9gr6g-eth0" Apr 30 03:39:47.777635 containerd[1508]: 2025-04-30 03:39:47.623 [INFO][4164] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="011dc79c5b8eae9d1336571bab261d11b995ef7d8331eb0c9ef9ed7e6f3f1103" HandleID="k8s-pod-network.011dc79c5b8eae9d1336571bab261d11b995ef7d8331eb0c9ef9ed7e6f3f1103" Workload="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--9gr6g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051c80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-9-916214001e", "pod":"coredns-6f6b679f8f-9gr6g", "timestamp":"2025-04-30 03:39:47.610962958 +0000 UTC"}, Hostname:"ci-4081-3-3-9-916214001e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:39:47.777635 containerd[1508]: 2025-04-30 03:39:47.623 [INFO][4164] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:47.777635 containerd[1508]: 2025-04-30 03:39:47.644 [INFO][4164] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:39:47.777635 containerd[1508]: 2025-04-30 03:39:47.644 [INFO][4164] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-9-916214001e' Apr 30 03:39:47.777635 containerd[1508]: 2025-04-30 03:39:47.716 [INFO][4164] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.011dc79c5b8eae9d1336571bab261d11b995ef7d8331eb0c9ef9ed7e6f3f1103" host="ci-4081-3-3-9-916214001e" Apr 30 03:39:47.777635 containerd[1508]: 2025-04-30 03:39:47.727 [INFO][4164] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-9-916214001e" Apr 30 03:39:47.777635 containerd[1508]: 2025-04-30 03:39:47.734 [INFO][4164] ipam/ipam.go 489: Trying affinity for 192.168.19.64/26 host="ci-4081-3-3-9-916214001e" Apr 30 03:39:47.777635 containerd[1508]: 2025-04-30 03:39:47.736 [INFO][4164] ipam/ipam.go 155: Attempting to load block cidr=192.168.19.64/26 host="ci-4081-3-3-9-916214001e" Apr 30 03:39:47.777635 containerd[1508]: 2025-04-30 03:39:47.739 [INFO][4164] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.64/26 host="ci-4081-3-3-9-916214001e" Apr 30 03:39:47.777635 containerd[1508]: 2025-04-30 03:39:47.739 [INFO][4164] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.64/26 handle="k8s-pod-network.011dc79c5b8eae9d1336571bab261d11b995ef7d8331eb0c9ef9ed7e6f3f1103" host="ci-4081-3-3-9-916214001e" Apr 30 03:39:47.777635 containerd[1508]: 2025-04-30 03:39:47.741 [INFO][4164] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.011dc79c5b8eae9d1336571bab261d11b995ef7d8331eb0c9ef9ed7e6f3f1103 Apr 30 03:39:47.777635 containerd[1508]: 2025-04-30 03:39:47.747 [INFO][4164] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.19.64/26 handle="k8s-pod-network.011dc79c5b8eae9d1336571bab261d11b995ef7d8331eb0c9ef9ed7e6f3f1103" host="ci-4081-3-3-9-916214001e" Apr 30 03:39:47.777635 containerd[1508]: 2025-04-30 03:39:47.753 [INFO][4164] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.19.66/26] block=192.168.19.64/26 handle="k8s-pod-network.011dc79c5b8eae9d1336571bab261d11b995ef7d8331eb0c9ef9ed7e6f3f1103" host="ci-4081-3-3-9-916214001e" Apr 30 03:39:47.777635 containerd[1508]: 2025-04-30 03:39:47.753 [INFO][4164] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.66/26] handle="k8s-pod-network.011dc79c5b8eae9d1336571bab261d11b995ef7d8331eb0c9ef9ed7e6f3f1103" host="ci-4081-3-3-9-916214001e" Apr 30 03:39:47.777635 containerd[1508]: 2025-04-30 03:39:47.753 [INFO][4164] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:39:47.777635 containerd[1508]: 2025-04-30 03:39:47.753 [INFO][4164] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.66/26] IPv6=[] ContainerID="011dc79c5b8eae9d1336571bab261d11b995ef7d8331eb0c9ef9ed7e6f3f1103" HandleID="k8s-pod-network.011dc79c5b8eae9d1336571bab261d11b995ef7d8331eb0c9ef9ed7e6f3f1103" Workload="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--9gr6g-eth0" Apr 30 03:39:47.778436 containerd[1508]: 2025-04-30 03:39:47.756 [INFO][4147] cni-plugin/k8s.go 386: Populated endpoint ContainerID="011dc79c5b8eae9d1336571bab261d11b995ef7d8331eb0c9ef9ed7e6f3f1103" Namespace="kube-system" Pod="coredns-6f6b679f8f-9gr6g" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--9gr6g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--9gr6g-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"76c02922-bd45-4555-87c6-4dd092116a95", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 39, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-916214001e", ContainerID:"", Pod:"coredns-6f6b679f8f-9gr6g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f3b6903513", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:47.778436 containerd[1508]: 2025-04-30 03:39:47.756 [INFO][4147] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.19.66/32] ContainerID="011dc79c5b8eae9d1336571bab261d11b995ef7d8331eb0c9ef9ed7e6f3f1103" Namespace="kube-system" Pod="coredns-6f6b679f8f-9gr6g" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--9gr6g-eth0" Apr 30 03:39:47.778436 containerd[1508]: 2025-04-30 03:39:47.756 [INFO][4147] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f3b6903513 ContainerID="011dc79c5b8eae9d1336571bab261d11b995ef7d8331eb0c9ef9ed7e6f3f1103" Namespace="kube-system" Pod="coredns-6f6b679f8f-9gr6g" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--9gr6g-eth0" Apr 30 03:39:47.778436 containerd[1508]: 2025-04-30 03:39:47.762 [INFO][4147] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="011dc79c5b8eae9d1336571bab261d11b995ef7d8331eb0c9ef9ed7e6f3f1103" Namespace="kube-system" Pod="coredns-6f6b679f8f-9gr6g" 
WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--9gr6g-eth0" Apr 30 03:39:47.778436 containerd[1508]: 2025-04-30 03:39:47.762 [INFO][4147] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="011dc79c5b8eae9d1336571bab261d11b995ef7d8331eb0c9ef9ed7e6f3f1103" Namespace="kube-system" Pod="coredns-6f6b679f8f-9gr6g" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--9gr6g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--9gr6g-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"76c02922-bd45-4555-87c6-4dd092116a95", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 39, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-916214001e", ContainerID:"011dc79c5b8eae9d1336571bab261d11b995ef7d8331eb0c9ef9ed7e6f3f1103", Pod:"coredns-6f6b679f8f-9gr6g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f3b6903513", MAC:"da:8b:d2:96:1f:5f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:47.778436 containerd[1508]: 2025-04-30 03:39:47.773 [INFO][4147] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="011dc79c5b8eae9d1336571bab261d11b995ef7d8331eb0c9ef9ed7e6f3f1103" Namespace="kube-system" Pod="coredns-6f6b679f8f-9gr6g" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--9gr6g-eth0" Apr 30 03:39:47.801962 systemd[1]: Started cri-containerd-0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0.scope - libcontainer container 0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0. Apr 30 03:39:47.807891 containerd[1508]: time="2025-04-30T03:39:47.807233028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:39:47.808550 containerd[1508]: time="2025-04-30T03:39:47.808247819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:39:47.808550 containerd[1508]: time="2025-04-30T03:39:47.808368101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:47.809560 containerd[1508]: time="2025-04-30T03:39:47.808742064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:47.832989 systemd[1]: Started cri-containerd-011dc79c5b8eae9d1336571bab261d11b995ef7d8331eb0c9ef9ed7e6f3f1103.scope - libcontainer container 011dc79c5b8eae9d1336571bab261d11b995ef7d8331eb0c9ef9ed7e6f3f1103. Apr 30 03:39:47.878609 containerd[1508]: time="2025-04-30T03:39:47.878559020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-djr25,Uid:740e49cc-cf63-4a6e-96f3-c534b1ee3390,Namespace:calico-system,Attempt:1,} returns sandbox id \"0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0\"" Apr 30 03:39:47.888280 systemd-networkd[1406]: cali9f72881c5f9: Link UP Apr 30 03:39:47.890684 systemd-networkd[1406]: cali9f72881c5f9: Gained carrier Apr 30 03:39:47.903970 containerd[1508]: time="2025-04-30T03:39:47.903770572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 03:39:47.908864 containerd[1508]: time="2025-04-30T03:39:47.908719043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9gr6g,Uid:76c02922-bd45-4555-87c6-4dd092116a95,Namespace:kube-system,Attempt:1,} returns sandbox id \"011dc79c5b8eae9d1336571bab261d11b995ef7d8331eb0c9ef9ed7e6f3f1103\"" Apr 30 03:39:47.916592 containerd[1508]: time="2025-04-30T03:39:47.916540174Z" level=info msg="CreateContainer within sandbox \"011dc79c5b8eae9d1336571bab261d11b995ef7d8331eb0c9ef9ed7e6f3f1103\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:39:47.920085 containerd[1508]: 2025-04-30 03:39:47.559 [INFO][4130] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--fgcbp-eth0 calico-apiserver-7665d48578- calico-apiserver da5d34c4-8951-46b2-a56c-5044ac1ce046 735 0 2025-04-30 03:39:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7665d48578 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-9-916214001e calico-apiserver-7665d48578-fgcbp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9f72881c5f9 [] []}} ContainerID="b3437715e0b3fd3750f058f329591a3c46de53509ad37d358e4f5186b7057594" Namespace="calico-apiserver" Pod="calico-apiserver-7665d48578-fgcbp" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--fgcbp-" Apr 30 03:39:47.920085 containerd[1508]: 2025-04-30 03:39:47.560 [INFO][4130] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b3437715e0b3fd3750f058f329591a3c46de53509ad37d358e4f5186b7057594" Namespace="calico-apiserver" Pod="calico-apiserver-7665d48578-fgcbp" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--fgcbp-eth0" Apr 30 03:39:47.920085 containerd[1508]: 2025-04-30 03:39:47.623 [INFO][4174] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3437715e0b3fd3750f058f329591a3c46de53509ad37d358e4f5186b7057594" HandleID="k8s-pod-network.b3437715e0b3fd3750f058f329591a3c46de53509ad37d358e4f5186b7057594" Workload="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--fgcbp-eth0" Apr 30 03:39:47.920085 
containerd[1508]: 2025-04-30 03:39:47.631 [INFO][4174] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b3437715e0b3fd3750f058f329591a3c46de53509ad37d358e4f5186b7057594" HandleID="k8s-pod-network.b3437715e0b3fd3750f058f329591a3c46de53509ad37d358e4f5186b7057594" Workload="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--fgcbp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ed9a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-9-916214001e", "pod":"calico-apiserver-7665d48578-fgcbp", "timestamp":"2025-04-30 03:39:47.623150335 +0000 UTC"}, Hostname:"ci-4081-3-3-9-916214001e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:39:47.920085 containerd[1508]: 2025-04-30 03:39:47.631 [INFO][4174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:47.920085 containerd[1508]: 2025-04-30 03:39:47.753 [INFO][4174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:39:47.920085 containerd[1508]: 2025-04-30 03:39:47.755 [INFO][4174] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-9-916214001e' Apr 30 03:39:47.920085 containerd[1508]: 2025-04-30 03:39:47.819 [INFO][4174] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b3437715e0b3fd3750f058f329591a3c46de53509ad37d358e4f5186b7057594" host="ci-4081-3-3-9-916214001e" Apr 30 03:39:47.920085 containerd[1508]: 2025-04-30 03:39:47.829 [INFO][4174] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-9-916214001e" Apr 30 03:39:47.920085 containerd[1508]: 2025-04-30 03:39:47.841 [INFO][4174] ipam/ipam.go 489: Trying affinity for 192.168.19.64/26 host="ci-4081-3-3-9-916214001e" Apr 30 03:39:47.920085 containerd[1508]: 2025-04-30 03:39:47.850 [INFO][4174] ipam/ipam.go 155: Attempting to load block cidr=192.168.19.64/26 host="ci-4081-3-3-9-916214001e" Apr 30 03:39:47.920085 containerd[1508]: 2025-04-30 03:39:47.853 [INFO][4174] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.64/26 host="ci-4081-3-3-9-916214001e" Apr 30 03:39:47.920085 containerd[1508]: 2025-04-30 03:39:47.853 [INFO][4174] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.64/26 handle="k8s-pod-network.b3437715e0b3fd3750f058f329591a3c46de53509ad37d358e4f5186b7057594" host="ci-4081-3-3-9-916214001e" Apr 30 03:39:47.920085 containerd[1508]: 2025-04-30 03:39:47.857 [INFO][4174] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b3437715e0b3fd3750f058f329591a3c46de53509ad37d358e4f5186b7057594 Apr 30 03:39:47.920085 containerd[1508]: 2025-04-30 03:39:47.862 [INFO][4174] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.19.64/26 handle="k8s-pod-network.b3437715e0b3fd3750f058f329591a3c46de53509ad37d358e4f5186b7057594" host="ci-4081-3-3-9-916214001e" Apr 30 03:39:47.920085 containerd[1508]: 2025-04-30 03:39:47.873 [INFO][4174] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.19.67/26] block=192.168.19.64/26 handle="k8s-pod-network.b3437715e0b3fd3750f058f329591a3c46de53509ad37d358e4f5186b7057594" host="ci-4081-3-3-9-916214001e" Apr 30 03:39:47.920085 containerd[1508]: 2025-04-30 03:39:47.874 [INFO][4174] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.67/26] handle="k8s-pod-network.b3437715e0b3fd3750f058f329591a3c46de53509ad37d358e4f5186b7057594" 
host="ci-4081-3-3-9-916214001e" Apr 30 03:39:47.920085 containerd[1508]: 2025-04-30 03:39:47.874 [INFO][4174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:39:47.920085 containerd[1508]: 2025-04-30 03:39:47.874 [INFO][4174] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.67/26] IPv6=[] ContainerID="b3437715e0b3fd3750f058f329591a3c46de53509ad37d358e4f5186b7057594" HandleID="k8s-pod-network.b3437715e0b3fd3750f058f329591a3c46de53509ad37d358e4f5186b7057594" Workload="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--fgcbp-eth0" Apr 30 03:39:47.920601 containerd[1508]: 2025-04-30 03:39:47.886 [INFO][4130] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b3437715e0b3fd3750f058f329591a3c46de53509ad37d358e4f5186b7057594" Namespace="calico-apiserver" Pod="calico-apiserver-7665d48578-fgcbp" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--fgcbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--fgcbp-eth0", GenerateName:"calico-apiserver-7665d48578-", Namespace:"calico-apiserver", SelfLink:"", UID:"da5d34c4-8951-46b2-a56c-5044ac1ce046", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 39, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7665d48578", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-916214001e", ContainerID:"", Pod:"calico-apiserver-7665d48578-fgcbp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9f72881c5f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:47.920601 containerd[1508]: 2025-04-30 03:39:47.886 [INFO][4130] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.19.67/32] ContainerID="b3437715e0b3fd3750f058f329591a3c46de53509ad37d358e4f5186b7057594" Namespace="calico-apiserver" Pod="calico-apiserver-7665d48578-fgcbp" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--fgcbp-eth0" Apr 30 03:39:47.920601 containerd[1508]: 2025-04-30 03:39:47.886 [INFO][4130] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f72881c5f9 ContainerID="b3437715e0b3fd3750f058f329591a3c46de53509ad37d358e4f5186b7057594" Namespace="calico-apiserver" Pod="calico-apiserver-7665d48578-fgcbp" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--fgcbp-eth0" Apr 30 03:39:47.920601 containerd[1508]: 2025-04-30 03:39:47.888 [INFO][4130] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3437715e0b3fd3750f058f329591a3c46de53509ad37d358e4f5186b7057594" Namespace="calico-apiserver" Pod="calico-apiserver-7665d48578-fgcbp" 
WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--fgcbp-eth0" Apr 30 03:39:47.920601 containerd[1508]: 2025-04-30 03:39:47.888 [INFO][4130] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b3437715e0b3fd3750f058f329591a3c46de53509ad37d358e4f5186b7057594" Namespace="calico-apiserver" Pod="calico-apiserver-7665d48578-fgcbp" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--fgcbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--fgcbp-eth0", GenerateName:"calico-apiserver-7665d48578-", Namespace:"calico-apiserver", SelfLink:"", UID:"da5d34c4-8951-46b2-a56c-5044ac1ce046", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 39, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7665d48578", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-916214001e", ContainerID:"b3437715e0b3fd3750f058f329591a3c46de53509ad37d358e4f5186b7057594", Pod:"calico-apiserver-7665d48578-fgcbp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9f72881c5f9", MAC:"f2:09:9e:d9:11:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:47.920601 containerd[1508]: 2025-04-30 03:39:47.914 [INFO][4130] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b3437715e0b3fd3750f058f329591a3c46de53509ad37d358e4f5186b7057594" Namespace="calico-apiserver" Pod="calico-apiserver-7665d48578-fgcbp" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--fgcbp-eth0" Apr 30 03:39:47.938082 containerd[1508]: time="2025-04-30T03:39:47.937824439Z" level=info msg="CreateContainer within sandbox \"011dc79c5b8eae9d1336571bab261d11b995ef7d8331eb0c9ef9ed7e6f3f1103\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"edbaeec09399eda9ff09dca5e2d6a3835f4019ad8b0b897a9e5c883cec93a7b8\"" Apr 30 03:39:47.939371 containerd[1508]: time="2025-04-30T03:39:47.939243992Z" level=info msg="StartContainer for \"edbaeec09399eda9ff09dca5e2d6a3835f4019ad8b0b897a9e5c883cec93a7b8\"" Apr 30 03:39:47.942679 containerd[1508]: time="2025-04-30T03:39:47.942536152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:39:47.942679 containerd[1508]: time="2025-04-30T03:39:47.942606037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:39:47.942679 containerd[1508]: time="2025-04-30T03:39:47.942620936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:47.942870 containerd[1508]: time="2025-04-30T03:39:47.942718716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:47.962972 systemd[1]: Started cri-containerd-b3437715e0b3fd3750f058f329591a3c46de53509ad37d358e4f5186b7057594.scope - libcontainer container b3437715e0b3fd3750f058f329591a3c46de53509ad37d358e4f5186b7057594. Apr 30 03:39:47.968480 systemd[1]: Started cri-containerd-edbaeec09399eda9ff09dca5e2d6a3835f4019ad8b0b897a9e5c883cec93a7b8.scope - libcontainer container edbaeec09399eda9ff09dca5e2d6a3835f4019ad8b0b897a9e5c883cec93a7b8. Apr 30 03:39:47.999916 containerd[1508]: time="2025-04-30T03:39:47.999633051Z" level=info msg="StartContainer for \"edbaeec09399eda9ff09dca5e2d6a3835f4019ad8b0b897a9e5c883cec93a7b8\" returns successfully" Apr 30 03:39:48.020942 containerd[1508]: time="2025-04-30T03:39:48.020885995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7665d48578-fgcbp,Uid:da5d34c4-8951-46b2-a56c-5044ac1ce046,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b3437715e0b3fd3750f058f329591a3c46de53509ad37d358e4f5186b7057594\"" Apr 30 03:39:48.080079 containerd[1508]: time="2025-04-30T03:39:48.079739257Z" level=info msg="StopPodSandbox for \"b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440\"" Apr 30 03:39:48.080489 containerd[1508]: time="2025-04-30T03:39:48.080158627Z" level=info msg="StopPodSandbox for \"c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0\"" Apr 30 03:39:48.171164 containerd[1508]: 2025-04-30 03:39:48.131 [INFO][4406] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" Apr 30 03:39:48.171164 containerd[1508]: 2025-04-30 03:39:48.131 [INFO][4406] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" iface="eth0" netns="/var/run/netns/cni-74c37c1e-b9a0-596d-88ca-d02ab09d35f6" Apr 30 03:39:48.171164 containerd[1508]: 2025-04-30 03:39:48.132 [INFO][4406] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" iface="eth0" netns="/var/run/netns/cni-74c37c1e-b9a0-596d-88ca-d02ab09d35f6" Apr 30 03:39:48.171164 containerd[1508]: 2025-04-30 03:39:48.132 [INFO][4406] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" iface="eth0" netns="/var/run/netns/cni-74c37c1e-b9a0-596d-88ca-d02ab09d35f6" Apr 30 03:39:48.171164 containerd[1508]: 2025-04-30 03:39:48.132 [INFO][4406] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" Apr 30 03:39:48.171164 containerd[1508]: 2025-04-30 03:39:48.132 [INFO][4406] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" Apr 30 03:39:48.171164 containerd[1508]: 2025-04-30 03:39:48.154 [INFO][4418] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" HandleID="k8s-pod-network.b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" Workload="ci--4081--3--3--9--916214001e-k8s-calico--kube--controllers--5c4fd78ccc--2pshn-eth0" Apr 30 03:39:48.171164 containerd[1508]: 2025-04-30 03:39:48.154 [INFO][4418] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:48.171164 containerd[1508]: 2025-04-30 03:39:48.154 [INFO][4418] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:39:48.171164 containerd[1508]: 2025-04-30 03:39:48.163 [WARNING][4418] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" HandleID="k8s-pod-network.b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" Workload="ci--4081--3--3--9--916214001e-k8s-calico--kube--controllers--5c4fd78ccc--2pshn-eth0" Apr 30 03:39:48.171164 containerd[1508]: 2025-04-30 03:39:48.163 [INFO][4418] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" HandleID="k8s-pod-network.b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" Workload="ci--4081--3--3--9--916214001e-k8s-calico--kube--controllers--5c4fd78ccc--2pshn-eth0" Apr 30 03:39:48.171164 containerd[1508]: 2025-04-30 03:39:48.166 [INFO][4418] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:39:48.171164 containerd[1508]: 2025-04-30 03:39:48.167 [INFO][4406] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" Apr 30 03:39:48.171164 containerd[1508]: time="2025-04-30T03:39:48.171074339Z" level=info msg="TearDown network for sandbox \"b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440\" successfully" Apr 30 03:39:48.171164 containerd[1508]: time="2025-04-30T03:39:48.171100028Z" level=info msg="StopPodSandbox for \"b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440\" returns successfully" Apr 30 03:39:48.172318 containerd[1508]: time="2025-04-30T03:39:48.172057720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c4fd78ccc-2pshn,Uid:5cbd4a0f-2d60-4cb4-95f0-be6674a91f19,Namespace:calico-system,Attempt:1,}" Apr 30 03:39:48.219699 containerd[1508]: 2025-04-30 03:39:48.162 [INFO][4407] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" Apr 30 03:39:48.219699 containerd[1508]: 2025-04-30 03:39:48.162 [INFO][4407] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" iface="eth0" netns="/var/run/netns/cni-be2375a3-bfaa-f4e5-4597-57ee68f7dfdf" Apr 30 03:39:48.219699 containerd[1508]: 2025-04-30 03:39:48.162 [INFO][4407] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" iface="eth0" netns="/var/run/netns/cni-be2375a3-bfaa-f4e5-4597-57ee68f7dfdf" Apr 30 03:39:48.219699 containerd[1508]: 2025-04-30 03:39:48.163 [INFO][4407] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" iface="eth0" netns="/var/run/netns/cni-be2375a3-bfaa-f4e5-4597-57ee68f7dfdf" Apr 30 03:39:48.219699 containerd[1508]: 2025-04-30 03:39:48.163 [INFO][4407] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" Apr 30 03:39:48.219699 containerd[1508]: 2025-04-30 03:39:48.163 [INFO][4407] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" Apr 30 03:39:48.219699 containerd[1508]: 2025-04-30 03:39:48.190 [INFO][4427] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" HandleID="k8s-pod-network.c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" Workload="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--pwlb4-eth0" Apr 30 03:39:48.219699 containerd[1508]: 2025-04-30 03:39:48.190 [INFO][4427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:48.219699 containerd[1508]: 2025-04-30 03:39:48.190 [INFO][4427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:39:48.219699 containerd[1508]: 2025-04-30 03:39:48.202 [WARNING][4427] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" HandleID="k8s-pod-network.c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" Workload="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--pwlb4-eth0" Apr 30 03:39:48.219699 containerd[1508]: 2025-04-30 03:39:48.202 [INFO][4427] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" HandleID="k8s-pod-network.c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" Workload="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--pwlb4-eth0" Apr 30 03:39:48.219699 containerd[1508]: 2025-04-30 03:39:48.210 [INFO][4427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:39:48.219699 containerd[1508]: 2025-04-30 03:39:48.215 [INFO][4407] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" Apr 30 03:39:48.219699 containerd[1508]: time="2025-04-30T03:39:48.219695340Z" level=info msg="TearDown network for sandbox \"c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0\" successfully" Apr 30 03:39:48.220555 containerd[1508]: time="2025-04-30T03:39:48.219724436Z" level=info msg="StopPodSandbox for \"c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0\" returns successfully" Apr 30 03:39:48.221079 containerd[1508]: time="2025-04-30T03:39:48.221051312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pwlb4,Uid:007e272b-6dc6-4988-a54b-c1109f76b258,Namespace:kube-system,Attempt:1,}" Apr 30 03:39:48.321125 systemd-networkd[1406]: cali1f6562eba1e: Link UP Apr 30 03:39:48.322722 systemd-networkd[1406]: cali1f6562eba1e: Gained carrier Apr 30 03:39:48.334901 containerd[1508]: 2025-04-30 03:39:48.237 [INFO][4434] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--9--916214001e-k8s-calico--kube--controllers--5c4fd78ccc--2pshn-eth0 calico-kube-controllers-5c4fd78ccc- calico-system 5cbd4a0f-2d60-4cb4-95f0-be6674a91f19 753 0 2025-04-30 03:39:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5c4fd78ccc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-3-9-916214001e calico-kube-controllers-5c4fd78ccc-2pshn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1f6562eba1e [] []}} ContainerID="f6fd1a66cf272971e919ee6a35fd671ef83bd8bda4aeda22466b4a16c5576322" Namespace="calico-system" Pod="calico-kube-controllers-5c4fd78ccc-2pshn" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-calico--kube--controllers--5c4fd78ccc--2pshn-" Apr 30 03:39:48.334901 containerd[1508]: 2025-04-30 03:39:48.237 [INFO][4434] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f6fd1a66cf272971e919ee6a35fd671ef83bd8bda4aeda22466b4a16c5576322" Namespace="calico-system" Pod="calico-kube-controllers-5c4fd78ccc-2pshn" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-calico--kube--controllers--5c4fd78ccc--2pshn-eth0" Apr 30 03:39:48.334901 containerd[1508]: 2025-04-30 03:39:48.279 [INFO][4458] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f6fd1a66cf272971e919ee6a35fd671ef83bd8bda4aeda22466b4a16c5576322" HandleID="k8s-pod-network.f6fd1a66cf272971e919ee6a35fd671ef83bd8bda4aeda22466b4a16c5576322" Workload="ci--4081--3--3--9--916214001e-k8s-calico--kube--controllers--5c4fd78ccc--2pshn-eth0" Apr 30 03:39:48.334901 containerd[1508]: 2025-04-30 03:39:48.290 [INFO][4458] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f6fd1a66cf272971e919ee6a35fd671ef83bd8bda4aeda22466b4a16c5576322" HandleID="k8s-pod-network.f6fd1a66cf272971e919ee6a35fd671ef83bd8bda4aeda22466b4a16c5576322" Workload="ci--4081--3--3--9--916214001e-k8s-calico--kube--controllers--5c4fd78ccc--2pshn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b3d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-9-916214001e", "pod":"calico-kube-controllers-5c4fd78ccc-2pshn", "timestamp":"2025-04-30 03:39:48.27953591 +0000 UTC"}, Hostname:"ci-4081-3-3-9-916214001e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:39:48.334901 containerd[1508]: 2025-04-30 03:39:48.290 [INFO][4458] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:48.334901 containerd[1508]: 2025-04-30 03:39:48.290 [INFO][4458] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:39:48.334901 containerd[1508]: 2025-04-30 03:39:48.290 [INFO][4458] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-9-916214001e' Apr 30 03:39:48.334901 containerd[1508]: 2025-04-30 03:39:48.292 [INFO][4458] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f6fd1a66cf272971e919ee6a35fd671ef83bd8bda4aeda22466b4a16c5576322" host="ci-4081-3-3-9-916214001e" Apr 30 03:39:48.334901 containerd[1508]: 2025-04-30 03:39:48.296 [INFO][4458] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-9-916214001e" Apr 30 03:39:48.334901 containerd[1508]: 2025-04-30 03:39:48.300 [INFO][4458] ipam/ipam.go 489: Trying affinity for 192.168.19.64/26 host="ci-4081-3-3-9-916214001e" Apr 30 03:39:48.334901 containerd[1508]: 2025-04-30 03:39:48.302 [INFO][4458] ipam/ipam.go 155: Attempting to load block cidr=192.168.19.64/26 host="ci-4081-3-3-9-916214001e" Apr 30 03:39:48.334901 containerd[1508]: 2025-04-30 03:39:48.304 [INFO][4458] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.64/26 host="ci-4081-3-3-9-916214001e" Apr 30 03:39:48.334901 containerd[1508]: 2025-04-30 03:39:48.305 [INFO][4458] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.64/26 handle="k8s-pod-network.f6fd1a66cf272971e919ee6a35fd671ef83bd8bda4aeda22466b4a16c5576322" host="ci-4081-3-3-9-916214001e" Apr 30 03:39:48.334901 containerd[1508]: 2025-04-30 03:39:48.306 [INFO][4458] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f6fd1a66cf272971e919ee6a35fd671ef83bd8bda4aeda22466b4a16c5576322 Apr 30 03:39:48.334901 containerd[1508]: 2025-04-30 03:39:48.310 [INFO][4458] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.19.64/26 handle="k8s-pod-network.f6fd1a66cf272971e919ee6a35fd671ef83bd8bda4aeda22466b4a16c5576322" host="ci-4081-3-3-9-916214001e" Apr 30 03:39:48.334901 containerd[1508]: 2025-04-30 03:39:48.315 [INFO][4458] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.19.68/26] block=192.168.19.64/26 handle="k8s-pod-network.f6fd1a66cf272971e919ee6a35fd671ef83bd8bda4aeda22466b4a16c5576322" host="ci-4081-3-3-9-916214001e" Apr 30 03:39:48.334901 containerd[1508]: 2025-04-30 03:39:48.316 [INFO][4458] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.68/26] handle="k8s-pod-network.f6fd1a66cf272971e919ee6a35fd671ef83bd8bda4aeda22466b4a16c5576322" host="ci-4081-3-3-9-916214001e" Apr 30 03:39:48.334901 containerd[1508]: 2025-04-30 03:39:48.316 [INFO][4458] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
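The [4458] entries above walk Calico's IPAM assignment end to end: acquire the host-wide IPAM lock, look up the node's block affinity, load the affine block (192.168.19.64/26 here), claim a free address under a handle named after the container ID, write the block back, and release the lock. Below is a minimal self-contained sketch of that sequence; blockAllocator and its fields are hypothetical stand-ins, and real Calico persists blocks in the datastore rather than in memory.

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// blockAllocator is a hypothetical stand-in for Calico's per-host affine
// block. Real Calico keeps the block in the datastore and takes a
// host-wide lock before mutating it (the "Acquired host-wide IPAM lock"
// lines above).
type blockAllocator struct {
	mu    sync.Mutex            // models the host-wide IPAM lock
	cidr  netip.Prefix          // the affine block, e.g. 192.168.19.64/26
	inUse map[netip.Addr]string // addr -> handle ("k8s-pod-network.<containerID>")
}

func newBlockAllocator(cidr string) *blockAllocator {
	return &blockAllocator{
		cidr:  netip.MustParsePrefix(cidr),
		inUse: map[netip.Addr]string{},
	}
}

// AutoAssign mirrors the logged flow: lock, scan the affine block for a
// free address, record it under the handle, unlock.
func (b *blockAllocator) AutoAssign(handle string) (netip.Addr, error) {
	b.mu.Lock()         // "About to acquire host-wide IPAM lock."
	defer b.mu.Unlock() // "Released host-wide IPAM lock."
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.inUse[a]; !taken {
			b.inUse[a] = handle // "Writing block in order to claim IPs"
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	alloc := newBlockAllocator("192.168.19.64/26")
	// Pretend .64-.67 were claimed earlier, as in this node's log.
	for i := 0; i < 4; i++ {
		alloc.AutoAssign(fmt.Sprintf("earlier-%d", i))
	}
	ip, _ := alloc.AutoAssign("k8s-pod-network.f6fd1a66cf27...")
	fmt.Println("claimed:", ip) // claimed: 192.168.19.68, matching the log
}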
Apr 30 03:39:48.334901 containerd[1508]: 2025-04-30 03:39:48.316 [INFO][4458] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.68/26] IPv6=[] ContainerID="f6fd1a66cf272971e919ee6a35fd671ef83bd8bda4aeda22466b4a16c5576322" HandleID="k8s-pod-network.f6fd1a66cf272971e919ee6a35fd671ef83bd8bda4aeda22466b4a16c5576322" Workload="ci--4081--3--3--9--916214001e-k8s-calico--kube--controllers--5c4fd78ccc--2pshn-eth0" Apr 30 03:39:48.335891 containerd[1508]: 2025-04-30 03:39:48.317 [INFO][4434] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f6fd1a66cf272971e919ee6a35fd671ef83bd8bda4aeda22466b4a16c5576322" Namespace="calico-system" Pod="calico-kube-controllers-5c4fd78ccc-2pshn" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-calico--kube--controllers--5c4fd78ccc--2pshn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--916214001e-k8s-calico--kube--controllers--5c4fd78ccc--2pshn-eth0", GenerateName:"calico-kube-controllers-5c4fd78ccc-", Namespace:"calico-system", SelfLink:"", UID:"5cbd4a0f-2d60-4cb4-95f0-be6674a91f19", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 39, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c4fd78ccc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-916214001e", ContainerID:"", Pod:"calico-kube-controllers-5c4fd78ccc-2pshn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f6562eba1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:48.335891 containerd[1508]: 2025-04-30 03:39:48.317 [INFO][4434] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.19.68/32] ContainerID="f6fd1a66cf272971e919ee6a35fd671ef83bd8bda4aeda22466b4a16c5576322" Namespace="calico-system" Pod="calico-kube-controllers-5c4fd78ccc-2pshn" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-calico--kube--controllers--5c4fd78ccc--2pshn-eth0" Apr 30 03:39:48.335891 containerd[1508]: 2025-04-30 03:39:48.318 [INFO][4434] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f6562eba1e ContainerID="f6fd1a66cf272971e919ee6a35fd671ef83bd8bda4aeda22466b4a16c5576322" Namespace="calico-system" Pod="calico-kube-controllers-5c4fd78ccc-2pshn" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-calico--kube--controllers--5c4fd78ccc--2pshn-eth0" Apr 30 03:39:48.335891 containerd[1508]: 2025-04-30 03:39:48.321 [INFO][4434] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f6fd1a66cf272971e919ee6a35fd671ef83bd8bda4aeda22466b4a16c5576322" Namespace="calico-system" Pod="calico-kube-controllers-5c4fd78ccc-2pshn" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-calico--kube--controllers--5c4fd78ccc--2pshn-eth0" Apr 30 
03:39:48.335891 containerd[1508]: 2025-04-30 03:39:48.322 [INFO][4434] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f6fd1a66cf272971e919ee6a35fd671ef83bd8bda4aeda22466b4a16c5576322" Namespace="calico-system" Pod="calico-kube-controllers-5c4fd78ccc-2pshn" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-calico--kube--controllers--5c4fd78ccc--2pshn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--916214001e-k8s-calico--kube--controllers--5c4fd78ccc--2pshn-eth0", GenerateName:"calico-kube-controllers-5c4fd78ccc-", Namespace:"calico-system", SelfLink:"", UID:"5cbd4a0f-2d60-4cb4-95f0-be6674a91f19", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 39, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c4fd78ccc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-916214001e", ContainerID:"f6fd1a66cf272971e919ee6a35fd671ef83bd8bda4aeda22466b4a16c5576322", Pod:"calico-kube-controllers-5c4fd78ccc-2pshn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f6562eba1e", MAC:"5e:82:34:c8:a8:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:48.335891 containerd[1508]: 2025-04-30 03:39:48.332 [INFO][4434] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f6fd1a66cf272971e919ee6a35fd671ef83bd8bda4aeda22466b4a16c5576322" Namespace="calico-system" Pod="calico-kube-controllers-5c4fd78ccc-2pshn" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-calico--kube--controllers--5c4fd78ccc--2pshn-eth0" Apr 30 03:39:48.368107 containerd[1508]: time="2025-04-30T03:39:48.365129113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:39:48.368107 containerd[1508]: time="2025-04-30T03:39:48.365977453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:39:48.368107 containerd[1508]: time="2025-04-30T03:39:48.365989877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:48.368107 containerd[1508]: time="2025-04-30T03:39:48.366130378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:48.382946 systemd[1]: Started cri-containerd-f6fd1a66cf272971e919ee6a35fd671ef83bd8bda4aeda22466b4a16c5576322.scope - libcontainer container f6fd1a66cf272971e919ee6a35fd671ef83bd8bda4aeda22466b4a16c5576322. 
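Each container started through containerd's runc shim with the systemd cgroup driver shows up as a transient scope named cri-containerd-<container-id>.scope, as in the systemd[1] line just above, and the netns bind mounts cleaned up a few lines below are mount units whose names are the systemd-escaped mount path (dashes inside path components become \x2d, slashes become dashes). A small sketch of both naming rules, simplified on the assumption that only '-' and '/' need escaping for these particular paths:

package main

import (
	"fmt"
	"strings"
)

// scopeUnit builds the transient scope name systemd reports for a
// CRI container: "cri-containerd-<id>.scope".
func scopeUnit(containerID string) string {
	return "cri-containerd-" + containerID + ".scope"
}

// escapeMountUnit is a simplified systemd-escape for the netns paths in
// this log. Real systemd-escape hex-encodes every byte outside a small
// safe set; for /run/netns/cni-<uuid> only '-' and '/' matter, and the
// order below (escape '-' first, then turn '/' into '-') reproduces it.
func escapeMountUnit(path string) string {
	p := strings.TrimPrefix(path, "/")
	p = strings.ReplaceAll(p, "-", `\x2d`)
	p = strings.ReplaceAll(p, "/", "-")
	return p + ".mount"
}

func main() {
	fmt.Println(scopeUnit("f6fd1a66cf272971e919ee6a35fd671ef83bd8bda4aeda22466b4a16c5576322"))
	fmt.Println(escapeMountUnit("/run/netns/cni-74c37c1e-b9a0-596d-88ca-d02ab09d35f6"))
	// -> run-netns-cni\x2d74c37c1e\x2db9a0\x2d596d\x2d88ca\x2dd02ab09d35f6.mount,
	// matching the "Deactivated successfully" mount units below.
}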
Apr 30 03:39:48.450110 systemd[1]: run-netns-cni\x2d74c37c1e\x2db9a0\x2d596d\x2d88ca\x2dd02ab09d35f6.mount: Deactivated successfully. Apr 30 03:39:48.450197 systemd[1]: run-netns-cni\x2dbe2375a3\x2dbfaa\x2df4e5\x2d4597\x2d57ee68f7dfdf.mount: Deactivated successfully. Apr 30 03:39:48.461391 kubelet[2724]: I0430 03:39:48.461319 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-9gr6g" podStartSLOduration=35.461299012 podStartE2EDuration="35.461299012s" podCreationTimestamp="2025-04-30 03:39:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:39:48.4200903 +0000 UTC m=+40.524451359" watchObservedRunningTime="2025-04-30 03:39:48.461299012 +0000 UTC m=+40.565660071" Apr 30 03:39:48.487167 systemd-networkd[1406]: cali0d39d3d1167: Link UP Apr 30 03:39:48.487862 systemd-networkd[1406]: cali0d39d3d1167: Gained carrier Apr 30 03:39:48.504547 containerd[1508]: 2025-04-30 03:39:48.278 [INFO][4447] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--pwlb4-eth0 coredns-6f6b679f8f- kube-system 007e272b-6dc6-4988-a54b-c1109f76b258 754 0 2025-04-30 03:39:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-9-916214001e coredns-6f6b679f8f-pwlb4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0d39d3d1167 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1306a518e445044c8a055216429c6bef8f0dbf95bac19e494b5efe8abd09aabb" Namespace="kube-system" Pod="coredns-6f6b679f8f-pwlb4" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--pwlb4-" Apr 30 03:39:48.504547 containerd[1508]: 2025-04-30 03:39:48.278 [INFO][4447] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1306a518e445044c8a055216429c6bef8f0dbf95bac19e494b5efe8abd09aabb" Namespace="kube-system" Pod="coredns-6f6b679f8f-pwlb4" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--pwlb4-eth0" Apr 30 03:39:48.504547 containerd[1508]: 2025-04-30 03:39:48.308 [INFO][4467] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1306a518e445044c8a055216429c6bef8f0dbf95bac19e494b5efe8abd09aabb" HandleID="k8s-pod-network.1306a518e445044c8a055216429c6bef8f0dbf95bac19e494b5efe8abd09aabb" Workload="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--pwlb4-eth0" Apr 30 03:39:48.504547 containerd[1508]: 2025-04-30 03:39:48.394 [INFO][4467] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1306a518e445044c8a055216429c6bef8f0dbf95bac19e494b5efe8abd09aabb" HandleID="k8s-pod-network.1306a518e445044c8a055216429c6bef8f0dbf95bac19e494b5efe8abd09aabb" Workload="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--pwlb4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003350d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-9-916214001e", "pod":"coredns-6f6b679f8f-pwlb4", "timestamp":"2025-04-30 03:39:48.308837053 +0000 UTC"}, Hostname:"ci-4081-3-3-9-916214001e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 
03:39:48.504547 containerd[1508]: 2025-04-30 03:39:48.394 [INFO][4467] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:48.504547 containerd[1508]: 2025-04-30 03:39:48.394 [INFO][4467] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:39:48.504547 containerd[1508]: 2025-04-30 03:39:48.395 [INFO][4467] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-9-916214001e' Apr 30 03:39:48.504547 containerd[1508]: 2025-04-30 03:39:48.398 [INFO][4467] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1306a518e445044c8a055216429c6bef8f0dbf95bac19e494b5efe8abd09aabb" host="ci-4081-3-3-9-916214001e" Apr 30 03:39:48.504547 containerd[1508]: 2025-04-30 03:39:48.406 [INFO][4467] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-9-916214001e" Apr 30 03:39:48.504547 containerd[1508]: 2025-04-30 03:39:48.415 [INFO][4467] ipam/ipam.go 489: Trying affinity for 192.168.19.64/26 host="ci-4081-3-3-9-916214001e" Apr 30 03:39:48.504547 containerd[1508]: 2025-04-30 03:39:48.423 [INFO][4467] ipam/ipam.go 155: Attempting to load block cidr=192.168.19.64/26 host="ci-4081-3-3-9-916214001e" Apr 30 03:39:48.504547 containerd[1508]: 2025-04-30 03:39:48.453 [INFO][4467] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.64/26 host="ci-4081-3-3-9-916214001e" Apr 30 03:39:48.504547 containerd[1508]: 2025-04-30 03:39:48.453 [INFO][4467] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.64/26 handle="k8s-pod-network.1306a518e445044c8a055216429c6bef8f0dbf95bac19e494b5efe8abd09aabb" host="ci-4081-3-3-9-916214001e" Apr 30 03:39:48.504547 containerd[1508]: 2025-04-30 03:39:48.462 [INFO][4467] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1306a518e445044c8a055216429c6bef8f0dbf95bac19e494b5efe8abd09aabb Apr 30 03:39:48.504547 containerd[1508]: 2025-04-30 03:39:48.470 [INFO][4467] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.19.64/26 handle="k8s-pod-network.1306a518e445044c8a055216429c6bef8f0dbf95bac19e494b5efe8abd09aabb" host="ci-4081-3-3-9-916214001e" Apr 30 03:39:48.504547 containerd[1508]: 2025-04-30 03:39:48.480 [INFO][4467] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.19.69/26] block=192.168.19.64/26 handle="k8s-pod-network.1306a518e445044c8a055216429c6bef8f0dbf95bac19e494b5efe8abd09aabb" host="ci-4081-3-3-9-916214001e" Apr 30 03:39:48.504547 containerd[1508]: 2025-04-30 03:39:48.480 [INFO][4467] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.69/26] handle="k8s-pod-network.1306a518e445044c8a055216429c6bef8f0dbf95bac19e494b5efe8abd09aabb" host="ci-4081-3-3-9-916214001e" Apr 30 03:39:48.504547 containerd[1508]: 2025-04-30 03:39:48.480 [INFO][4467] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
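The pods in this stretch land on consecutive addresses from the node's affine block: 192.168.19.68 for calico-kube-controllers, 192.168.19.69 for coredns, and 192.168.19.70 for the second calico-apiserver shortly after. As a sanity check, a /26 leaves 6 host bits, so 192.168.19.64/26 covers the 64 addresses 192.168.19.64 through 192.168.19.127; a short sketch verifying the claimed IPs fall inside:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.19.64/26")
	// A /26 leaves 32-26 = 6 host bits, so 2^6 = 64 addresses.
	fmt.Println("addresses in block:", 1<<(32-block.Bits()))

	// The three addresses claimed in this section of the log.
	for _, s := range []string{"192.168.19.68", "192.168.19.69", "192.168.19.70"} {
		a := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", a, block, block.Contains(a))
	}
}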
Apr 30 03:39:48.504547 containerd[1508]: 2025-04-30 03:39:48.480 [INFO][4467] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.69/26] IPv6=[] ContainerID="1306a518e445044c8a055216429c6bef8f0dbf95bac19e494b5efe8abd09aabb" HandleID="k8s-pod-network.1306a518e445044c8a055216429c6bef8f0dbf95bac19e494b5efe8abd09aabb" Workload="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--pwlb4-eth0" Apr 30 03:39:48.505164 containerd[1508]: 2025-04-30 03:39:48.483 [INFO][4447] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1306a518e445044c8a055216429c6bef8f0dbf95bac19e494b5efe8abd09aabb" Namespace="kube-system" Pod="coredns-6f6b679f8f-pwlb4" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--pwlb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--pwlb4-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"007e272b-6dc6-4988-a54b-c1109f76b258", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 39, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-916214001e", ContainerID:"", Pod:"coredns-6f6b679f8f-pwlb4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0d39d3d1167", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:48.505164 containerd[1508]: 2025-04-30 03:39:48.483 [INFO][4447] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.19.69/32] ContainerID="1306a518e445044c8a055216429c6bef8f0dbf95bac19e494b5efe8abd09aabb" Namespace="kube-system" Pod="coredns-6f6b679f8f-pwlb4" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--pwlb4-eth0" Apr 30 03:39:48.505164 containerd[1508]: 2025-04-30 03:39:48.483 [INFO][4447] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0d39d3d1167 ContainerID="1306a518e445044c8a055216429c6bef8f0dbf95bac19e494b5efe8abd09aabb" Namespace="kube-system" Pod="coredns-6f6b679f8f-pwlb4" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--pwlb4-eth0" Apr 30 03:39:48.505164 containerd[1508]: 2025-04-30 03:39:48.486 [INFO][4447] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1306a518e445044c8a055216429c6bef8f0dbf95bac19e494b5efe8abd09aabb" Namespace="kube-system" Pod="coredns-6f6b679f8f-pwlb4" 
WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--pwlb4-eth0" Apr 30 03:39:48.505164 containerd[1508]: 2025-04-30 03:39:48.488 [INFO][4447] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1306a518e445044c8a055216429c6bef8f0dbf95bac19e494b5efe8abd09aabb" Namespace="kube-system" Pod="coredns-6f6b679f8f-pwlb4" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--pwlb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--pwlb4-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"007e272b-6dc6-4988-a54b-c1109f76b258", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 39, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-916214001e", ContainerID:"1306a518e445044c8a055216429c6bef8f0dbf95bac19e494b5efe8abd09aabb", Pod:"coredns-6f6b679f8f-pwlb4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0d39d3d1167", MAC:"ba:14:4b:eb:a7:86", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:48.505164 containerd[1508]: 2025-04-30 03:39:48.499 [INFO][4447] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1306a518e445044c8a055216429c6bef8f0dbf95bac19e494b5efe8abd09aabb" Namespace="kube-system" Pod="coredns-6f6b679f8f-pwlb4" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--pwlb4-eth0" Apr 30 03:39:48.534761 containerd[1508]: time="2025-04-30T03:39:48.534389506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c4fd78ccc-2pshn,Uid:5cbd4a0f-2d60-4cb4-95f0-be6674a91f19,Namespace:calico-system,Attempt:1,} returns sandbox id \"f6fd1a66cf272971e919ee6a35fd671ef83bd8bda4aeda22466b4a16c5576322\"" Apr 30 03:39:48.541406 containerd[1508]: time="2025-04-30T03:39:48.541107231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:39:48.541598 containerd[1508]: time="2025-04-30T03:39:48.541216253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:39:48.541598 containerd[1508]: time="2025-04-30T03:39:48.541233266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:48.541598 containerd[1508]: time="2025-04-30T03:39:48.541364900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:48.566995 systemd[1]: Started cri-containerd-1306a518e445044c8a055216429c6bef8f0dbf95bac19e494b5efe8abd09aabb.scope - libcontainer container 1306a518e445044c8a055216429c6bef8f0dbf95bac19e494b5efe8abd09aabb. Apr 30 03:39:48.603644 containerd[1508]: time="2025-04-30T03:39:48.603295039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pwlb4,Uid:007e272b-6dc6-4988-a54b-c1109f76b258,Namespace:kube-system,Attempt:1,} returns sandbox id \"1306a518e445044c8a055216429c6bef8f0dbf95bac19e494b5efe8abd09aabb\"" Apr 30 03:39:48.606874 containerd[1508]: time="2025-04-30T03:39:48.606839261Z" level=info msg="CreateContainer within sandbox \"1306a518e445044c8a055216429c6bef8f0dbf95bac19e494b5efe8abd09aabb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:39:48.623429 containerd[1508]: time="2025-04-30T03:39:48.623372812Z" level=info msg="CreateContainer within sandbox \"1306a518e445044c8a055216429c6bef8f0dbf95bac19e494b5efe8abd09aabb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ed0010f37994c1ec3e228f28b388fdef5643004b7d092934c12c6925ef0d2bc5\"" Apr 30 03:39:48.624107 containerd[1508]: time="2025-04-30T03:39:48.624070441Z" level=info msg="StartContainer for \"ed0010f37994c1ec3e228f28b388fdef5643004b7d092934c12c6925ef0d2bc5\"" Apr 30 03:39:48.651995 systemd[1]: Started cri-containerd-ed0010f37994c1ec3e228f28b388fdef5643004b7d092934c12c6925ef0d2bc5.scope - libcontainer container ed0010f37994c1ec3e228f28b388fdef5643004b7d092934c12c6925ef0d2bc5. Apr 30 03:39:48.679723 containerd[1508]: time="2025-04-30T03:39:48.679660613Z" level=info msg="StartContainer for \"ed0010f37994c1ec3e228f28b388fdef5643004b7d092934c12c6925ef0d2bc5\" returns successfully" Apr 30 03:39:48.750021 systemd-networkd[1406]: cali72fda76899f: Gained IPv6LL Apr 30 03:39:49.006626 systemd-networkd[1406]: cali5f3b6903513: Gained IPv6LL Apr 30 03:39:49.071712 containerd[1508]: time="2025-04-30T03:39:49.069758389Z" level=info msg="StopPodSandbox for \"ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb\"" Apr 30 03:39:49.134324 systemd-networkd[1406]: cali9f72881c5f9: Gained IPv6LL Apr 30 03:39:49.172859 kubelet[2724]: I0430 03:39:49.171983 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:39:49.236442 containerd[1508]: 2025-04-30 03:39:49.155 [INFO][4639] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" Apr 30 03:39:49.236442 containerd[1508]: 2025-04-30 03:39:49.156 [INFO][4639] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" iface="eth0" netns="/var/run/netns/cni-54ea329e-0537-8a22-097f-270a738b83d5" Apr 30 03:39:49.236442 containerd[1508]: 2025-04-30 03:39:49.157 [INFO][4639] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" iface="eth0" netns="/var/run/netns/cni-54ea329e-0537-8a22-097f-270a738b83d5" Apr 30 03:39:49.236442 containerd[1508]: 2025-04-30 03:39:49.160 [INFO][4639] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" iface="eth0" netns="/var/run/netns/cni-54ea329e-0537-8a22-097f-270a738b83d5" Apr 30 03:39:49.236442 containerd[1508]: 2025-04-30 03:39:49.160 [INFO][4639] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" Apr 30 03:39:49.236442 containerd[1508]: 2025-04-30 03:39:49.160 [INFO][4639] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" Apr 30 03:39:49.236442 containerd[1508]: 2025-04-30 03:39:49.210 [INFO][4646] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" HandleID="k8s-pod-network.ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" Workload="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--kr6rd-eth0" Apr 30 03:39:49.236442 containerd[1508]: 2025-04-30 03:39:49.210 [INFO][4646] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:49.236442 containerd[1508]: 2025-04-30 03:39:49.211 [INFO][4646] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:39:49.236442 containerd[1508]: 2025-04-30 03:39:49.221 [WARNING][4646] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" HandleID="k8s-pod-network.ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" Workload="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--kr6rd-eth0" Apr 30 03:39:49.236442 containerd[1508]: 2025-04-30 03:39:49.221 [INFO][4646] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" HandleID="k8s-pod-network.ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" Workload="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--kr6rd-eth0" Apr 30 03:39:49.236442 containerd[1508]: 2025-04-30 03:39:49.225 [INFO][4646] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:39:49.236442 containerd[1508]: 2025-04-30 03:39:49.231 [INFO][4639] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" Apr 30 03:39:49.237406 containerd[1508]: time="2025-04-30T03:39:49.236612712Z" level=info msg="TearDown network for sandbox \"ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb\" successfully" Apr 30 03:39:49.237406 containerd[1508]: time="2025-04-30T03:39:49.236648992Z" level=info msg="StopPodSandbox for \"ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb\" returns successfully" Apr 30 03:39:49.237525 containerd[1508]: time="2025-04-30T03:39:49.237480450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7665d48578-kr6rd,Uid:919a621f-1bcc-46da-9865-aeb53a85cd70,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:39:49.390030 systemd-networkd[1406]: cali1f6562eba1e: Gained IPv6LL Apr 30 03:39:49.409800 systemd-networkd[1406]: caliedd2efb4b80: Link UP Apr 30 03:39:49.410522 systemd-networkd[1406]: caliedd2efb4b80: Gained carrier Apr 30 03:39:49.432479 systemd[1]: run-netns-cni\x2d54ea329e\x2d0537\x2d8a22\x2d097f\x2d270a738b83d5.mount: Deactivated successfully. Apr 30 03:39:49.443876 containerd[1508]: 2025-04-30 03:39:49.292 [INFO][4668] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--kr6rd-eth0 calico-apiserver-7665d48578- calico-apiserver 919a621f-1bcc-46da-9865-aeb53a85cd70 777 0 2025-04-30 03:39:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7665d48578 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-9-916214001e calico-apiserver-7665d48578-kr6rd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliedd2efb4b80 [] []}} ContainerID="1ffe25b18c791008349b8aa355e76b30cad2bd671f0572fc15dbf5e8b2a22cae" Namespace="calico-apiserver" Pod="calico-apiserver-7665d48578-kr6rd" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--kr6rd-" Apr 30 03:39:49.443876 containerd[1508]: 2025-04-30 03:39:49.292 [INFO][4668] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1ffe25b18c791008349b8aa355e76b30cad2bd671f0572fc15dbf5e8b2a22cae" Namespace="calico-apiserver" Pod="calico-apiserver-7665d48578-kr6rd" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--kr6rd-eth0" Apr 30 03:39:49.443876 containerd[1508]: 2025-04-30 03:39:49.337 [INFO][4688] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ffe25b18c791008349b8aa355e76b30cad2bd671f0572fc15dbf5e8b2a22cae" HandleID="k8s-pod-network.1ffe25b18c791008349b8aa355e76b30cad2bd671f0572fc15dbf5e8b2a22cae" Workload="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--kr6rd-eth0" Apr 30 03:39:49.443876 containerd[1508]: 2025-04-30 03:39:49.354 [INFO][4688] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1ffe25b18c791008349b8aa355e76b30cad2bd671f0572fc15dbf5e8b2a22cae" HandleID="k8s-pod-network.1ffe25b18c791008349b8aa355e76b30cad2bd671f0572fc15dbf5e8b2a22cae" Workload="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--kr6rd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031d3e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-9-916214001e", "pod":"calico-apiserver-7665d48578-kr6rd", 
"timestamp":"2025-04-30 03:39:49.337858431 +0000 UTC"}, Hostname:"ci-4081-3-3-9-916214001e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:39:49.443876 containerd[1508]: 2025-04-30 03:39:49.355 [INFO][4688] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:39:49.443876 containerd[1508]: 2025-04-30 03:39:49.355 [INFO][4688] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:39:49.443876 containerd[1508]: 2025-04-30 03:39:49.355 [INFO][4688] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-9-916214001e' Apr 30 03:39:49.443876 containerd[1508]: 2025-04-30 03:39:49.358 [INFO][4688] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1ffe25b18c791008349b8aa355e76b30cad2bd671f0572fc15dbf5e8b2a22cae" host="ci-4081-3-3-9-916214001e" Apr 30 03:39:49.443876 containerd[1508]: 2025-04-30 03:39:49.363 [INFO][4688] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-9-916214001e" Apr 30 03:39:49.443876 containerd[1508]: 2025-04-30 03:39:49.374 [INFO][4688] ipam/ipam.go 489: Trying affinity for 192.168.19.64/26 host="ci-4081-3-3-9-916214001e" Apr 30 03:39:49.443876 containerd[1508]: 2025-04-30 03:39:49.381 [INFO][4688] ipam/ipam.go 155: Attempting to load block cidr=192.168.19.64/26 host="ci-4081-3-3-9-916214001e" Apr 30 03:39:49.443876 containerd[1508]: 2025-04-30 03:39:49.384 [INFO][4688] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.64/26 host="ci-4081-3-3-9-916214001e" Apr 30 03:39:49.443876 containerd[1508]: 2025-04-30 03:39:49.384 [INFO][4688] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.64/26 handle="k8s-pod-network.1ffe25b18c791008349b8aa355e76b30cad2bd671f0572fc15dbf5e8b2a22cae" host="ci-4081-3-3-9-916214001e" Apr 30 03:39:49.443876 containerd[1508]: 2025-04-30 03:39:49.386 [INFO][4688] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1ffe25b18c791008349b8aa355e76b30cad2bd671f0572fc15dbf5e8b2a22cae Apr 30 03:39:49.443876 containerd[1508]: 2025-04-30 03:39:49.391 [INFO][4688] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.19.64/26 handle="k8s-pod-network.1ffe25b18c791008349b8aa355e76b30cad2bd671f0572fc15dbf5e8b2a22cae" host="ci-4081-3-3-9-916214001e" Apr 30 03:39:49.443876 containerd[1508]: 2025-04-30 03:39:49.402 [INFO][4688] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.19.70/26] block=192.168.19.64/26 handle="k8s-pod-network.1ffe25b18c791008349b8aa355e76b30cad2bd671f0572fc15dbf5e8b2a22cae" host="ci-4081-3-3-9-916214001e" Apr 30 03:39:49.443876 containerd[1508]: 2025-04-30 03:39:49.402 [INFO][4688] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.70/26] handle="k8s-pod-network.1ffe25b18c791008349b8aa355e76b30cad2bd671f0572fc15dbf5e8b2a22cae" host="ci-4081-3-3-9-916214001e" Apr 30 03:39:49.443876 containerd[1508]: 2025-04-30 03:39:49.402 [INFO][4688] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:39:49.443876 containerd[1508]: 2025-04-30 03:39:49.402 [INFO][4688] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.70/26] IPv6=[] ContainerID="1ffe25b18c791008349b8aa355e76b30cad2bd671f0572fc15dbf5e8b2a22cae" HandleID="k8s-pod-network.1ffe25b18c791008349b8aa355e76b30cad2bd671f0572fc15dbf5e8b2a22cae" Workload="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--kr6rd-eth0" Apr 30 03:39:49.446347 containerd[1508]: 2025-04-30 03:39:49.405 [INFO][4668] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1ffe25b18c791008349b8aa355e76b30cad2bd671f0572fc15dbf5e8b2a22cae" Namespace="calico-apiserver" Pod="calico-apiserver-7665d48578-kr6rd" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--kr6rd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--kr6rd-eth0", GenerateName:"calico-apiserver-7665d48578-", Namespace:"calico-apiserver", SelfLink:"", UID:"919a621f-1bcc-46da-9865-aeb53a85cd70", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 39, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7665d48578", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-916214001e", ContainerID:"", Pod:"calico-apiserver-7665d48578-kr6rd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliedd2efb4b80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:49.446347 containerd[1508]: 2025-04-30 03:39:49.405 [INFO][4668] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.19.70/32] ContainerID="1ffe25b18c791008349b8aa355e76b30cad2bd671f0572fc15dbf5e8b2a22cae" Namespace="calico-apiserver" Pod="calico-apiserver-7665d48578-kr6rd" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--kr6rd-eth0" Apr 30 03:39:49.446347 containerd[1508]: 2025-04-30 03:39:49.405 [INFO][4668] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliedd2efb4b80 ContainerID="1ffe25b18c791008349b8aa355e76b30cad2bd671f0572fc15dbf5e8b2a22cae" Namespace="calico-apiserver" Pod="calico-apiserver-7665d48578-kr6rd" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--kr6rd-eth0" Apr 30 03:39:49.446347 containerd[1508]: 2025-04-30 03:39:49.410 [INFO][4668] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1ffe25b18c791008349b8aa355e76b30cad2bd671f0572fc15dbf5e8b2a22cae" Namespace="calico-apiserver" Pod="calico-apiserver-7665d48578-kr6rd" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--kr6rd-eth0" Apr 30 03:39:49.446347 containerd[1508]: 2025-04-30 03:39:49.410 [INFO][4668] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="1ffe25b18c791008349b8aa355e76b30cad2bd671f0572fc15dbf5e8b2a22cae" Namespace="calico-apiserver" Pod="calico-apiserver-7665d48578-kr6rd" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--kr6rd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--kr6rd-eth0", GenerateName:"calico-apiserver-7665d48578-", Namespace:"calico-apiserver", SelfLink:"", UID:"919a621f-1bcc-46da-9865-aeb53a85cd70", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 39, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7665d48578", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-916214001e", ContainerID:"1ffe25b18c791008349b8aa355e76b30cad2bd671f0572fc15dbf5e8b2a22cae", Pod:"calico-apiserver-7665d48578-kr6rd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliedd2efb4b80", MAC:"d2:f7:37:35:f3:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:39:49.446347 containerd[1508]: 2025-04-30 03:39:49.433 [INFO][4668] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1ffe25b18c791008349b8aa355e76b30cad2bd671f0572fc15dbf5e8b2a22cae" Namespace="calico-apiserver" Pod="calico-apiserver-7665d48578-kr6rd" WorkloadEndpoint="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--kr6rd-eth0" Apr 30 03:39:49.523321 containerd[1508]: time="2025-04-30T03:39:49.522963402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:39:49.523321 containerd[1508]: time="2025-04-30T03:39:49.523078054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:39:49.523321 containerd[1508]: time="2025-04-30T03:39:49.523128612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:49.524620 containerd[1508]: time="2025-04-30T03:39:49.523713182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:39:49.536398 kubelet[2724]: I0430 03:39:49.533291 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-pwlb4" podStartSLOduration=36.533272651 podStartE2EDuration="36.533272651s" podCreationTimestamp="2025-04-30 03:39:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:39:49.480342249 +0000 UTC m=+41.584703307" watchObservedRunningTime="2025-04-30 03:39:49.533272651 +0000 UTC m=+41.637633710" Apr 30 03:39:49.565428 systemd[1]: Started cri-containerd-1ffe25b18c791008349b8aa355e76b30cad2bd671f0572fc15dbf5e8b2a22cae.scope - libcontainer container 1ffe25b18c791008349b8aa355e76b30cad2bd671f0572fc15dbf5e8b2a22cae. Apr 30 03:39:49.631254 containerd[1508]: time="2025-04-30T03:39:49.631139826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7665d48578-kr6rd,Uid:919a621f-1bcc-46da-9865-aeb53a85cd70,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1ffe25b18c791008349b8aa355e76b30cad2bd671f0572fc15dbf5e8b2a22cae\"" Apr 30 03:39:49.665565 containerd[1508]: time="2025-04-30T03:39:49.665376433Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:49.667200 containerd[1508]: time="2025-04-30T03:39:49.667122260Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" Apr 30 03:39:49.668466 containerd[1508]: time="2025-04-30T03:39:49.668374722Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:49.670618 containerd[1508]: time="2025-04-30T03:39:49.670562724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:49.671223 containerd[1508]: time="2025-04-30T03:39:49.671034415Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.766839123s" Apr 30 03:39:49.671223 containerd[1508]: time="2025-04-30T03:39:49.671063632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" Apr 30 03:39:49.673655 containerd[1508]: time="2025-04-30T03:39:49.673621018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:39:49.675536 containerd[1508]: time="2025-04-30T03:39:49.675155377Z" level=info msg="CreateContainer within sandbox \"0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 03:39:49.692713 containerd[1508]: time="2025-04-30T03:39:49.692656345Z" level=info msg="CreateContainer within sandbox \"0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"bc8d47e41b44cc5c95bfd035934aa5906ff6e58d1dc49b3e4b5b202ecc3ff2a8\"" Apr 
30 03:39:49.694032 containerd[1508]: time="2025-04-30T03:39:49.693384042Z" level=info msg="StartContainer for \"bc8d47e41b44cc5c95bfd035934aa5906ff6e58d1dc49b3e4b5b202ecc3ff2a8\"" Apr 30 03:39:49.730937 systemd[1]: Started cri-containerd-bc8d47e41b44cc5c95bfd035934aa5906ff6e58d1dc49b3e4b5b202ecc3ff2a8.scope - libcontainer container bc8d47e41b44cc5c95bfd035934aa5906ff6e58d1dc49b3e4b5b202ecc3ff2a8. Apr 30 03:39:49.767671 containerd[1508]: time="2025-04-30T03:39:49.767582850Z" level=info msg="StartContainer for \"bc8d47e41b44cc5c95bfd035934aa5906ff6e58d1dc49b3e4b5b202ecc3ff2a8\" returns successfully" Apr 30 03:39:50.221972 systemd-networkd[1406]: cali0d39d3d1167: Gained IPv6LL Apr 30 03:39:50.990027 systemd-networkd[1406]: caliedd2efb4b80: Gained IPv6LL Apr 30 03:39:51.785694 containerd[1508]: time="2025-04-30T03:39:51.785640826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:51.786865 containerd[1508]: time="2025-04-30T03:39:51.786834416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" Apr 30 03:39:51.787717 containerd[1508]: time="2025-04-30T03:39:51.787680645Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:51.789554 containerd[1508]: time="2025-04-30T03:39:51.789511219Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:51.790404 containerd[1508]: time="2025-04-30T03:39:51.790105780Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 2.116455966s" Apr 30 03:39:51.790404 containerd[1508]: time="2025-04-30T03:39:51.790131149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:39:51.791301 containerd[1508]: time="2025-04-30T03:39:51.791284151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" Apr 30 03:39:51.793474 containerd[1508]: time="2025-04-30T03:39:51.793437089Z" level=info msg="CreateContainer within sandbox \"b3437715e0b3fd3750f058f329591a3c46de53509ad37d358e4f5186b7057594\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:39:51.808696 containerd[1508]: time="2025-04-30T03:39:51.808645993Z" level=info msg="CreateContainer within sandbox \"b3437715e0b3fd3750f058f329591a3c46de53509ad37d358e4f5186b7057594\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bd79f0791fcf2a69de473b8085d29ee4058e937ce054d81393681b46acca07d5\"" Apr 30 03:39:51.814879 containerd[1508]: time="2025-04-30T03:39:51.814819564Z" level=info msg="StartContainer for \"bd79f0791fcf2a69de473b8085d29ee4058e937ce054d81393681b46acca07d5\"" Apr 30 03:39:51.870010 systemd[1]: Started cri-containerd-bd79f0791fcf2a69de473b8085d29ee4058e937ce054d81393681b46acca07d5.scope - libcontainer container 
bd79f0791fcf2a69de473b8085d29ee4058e937ce054d81393681b46acca07d5. Apr 30 03:39:51.908951 containerd[1508]: time="2025-04-30T03:39:51.908909847Z" level=info msg="StartContainer for \"bd79f0791fcf2a69de473b8085d29ee4058e937ce054d81393681b46acca07d5\" returns successfully" Apr 30 03:39:53.480063 kubelet[2724]: I0430 03:39:53.479721 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:39:53.809921 containerd[1508]: time="2025-04-30T03:39:53.809576368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:53.811470 containerd[1508]: time="2025-04-30T03:39:53.810898890Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" Apr 30 03:39:53.811749 containerd[1508]: time="2025-04-30T03:39:53.811633574Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:53.814957 containerd[1508]: time="2025-04-30T03:39:53.813542874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:53.824380 containerd[1508]: time="2025-04-30T03:39:53.824337691Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 2.032942224s" Apr 30 03:39:53.824564 containerd[1508]: time="2025-04-30T03:39:53.824547958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" Apr 30 03:39:53.825943 containerd[1508]: time="2025-04-30T03:39:53.825929074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:39:53.841927 containerd[1508]: time="2025-04-30T03:39:53.841806900Z" level=info msg="CreateContainer within sandbox \"f6fd1a66cf272971e919ee6a35fd671ef83bd8bda4aeda22466b4a16c5576322\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 03:39:53.859916 containerd[1508]: time="2025-04-30T03:39:53.859867646Z" level=info msg="CreateContainer within sandbox \"f6fd1a66cf272971e919ee6a35fd671ef83bd8bda4aeda22466b4a16c5576322\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"104ea7c262acc49ea14c22c9f38ae6e8ab88011685ea8b3ef321d601b27517f5\"" Apr 30 03:39:53.860872 containerd[1508]: time="2025-04-30T03:39:53.860840351Z" level=info msg="StartContainer for \"104ea7c262acc49ea14c22c9f38ae6e8ab88011685ea8b3ef321d601b27517f5\"" Apr 30 03:39:53.912947 systemd[1]: Started cri-containerd-104ea7c262acc49ea14c22c9f38ae6e8ab88011685ea8b3ef321d601b27517f5.scope - libcontainer container 104ea7c262acc49ea14c22c9f38ae6e8ab88011685ea8b3ef321d601b27517f5. 
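The pull messages in this section carry enough data for a throughput estimate: calico/kube-controllers reports 34789138 bytes read in 2.032942224s, roughly 16 MiB/s, with the csi and apiserver pulls coming in around 4 and 19 MiB/s by the same arithmetic. A quick sketch using the figures taken verbatim from the log:

package main

import (
	"fmt"
	"time"
)

// pullRate computes effective pull throughput in MiB/s from the
// "bytes read" and duration values containerd logs for each image pull.
func pullRate(bytesRead int64, dur string) float64 {
	d, err := time.ParseDuration(dur)
	if err != nil {
		panic(err)
	}
	return float64(bytesRead) / d.Seconds() / (1 << 20)
}

func main() {
	fmt.Printf("calico/csi:              %.1f MiB/s\n", pullRate(7912898, "1.766839123s"))
	fmt.Printf("calico/apiserver:        %.1f MiB/s\n", pullRate(43021437, "2.116455966s"))
	fmt.Printf("calico/kube-controllers: %.1f MiB/s\n", pullRate(34789138, "2.032942224s"))
}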
Apr 30 03:39:53.955238 containerd[1508]: time="2025-04-30T03:39:53.955028706Z" level=info msg="StartContainer for \"104ea7c262acc49ea14c22c9f38ae6e8ab88011685ea8b3ef321d601b27517f5\" returns successfully" Apr 30 03:39:54.310953 containerd[1508]: time="2025-04-30T03:39:54.310893888Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:54.312848 containerd[1508]: time="2025-04-30T03:39:54.312764694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" Apr 30 03:39:54.313973 containerd[1508]: time="2025-04-30T03:39:54.313942287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 487.738851ms" Apr 30 03:39:54.313973 containerd[1508]: time="2025-04-30T03:39:54.313975180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:39:54.324845 containerd[1508]: time="2025-04-30T03:39:54.324588364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 03:39:54.327768 containerd[1508]: time="2025-04-30T03:39:54.327631884Z" level=info msg="CreateContainer within sandbox \"1ffe25b18c791008349b8aa355e76b30cad2bd671f0572fc15dbf5e8b2a22cae\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:39:54.352869 containerd[1508]: time="2025-04-30T03:39:54.352707753Z" level=info msg="CreateContainer within sandbox \"1ffe25b18c791008349b8aa355e76b30cad2bd671f0572fc15dbf5e8b2a22cae\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ee02377ae553808dd23150bed87ab568364e2be66d1d390263e09be7367e2013\"" Apr 30 03:39:54.355932 containerd[1508]: time="2025-04-30T03:39:54.354977162Z" level=info msg="StartContainer for \"ee02377ae553808dd23150bed87ab568364e2be66d1d390263e09be7367e2013\"" Apr 30 03:39:54.392721 systemd[1]: Started cri-containerd-ee02377ae553808dd23150bed87ab568364e2be66d1d390263e09be7367e2013.scope - libcontainer container ee02377ae553808dd23150bed87ab568364e2be66d1d390263e09be7367e2013. 
Apr 30 03:39:54.445977 containerd[1508]: time="2025-04-30T03:39:54.445936436Z" level=info msg="StartContainer for \"ee02377ae553808dd23150bed87ab568364e2be66d1d390263e09be7367e2013\" returns successfully" Apr 30 03:39:54.501504 kubelet[2724]: I0430 03:39:54.501429 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7665d48578-fgcbp" podStartSLOduration=27.732929843 podStartE2EDuration="31.501405582s" podCreationTimestamp="2025-04-30 03:39:23 +0000 UTC" firstStartedPulling="2025-04-30 03:39:48.022345566 +0000 UTC m=+40.126706625" lastFinishedPulling="2025-04-30 03:39:51.790821305 +0000 UTC m=+43.895182364" observedRunningTime="2025-04-30 03:39:52.498342878 +0000 UTC m=+44.602703977" watchObservedRunningTime="2025-04-30 03:39:54.501405582 +0000 UTC m=+46.605766641" Apr 30 03:39:54.524696 kubelet[2724]: I0430 03:39:54.524618 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5c4fd78ccc-2pshn" podStartSLOduration=26.234669145 podStartE2EDuration="31.524593611s" podCreationTimestamp="2025-04-30 03:39:23 +0000 UTC" firstStartedPulling="2025-04-30 03:39:48.535743193 +0000 UTC m=+40.640104251" lastFinishedPulling="2025-04-30 03:39:53.825667658 +0000 UTC m=+45.930028717" observedRunningTime="2025-04-30 03:39:54.523863927 +0000 UTC m=+46.628224996" watchObservedRunningTime="2025-04-30 03:39:54.524593611 +0000 UTC m=+46.628954670" Apr 30 03:39:54.525345 kubelet[2724]: I0430 03:39:54.524963 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7665d48578-kr6rd" podStartSLOduration=26.834237946000002 podStartE2EDuration="31.524954491s" podCreationTimestamp="2025-04-30 03:39:23 +0000 UTC" firstStartedPulling="2025-04-30 03:39:49.633597319 +0000 UTC m=+41.737958377" lastFinishedPulling="2025-04-30 03:39:54.324313862 +0000 UTC m=+46.428674922" observedRunningTime="2025-04-30 03:39:54.503589716 +0000 UTC m=+46.607950765" watchObservedRunningTime="2025-04-30 03:39:54.524954491 +0000 UTC m=+46.629315550" Apr 30 03:39:55.490545 kubelet[2724]: I0430 03:39:55.490151 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:39:56.047898 containerd[1508]: time="2025-04-30T03:39:56.047808501Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:56.049208 containerd[1508]: time="2025-04-30T03:39:56.049127640Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" Apr 30 03:39:56.050442 containerd[1508]: time="2025-04-30T03:39:56.050397595Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:56.053868 containerd[1508]: time="2025-04-30T03:39:56.053839812Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 1.729218734s" Apr 30 03:39:56.054066 containerd[1508]: time="2025-04-30T03:39:56.053948003Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" Apr 30 03:39:56.057210 containerd[1508]: time="2025-04-30T03:39:56.057091060Z" level=info msg="CreateContainer within sandbox \"0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 03:39:56.057524 containerd[1508]: time="2025-04-30T03:39:56.057294515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:39:56.083207 containerd[1508]: time="2025-04-30T03:39:56.083029502Z" level=info msg="CreateContainer within sandbox \"0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"981ca8fd06861604df494e7ff82cc93720c7fcd6d0f28b5f5211571e290ddc0e\"" Apr 30 03:39:56.085571 containerd[1508]: time="2025-04-30T03:39:56.085537427Z" level=info msg="StartContainer for \"981ca8fd06861604df494e7ff82cc93720c7fcd6d0f28b5f5211571e290ddc0e\"" Apr 30 03:39:56.118951 systemd[1]: Started cri-containerd-981ca8fd06861604df494e7ff82cc93720c7fcd6d0f28b5f5211571e290ddc0e.scope - libcontainer container 981ca8fd06861604df494e7ff82cc93720c7fcd6d0f28b5f5211571e290ddc0e. Apr 30 03:39:56.147708 containerd[1508]: time="2025-04-30T03:39:56.147589792Z" level=info msg="StartContainer for \"981ca8fd06861604df494e7ff82cc93720c7fcd6d0f28b5f5211571e290ddc0e\" returns successfully" Apr 30 03:39:56.449609 kubelet[2724]: I0430 03:39:56.449537 2724 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 03:39:56.455234 kubelet[2724]: I0430 03:39:56.455194 2724 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 03:40:05.610236 kubelet[2724]: I0430 03:40:05.610104 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:40:05.670932 kubelet[2724]: I0430 03:40:05.669871 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-djr25" podStartSLOduration=34.504751851 podStartE2EDuration="42.669809469s" podCreationTimestamp="2025-04-30 03:39:23 +0000 UTC" firstStartedPulling="2025-04-30 03:39:47.889857069 +0000 UTC m=+39.994218128" lastFinishedPulling="2025-04-30 03:39:56.054914687 +0000 UTC m=+48.159275746" observedRunningTime="2025-04-30 03:39:56.515038884 +0000 UTC m=+48.619399943" watchObservedRunningTime="2025-04-30 03:40:05.669809469 +0000 UTC m=+57.774170558" Apr 30 03:40:08.229451 containerd[1508]: time="2025-04-30T03:40:08.229027877Z" level=info msg="StopPodSandbox for \"c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d\"" Apr 30 03:40:08.421750 containerd[1508]: 2025-04-30 03:40:08.383 [WARNING][5070] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--916214001e-k8s-csi--node--driver--djr25-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"740e49cc-cf63-4a6e-96f3-c534b1ee3390", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 39, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-916214001e", ContainerID:"0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0", Pod:"csi-node-driver-djr25", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali72fda76899f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:40:08.421750 containerd[1508]: 2025-04-30 03:40:08.385 [INFO][5070] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" Apr 30 03:40:08.421750 containerd[1508]: 2025-04-30 03:40:08.385 [INFO][5070] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" iface="eth0" netns="" Apr 30 03:40:08.421750 containerd[1508]: 2025-04-30 03:40:08.385 [INFO][5070] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" Apr 30 03:40:08.421750 containerd[1508]: 2025-04-30 03:40:08.385 [INFO][5070] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" Apr 30 03:40:08.421750 containerd[1508]: 2025-04-30 03:40:08.406 [INFO][5080] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" HandleID="k8s-pod-network.c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" Workload="ci--4081--3--3--9--916214001e-k8s-csi--node--driver--djr25-eth0" Apr 30 03:40:08.421750 containerd[1508]: 2025-04-30 03:40:08.406 [INFO][5080] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:40:08.421750 containerd[1508]: 2025-04-30 03:40:08.406 [INFO][5080] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:40:08.421750 containerd[1508]: 2025-04-30 03:40:08.414 [WARNING][5080] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" HandleID="k8s-pod-network.c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" Workload="ci--4081--3--3--9--916214001e-k8s-csi--node--driver--djr25-eth0" Apr 30 03:40:08.421750 containerd[1508]: 2025-04-30 03:40:08.414 [INFO][5080] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" HandleID="k8s-pod-network.c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" Workload="ci--4081--3--3--9--916214001e-k8s-csi--node--driver--djr25-eth0" Apr 30 03:40:08.421750 containerd[1508]: 2025-04-30 03:40:08.416 [INFO][5080] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:40:08.421750 containerd[1508]: 2025-04-30 03:40:08.419 [INFO][5070] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" Apr 30 03:40:08.424237 containerd[1508]: time="2025-04-30T03:40:08.421795719Z" level=info msg="TearDown network for sandbox \"c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d\" successfully" Apr 30 03:40:08.424237 containerd[1508]: time="2025-04-30T03:40:08.421875144Z" level=info msg="StopPodSandbox for \"c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d\" returns successfully" Apr 30 03:40:08.438199 containerd[1508]: time="2025-04-30T03:40:08.438126326Z" level=info msg="RemovePodSandbox for \"c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d\"" Apr 30 03:40:08.438370 containerd[1508]: time="2025-04-30T03:40:08.438222373Z" level=info msg="Forcibly stopping sandbox \"c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d\"" Apr 30 03:40:08.511388 containerd[1508]: 2025-04-30 03:40:08.480 [WARNING][5099] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--916214001e-k8s-csi--node--driver--djr25-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"740e49cc-cf63-4a6e-96f3-c534b1ee3390", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 39, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-916214001e", ContainerID:"0cf74eba207bb7f67e03b115d80b395c2d432903b031df076ee6c37550e3a2b0", Pod:"csi-node-driver-djr25", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali72fda76899f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:40:08.511388 containerd[1508]: 2025-04-30 03:40:08.482 [INFO][5099] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" Apr 30 03:40:08.511388 containerd[1508]: 2025-04-30 03:40:08.482 [INFO][5099] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" iface="eth0" netns="" Apr 30 03:40:08.511388 containerd[1508]: 2025-04-30 03:40:08.482 [INFO][5099] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" Apr 30 03:40:08.511388 containerd[1508]: 2025-04-30 03:40:08.482 [INFO][5099] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" Apr 30 03:40:08.511388 containerd[1508]: 2025-04-30 03:40:08.500 [INFO][5106] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" HandleID="k8s-pod-network.c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" Workload="ci--4081--3--3--9--916214001e-k8s-csi--node--driver--djr25-eth0" Apr 30 03:40:08.511388 containerd[1508]: 2025-04-30 03:40:08.500 [INFO][5106] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:40:08.511388 containerd[1508]: 2025-04-30 03:40:08.500 [INFO][5106] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:40:08.511388 containerd[1508]: 2025-04-30 03:40:08.505 [WARNING][5106] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" HandleID="k8s-pod-network.c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" Workload="ci--4081--3--3--9--916214001e-k8s-csi--node--driver--djr25-eth0" Apr 30 03:40:08.511388 containerd[1508]: 2025-04-30 03:40:08.505 [INFO][5106] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" HandleID="k8s-pod-network.c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" Workload="ci--4081--3--3--9--916214001e-k8s-csi--node--driver--djr25-eth0" Apr 30 03:40:08.511388 containerd[1508]: 2025-04-30 03:40:08.507 [INFO][5106] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:40:08.511388 containerd[1508]: 2025-04-30 03:40:08.508 [INFO][5099] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d" Apr 30 03:40:08.511388 containerd[1508]: time="2025-04-30T03:40:08.511292648Z" level=info msg="TearDown network for sandbox \"c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d\" successfully" Apr 30 03:40:08.541927 containerd[1508]: time="2025-04-30T03:40:08.541859480Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:40:08.553662 containerd[1508]: time="2025-04-30T03:40:08.553593354Z" level=info msg="RemovePodSandbox \"c020003329ec2b55506c57167b2bf0cc820f405a744beaea797c54fc981a759d\" returns successfully" Apr 30 03:40:08.554336 containerd[1508]: time="2025-04-30T03:40:08.554304588Z" level=info msg="StopPodSandbox for \"c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0\"" Apr 30 03:40:08.647429 containerd[1508]: 2025-04-30 03:40:08.607 [WARNING][5124] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--pwlb4-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"007e272b-6dc6-4988-a54b-c1109f76b258", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 39, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-916214001e", ContainerID:"1306a518e445044c8a055216429c6bef8f0dbf95bac19e494b5efe8abd09aabb", Pod:"coredns-6f6b679f8f-pwlb4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0d39d3d1167", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:40:08.647429 containerd[1508]: 2025-04-30 03:40:08.607 [INFO][5124] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" Apr 30 03:40:08.647429 containerd[1508]: 2025-04-30 03:40:08.607 [INFO][5124] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" iface="eth0" netns="" Apr 30 03:40:08.647429 containerd[1508]: 2025-04-30 03:40:08.607 [INFO][5124] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" Apr 30 03:40:08.647429 containerd[1508]: 2025-04-30 03:40:08.607 [INFO][5124] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" Apr 30 03:40:08.647429 containerd[1508]: 2025-04-30 03:40:08.629 [INFO][5131] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" HandleID="k8s-pod-network.c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" Workload="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--pwlb4-eth0" Apr 30 03:40:08.647429 containerd[1508]: 2025-04-30 03:40:08.629 [INFO][5131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:40:08.647429 containerd[1508]: 2025-04-30 03:40:08.629 [INFO][5131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:40:08.647429 containerd[1508]: 2025-04-30 03:40:08.639 [WARNING][5131] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" HandleID="k8s-pod-network.c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" Workload="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--pwlb4-eth0" Apr 30 03:40:08.647429 containerd[1508]: 2025-04-30 03:40:08.639 [INFO][5131] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" HandleID="k8s-pod-network.c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" Workload="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--pwlb4-eth0" Apr 30 03:40:08.647429 containerd[1508]: 2025-04-30 03:40:08.642 [INFO][5131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:40:08.647429 containerd[1508]: 2025-04-30 03:40:08.645 [INFO][5124] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" Apr 30 03:40:08.648720 containerd[1508]: time="2025-04-30T03:40:08.647498668Z" level=info msg="TearDown network for sandbox \"c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0\" successfully" Apr 30 03:40:08.648720 containerd[1508]: time="2025-04-30T03:40:08.647524428Z" level=info msg="StopPodSandbox for \"c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0\" returns successfully" Apr 30 03:40:08.648720 containerd[1508]: time="2025-04-30T03:40:08.648044139Z" level=info msg="RemovePodSandbox for \"c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0\"" Apr 30 03:40:08.648720 containerd[1508]: time="2025-04-30T03:40:08.648068257Z" level=info msg="Forcibly stopping sandbox \"c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0\"" Apr 30 03:40:08.714384 containerd[1508]: 2025-04-30 03:40:08.681 [WARNING][5149] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--pwlb4-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"007e272b-6dc6-4988-a54b-c1109f76b258", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 39, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-916214001e", ContainerID:"1306a518e445044c8a055216429c6bef8f0dbf95bac19e494b5efe8abd09aabb", Pod:"coredns-6f6b679f8f-pwlb4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0d39d3d1167", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:40:08.714384 containerd[1508]: 2025-04-30 03:40:08.681 [INFO][5149] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" Apr 30 03:40:08.714384 containerd[1508]: 2025-04-30 03:40:08.682 [INFO][5149] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" iface="eth0" netns="" Apr 30 03:40:08.714384 containerd[1508]: 2025-04-30 03:40:08.682 [INFO][5149] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" Apr 30 03:40:08.714384 containerd[1508]: 2025-04-30 03:40:08.682 [INFO][5149] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" Apr 30 03:40:08.714384 containerd[1508]: 2025-04-30 03:40:08.703 [INFO][5156] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" HandleID="k8s-pod-network.c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" Workload="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--pwlb4-eth0" Apr 30 03:40:08.714384 containerd[1508]: 2025-04-30 03:40:08.703 [INFO][5156] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:40:08.714384 containerd[1508]: 2025-04-30 03:40:08.704 [INFO][5156] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:40:08.714384 containerd[1508]: 2025-04-30 03:40:08.709 [WARNING][5156] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" HandleID="k8s-pod-network.c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" Workload="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--pwlb4-eth0" Apr 30 03:40:08.714384 containerd[1508]: 2025-04-30 03:40:08.709 [INFO][5156] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" HandleID="k8s-pod-network.c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" Workload="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--pwlb4-eth0" Apr 30 03:40:08.714384 containerd[1508]: 2025-04-30 03:40:08.710 [INFO][5156] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:40:08.714384 containerd[1508]: 2025-04-30 03:40:08.712 [INFO][5149] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0" Apr 30 03:40:08.714944 containerd[1508]: time="2025-04-30T03:40:08.714417009Z" level=info msg="TearDown network for sandbox \"c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0\" successfully" Apr 30 03:40:08.718015 containerd[1508]: time="2025-04-30T03:40:08.717945221Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:40:08.718015 containerd[1508]: time="2025-04-30T03:40:08.718017192Z" level=info msg="RemovePodSandbox \"c5f16cc403c972fb136ce2a098644dbdcb9f27d76ee7dcad73ce7af45dc64ab0\" returns successfully" Apr 30 03:40:08.718525 containerd[1508]: time="2025-04-30T03:40:08.718500173Z" level=info msg="StopPodSandbox for \"ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb\"" Apr 30 03:40:08.806000 containerd[1508]: 2025-04-30 03:40:08.756 [WARNING][5174] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--kr6rd-eth0", GenerateName:"calico-apiserver-7665d48578-", Namespace:"calico-apiserver", SelfLink:"", UID:"919a621f-1bcc-46da-9865-aeb53a85cd70", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 39, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7665d48578", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-916214001e", ContainerID:"1ffe25b18c791008349b8aa355e76b30cad2bd671f0572fc15dbf5e8b2a22cae", Pod:"calico-apiserver-7665d48578-kr6rd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliedd2efb4b80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:40:08.806000 containerd[1508]: 2025-04-30 03:40:08.756 [INFO][5174] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" Apr 30 03:40:08.806000 containerd[1508]: 2025-04-30 03:40:08.756 [INFO][5174] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" iface="eth0" netns="" Apr 30 03:40:08.806000 containerd[1508]: 2025-04-30 03:40:08.756 [INFO][5174] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" Apr 30 03:40:08.806000 containerd[1508]: 2025-04-30 03:40:08.756 [INFO][5174] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" Apr 30 03:40:08.806000 containerd[1508]: 2025-04-30 03:40:08.780 [INFO][5181] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" HandleID="k8s-pod-network.ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" Workload="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--kr6rd-eth0" Apr 30 03:40:08.806000 containerd[1508]: 2025-04-30 03:40:08.780 [INFO][5181] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:40:08.806000 containerd[1508]: 2025-04-30 03:40:08.780 [INFO][5181] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:40:08.806000 containerd[1508]: 2025-04-30 03:40:08.798 [WARNING][5181] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" HandleID="k8s-pod-network.ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" Workload="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--kr6rd-eth0" Apr 30 03:40:08.806000 containerd[1508]: 2025-04-30 03:40:08.799 [INFO][5181] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" HandleID="k8s-pod-network.ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" Workload="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--kr6rd-eth0" Apr 30 03:40:08.806000 containerd[1508]: 2025-04-30 03:40:08.801 [INFO][5181] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:40:08.806000 containerd[1508]: 2025-04-30 03:40:08.803 [INFO][5174] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" Apr 30 03:40:08.809064 containerd[1508]: time="2025-04-30T03:40:08.806579990Z" level=info msg="TearDown network for sandbox \"ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb\" successfully" Apr 30 03:40:08.809064 containerd[1508]: time="2025-04-30T03:40:08.806609467Z" level=info msg="StopPodSandbox for \"ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb\" returns successfully" Apr 30 03:40:08.809064 containerd[1508]: time="2025-04-30T03:40:08.807163586Z" level=info msg="RemovePodSandbox for \"ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb\"" Apr 30 03:40:08.809064 containerd[1508]: time="2025-04-30T03:40:08.807192793Z" level=info msg="Forcibly stopping sandbox \"ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb\"" Apr 30 03:40:08.873014 containerd[1508]: 2025-04-30 03:40:08.840 [WARNING][5199] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--kr6rd-eth0", GenerateName:"calico-apiserver-7665d48578-", Namespace:"calico-apiserver", SelfLink:"", UID:"919a621f-1bcc-46da-9865-aeb53a85cd70", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 39, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7665d48578", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-916214001e", ContainerID:"1ffe25b18c791008349b8aa355e76b30cad2bd671f0572fc15dbf5e8b2a22cae", Pod:"calico-apiserver-7665d48578-kr6rd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliedd2efb4b80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:40:08.873014 containerd[1508]: 2025-04-30 03:40:08.841 [INFO][5199] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" Apr 30 03:40:08.873014 containerd[1508]: 2025-04-30 03:40:08.841 [INFO][5199] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" iface="eth0" netns="" Apr 30 03:40:08.873014 containerd[1508]: 2025-04-30 03:40:08.841 [INFO][5199] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" Apr 30 03:40:08.873014 containerd[1508]: 2025-04-30 03:40:08.841 [INFO][5199] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" Apr 30 03:40:08.873014 containerd[1508]: 2025-04-30 03:40:08.859 [INFO][5207] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" HandleID="k8s-pod-network.ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" Workload="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--kr6rd-eth0" Apr 30 03:40:08.873014 containerd[1508]: 2025-04-30 03:40:08.859 [INFO][5207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:40:08.873014 containerd[1508]: 2025-04-30 03:40:08.859 [INFO][5207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:40:08.873014 containerd[1508]: 2025-04-30 03:40:08.867 [WARNING][5207] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" HandleID="k8s-pod-network.ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" Workload="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--kr6rd-eth0" Apr 30 03:40:08.873014 containerd[1508]: 2025-04-30 03:40:08.867 [INFO][5207] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" HandleID="k8s-pod-network.ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" Workload="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--kr6rd-eth0" Apr 30 03:40:08.873014 containerd[1508]: 2025-04-30 03:40:08.869 [INFO][5207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:40:08.873014 containerd[1508]: 2025-04-30 03:40:08.871 [INFO][5199] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb" Apr 30 03:40:08.873447 containerd[1508]: time="2025-04-30T03:40:08.873098002Z" level=info msg="TearDown network for sandbox \"ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb\" successfully" Apr 30 03:40:08.877256 containerd[1508]: time="2025-04-30T03:40:08.877210192Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:40:08.877346 containerd[1508]: time="2025-04-30T03:40:08.877266421Z" level=info msg="RemovePodSandbox \"ce404cc13538209571022a21e8d3791e165d49967a330eb19bf3339097b293eb\" returns successfully" Apr 30 03:40:08.877675 containerd[1508]: time="2025-04-30T03:40:08.877651903Z" level=info msg="StopPodSandbox for \"9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb\"" Apr 30 03:40:08.953131 containerd[1508]: 2025-04-30 03:40:08.917 [WARNING][5225] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--9gr6g-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"76c02922-bd45-4555-87c6-4dd092116a95", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 39, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-916214001e", ContainerID:"011dc79c5b8eae9d1336571bab261d11b995ef7d8331eb0c9ef9ed7e6f3f1103", Pod:"coredns-6f6b679f8f-9gr6g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f3b6903513", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:40:08.953131 containerd[1508]: 2025-04-30 03:40:08.917 [INFO][5225] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" Apr 30 03:40:08.953131 containerd[1508]: 2025-04-30 03:40:08.917 [INFO][5225] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" iface="eth0" netns="" Apr 30 03:40:08.953131 containerd[1508]: 2025-04-30 03:40:08.917 [INFO][5225] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" Apr 30 03:40:08.953131 containerd[1508]: 2025-04-30 03:40:08.917 [INFO][5225] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" Apr 30 03:40:08.953131 containerd[1508]: 2025-04-30 03:40:08.940 [INFO][5232] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" HandleID="k8s-pod-network.9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" Workload="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--9gr6g-eth0" Apr 30 03:40:08.953131 containerd[1508]: 2025-04-30 03:40:08.940 [INFO][5232] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:40:08.953131 containerd[1508]: 2025-04-30 03:40:08.940 [INFO][5232] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:40:08.953131 containerd[1508]: 2025-04-30 03:40:08.947 [WARNING][5232] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" HandleID="k8s-pod-network.9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" Workload="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--9gr6g-eth0" Apr 30 03:40:08.953131 containerd[1508]: 2025-04-30 03:40:08.947 [INFO][5232] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" HandleID="k8s-pod-network.9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" Workload="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--9gr6g-eth0" Apr 30 03:40:08.953131 containerd[1508]: 2025-04-30 03:40:08.948 [INFO][5232] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:40:08.953131 containerd[1508]: 2025-04-30 03:40:08.951 [INFO][5225] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" Apr 30 03:40:08.954961 containerd[1508]: time="2025-04-30T03:40:08.953152399Z" level=info msg="TearDown network for sandbox \"9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb\" successfully" Apr 30 03:40:08.954961 containerd[1508]: time="2025-04-30T03:40:08.953210023Z" level=info msg="StopPodSandbox for \"9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb\" returns successfully" Apr 30 03:40:08.954961 containerd[1508]: time="2025-04-30T03:40:08.954059226Z" level=info msg="RemovePodSandbox for \"9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb\"" Apr 30 03:40:08.954961 containerd[1508]: time="2025-04-30T03:40:08.954093854Z" level=info msg="Forcibly stopping sandbox \"9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb\"" Apr 30 03:40:09.025700 containerd[1508]: 2025-04-30 03:40:08.994 [WARNING][5250] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--9gr6g-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"76c02922-bd45-4555-87c6-4dd092116a95", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 39, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-916214001e", ContainerID:"011dc79c5b8eae9d1336571bab261d11b995ef7d8331eb0c9ef9ed7e6f3f1103", Pod:"coredns-6f6b679f8f-9gr6g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f3b6903513", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:40:09.025700 containerd[1508]: 2025-04-30 03:40:08.995 [INFO][5250] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" Apr 30 03:40:09.025700 containerd[1508]: 2025-04-30 03:40:08.995 [INFO][5250] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" iface="eth0" netns="" Apr 30 03:40:09.025700 containerd[1508]: 2025-04-30 03:40:08.995 [INFO][5250] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" Apr 30 03:40:09.025700 containerd[1508]: 2025-04-30 03:40:08.995 [INFO][5250] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" Apr 30 03:40:09.025700 containerd[1508]: 2025-04-30 03:40:09.013 [INFO][5257] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" HandleID="k8s-pod-network.9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" Workload="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--9gr6g-eth0" Apr 30 03:40:09.025700 containerd[1508]: 2025-04-30 03:40:09.013 [INFO][5257] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:40:09.025700 containerd[1508]: 2025-04-30 03:40:09.013 [INFO][5257] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:40:09.025700 containerd[1508]: 2025-04-30 03:40:09.021 [WARNING][5257] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" HandleID="k8s-pod-network.9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" Workload="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--9gr6g-eth0" Apr 30 03:40:09.025700 containerd[1508]: 2025-04-30 03:40:09.021 [INFO][5257] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" HandleID="k8s-pod-network.9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" Workload="ci--4081--3--3--9--916214001e-k8s-coredns--6f6b679f8f--9gr6g-eth0" Apr 30 03:40:09.025700 containerd[1508]: 2025-04-30 03:40:09.022 [INFO][5257] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:40:09.025700 containerd[1508]: 2025-04-30 03:40:09.024 [INFO][5250] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb" Apr 30 03:40:09.026890 containerd[1508]: time="2025-04-30T03:40:09.025743143Z" level=info msg="TearDown network for sandbox \"9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb\" successfully" Apr 30 03:40:09.047706 containerd[1508]: time="2025-04-30T03:40:09.047632697Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:40:09.047706 containerd[1508]: time="2025-04-30T03:40:09.047712543Z" level=info msg="RemovePodSandbox \"9ea74ed8269d706fb8a7f7ad70615e30b1d64dbec3d128c2a27c1727c172a5fb\" returns successfully" Apr 30 03:40:09.048288 containerd[1508]: time="2025-04-30T03:40:09.048259759Z" level=info msg="StopPodSandbox for \"0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f\"" Apr 30 03:40:09.121373 containerd[1508]: 2025-04-30 03:40:09.088 [WARNING][5275] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--fgcbp-eth0", GenerateName:"calico-apiserver-7665d48578-", Namespace:"calico-apiserver", SelfLink:"", UID:"da5d34c4-8951-46b2-a56c-5044ac1ce046", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 39, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7665d48578", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-916214001e", ContainerID:"b3437715e0b3fd3750f058f329591a3c46de53509ad37d358e4f5186b7057594", Pod:"calico-apiserver-7665d48578-fgcbp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9f72881c5f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:40:09.121373 containerd[1508]: 2025-04-30 03:40:09.088 [INFO][5275] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" Apr 30 03:40:09.121373 containerd[1508]: 2025-04-30 03:40:09.088 [INFO][5275] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" iface="eth0" netns="" Apr 30 03:40:09.121373 containerd[1508]: 2025-04-30 03:40:09.088 [INFO][5275] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" Apr 30 03:40:09.121373 containerd[1508]: 2025-04-30 03:40:09.088 [INFO][5275] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" Apr 30 03:40:09.121373 containerd[1508]: 2025-04-30 03:40:09.109 [INFO][5283] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" HandleID="k8s-pod-network.0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" Workload="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--fgcbp-eth0" Apr 30 03:40:09.121373 containerd[1508]: 2025-04-30 03:40:09.109 [INFO][5283] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:40:09.121373 containerd[1508]: 2025-04-30 03:40:09.109 [INFO][5283] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:40:09.121373 containerd[1508]: 2025-04-30 03:40:09.114 [WARNING][5283] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" HandleID="k8s-pod-network.0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" Workload="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--fgcbp-eth0" Apr 30 03:40:09.121373 containerd[1508]: 2025-04-30 03:40:09.115 [INFO][5283] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" HandleID="k8s-pod-network.0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" Workload="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--fgcbp-eth0" Apr 30 03:40:09.121373 containerd[1508]: 2025-04-30 03:40:09.117 [INFO][5283] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:40:09.121373 containerd[1508]: 2025-04-30 03:40:09.119 [INFO][5275] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" Apr 30 03:40:09.122688 containerd[1508]: time="2025-04-30T03:40:09.121399998Z" level=info msg="TearDown network for sandbox \"0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f\" successfully" Apr 30 03:40:09.122688 containerd[1508]: time="2025-04-30T03:40:09.121449413Z" level=info msg="StopPodSandbox for \"0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f\" returns successfully" Apr 30 03:40:09.122688 containerd[1508]: time="2025-04-30T03:40:09.122188763Z" level=info msg="RemovePodSandbox for \"0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f\"" Apr 30 03:40:09.122688 containerd[1508]: time="2025-04-30T03:40:09.122226858Z" level=info msg="Forcibly stopping sandbox \"0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f\"" Apr 30 03:40:09.197017 containerd[1508]: 2025-04-30 03:40:09.159 [WARNING][5302] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--fgcbp-eth0", GenerateName:"calico-apiserver-7665d48578-", Namespace:"calico-apiserver", SelfLink:"", UID:"da5d34c4-8951-46b2-a56c-5044ac1ce046", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 39, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7665d48578", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-916214001e", ContainerID:"b3437715e0b3fd3750f058f329591a3c46de53509ad37d358e4f5186b7057594", Pod:"calico-apiserver-7665d48578-fgcbp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9f72881c5f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:40:09.197017 containerd[1508]: 2025-04-30 03:40:09.159 [INFO][5302] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" Apr 30 03:40:09.197017 containerd[1508]: 2025-04-30 03:40:09.159 [INFO][5302] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" iface="eth0" netns="" Apr 30 03:40:09.197017 containerd[1508]: 2025-04-30 03:40:09.159 [INFO][5302] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" Apr 30 03:40:09.197017 containerd[1508]: 2025-04-30 03:40:09.159 [INFO][5302] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" Apr 30 03:40:09.197017 containerd[1508]: 2025-04-30 03:40:09.186 [INFO][5309] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" HandleID="k8s-pod-network.0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" Workload="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--fgcbp-eth0" Apr 30 03:40:09.197017 containerd[1508]: 2025-04-30 03:40:09.187 [INFO][5309] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:40:09.197017 containerd[1508]: 2025-04-30 03:40:09.187 [INFO][5309] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:40:09.197017 containerd[1508]: 2025-04-30 03:40:09.192 [WARNING][5309] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" HandleID="k8s-pod-network.0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" Workload="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--fgcbp-eth0" Apr 30 03:40:09.197017 containerd[1508]: 2025-04-30 03:40:09.192 [INFO][5309] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" HandleID="k8s-pod-network.0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" Workload="ci--4081--3--3--9--916214001e-k8s-calico--apiserver--7665d48578--fgcbp-eth0" Apr 30 03:40:09.197017 containerd[1508]: 2025-04-30 03:40:09.193 [INFO][5309] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:40:09.197017 containerd[1508]: 2025-04-30 03:40:09.195 [INFO][5302] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f" Apr 30 03:40:09.198155 containerd[1508]: time="2025-04-30T03:40:09.197055165Z" level=info msg="TearDown network for sandbox \"0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f\" successfully" Apr 30 03:40:09.200610 containerd[1508]: time="2025-04-30T03:40:09.200577998Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:40:09.200963 containerd[1508]: time="2025-04-30T03:40:09.200641391Z" level=info msg="RemovePodSandbox \"0552ce0f96e4e32a9ab0453f100791a03a43dc972ae059c19ab71c99d1b6cd4f\" returns successfully" Apr 30 03:40:09.201352 containerd[1508]: time="2025-04-30T03:40:09.201111557Z" level=info msg="StopPodSandbox for \"b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440\"" Apr 30 03:40:09.264600 containerd[1508]: 2025-04-30 03:40:09.234 [WARNING][5327] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--916214001e-k8s-calico--kube--controllers--5c4fd78ccc--2pshn-eth0", GenerateName:"calico-kube-controllers-5c4fd78ccc-", Namespace:"calico-system", SelfLink:"", UID:"5cbd4a0f-2d60-4cb4-95f0-be6674a91f19", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 39, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c4fd78ccc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-916214001e", ContainerID:"f6fd1a66cf272971e919ee6a35fd671ef83bd8bda4aeda22466b4a16c5576322", Pod:"calico-kube-controllers-5c4fd78ccc-2pshn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f6562eba1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:40:09.264600 containerd[1508]: 2025-04-30 03:40:09.235 [INFO][5327] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" Apr 30 03:40:09.264600 containerd[1508]: 2025-04-30 03:40:09.235 [INFO][5327] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" iface="eth0" netns="" Apr 30 03:40:09.264600 containerd[1508]: 2025-04-30 03:40:09.235 [INFO][5327] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" Apr 30 03:40:09.264600 containerd[1508]: 2025-04-30 03:40:09.235 [INFO][5327] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" Apr 30 03:40:09.264600 containerd[1508]: 2025-04-30 03:40:09.253 [INFO][5334] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" HandleID="k8s-pod-network.b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" Workload="ci--4081--3--3--9--916214001e-k8s-calico--kube--controllers--5c4fd78ccc--2pshn-eth0" Apr 30 03:40:09.264600 containerd[1508]: 2025-04-30 03:40:09.253 [INFO][5334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:40:09.264600 containerd[1508]: 2025-04-30 03:40:09.253 [INFO][5334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:40:09.264600 containerd[1508]: 2025-04-30 03:40:09.259 [WARNING][5334] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" HandleID="k8s-pod-network.b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" Workload="ci--4081--3--3--9--916214001e-k8s-calico--kube--controllers--5c4fd78ccc--2pshn-eth0" Apr 30 03:40:09.264600 containerd[1508]: 2025-04-30 03:40:09.259 [INFO][5334] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" HandleID="k8s-pod-network.b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" Workload="ci--4081--3--3--9--916214001e-k8s-calico--kube--controllers--5c4fd78ccc--2pshn-eth0" Apr 30 03:40:09.264600 containerd[1508]: 2025-04-30 03:40:09.260 [INFO][5334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:40:09.264600 containerd[1508]: 2025-04-30 03:40:09.262 [INFO][5327] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" Apr 30 03:40:09.265962 containerd[1508]: time="2025-04-30T03:40:09.264648186Z" level=info msg="TearDown network for sandbox \"b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440\" successfully" Apr 30 03:40:09.265962 containerd[1508]: time="2025-04-30T03:40:09.264673315Z" level=info msg="StopPodSandbox for \"b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440\" returns successfully" Apr 30 03:40:09.265962 containerd[1508]: time="2025-04-30T03:40:09.265497060Z" level=info msg="RemovePodSandbox for \"b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440\"" Apr 30 03:40:09.265962 containerd[1508]: time="2025-04-30T03:40:09.265526978Z" level=info msg="Forcibly stopping sandbox \"b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440\"" Apr 30 03:40:09.336608 containerd[1508]: 2025-04-30 03:40:09.302 [WARNING][5353] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--9--916214001e-k8s-calico--kube--controllers--5c4fd78ccc--2pshn-eth0", GenerateName:"calico-kube-controllers-5c4fd78ccc-", Namespace:"calico-system", SelfLink:"", UID:"5cbd4a0f-2d60-4cb4-95f0-be6674a91f19", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 39, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c4fd78ccc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-9-916214001e", ContainerID:"f6fd1a66cf272971e919ee6a35fd671ef83bd8bda4aeda22466b4a16c5576322", Pod:"calico-kube-controllers-5c4fd78ccc-2pshn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f6562eba1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:40:09.336608 containerd[1508]: 2025-04-30 03:40:09.303 [INFO][5353] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" Apr 30 03:40:09.336608 containerd[1508]: 2025-04-30 03:40:09.303 [INFO][5353] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" iface="eth0" netns="" Apr 30 03:40:09.336608 containerd[1508]: 2025-04-30 03:40:09.303 [INFO][5353] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" Apr 30 03:40:09.336608 containerd[1508]: 2025-04-30 03:40:09.303 [INFO][5353] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" Apr 30 03:40:09.336608 containerd[1508]: 2025-04-30 03:40:09.323 [INFO][5361] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" HandleID="k8s-pod-network.b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" Workload="ci--4081--3--3--9--916214001e-k8s-calico--kube--controllers--5c4fd78ccc--2pshn-eth0" Apr 30 03:40:09.336608 containerd[1508]: 2025-04-30 03:40:09.323 [INFO][5361] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:40:09.336608 containerd[1508]: 2025-04-30 03:40:09.323 [INFO][5361] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:40:09.336608 containerd[1508]: 2025-04-30 03:40:09.330 [WARNING][5361] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" HandleID="k8s-pod-network.b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" Workload="ci--4081--3--3--9--916214001e-k8s-calico--kube--controllers--5c4fd78ccc--2pshn-eth0" Apr 30 03:40:09.336608 containerd[1508]: 2025-04-30 03:40:09.330 [INFO][5361] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" HandleID="k8s-pod-network.b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" Workload="ci--4081--3--3--9--916214001e-k8s-calico--kube--controllers--5c4fd78ccc--2pshn-eth0" Apr 30 03:40:09.336608 containerd[1508]: 2025-04-30 03:40:09.332 [INFO][5361] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:40:09.336608 containerd[1508]: 2025-04-30 03:40:09.334 [INFO][5353] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440" Apr 30 03:40:09.337148 containerd[1508]: time="2025-04-30T03:40:09.336677139Z" level=info msg="TearDown network for sandbox \"b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440\" successfully" Apr 30 03:40:09.341553 containerd[1508]: time="2025-04-30T03:40:09.341500876Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:40:09.341632 containerd[1508]: time="2025-04-30T03:40:09.341572426Z" level=info msg="RemovePodSandbox \"b2493c92ffca5883693a3bb878b43c36b72ea757c14b7050c609b5c44941f440\" returns successfully" Apr 30 03:40:09.871251 kubelet[2724]: I0430 03:40:09.870574 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:40:49.201509 systemd[1]: run-containerd-runc-k8s.io-5965386147ed504c78fe191b242bc5380b8b03d163aaa8d16eba7da0270a6f93-runc.MJRQLS.mount: Deactivated successfully. Apr 30 03:41:19.199538 systemd[1]: run-containerd-runc-k8s.io-5965386147ed504c78fe191b242bc5380b8b03d163aaa8d16eba7da0270a6f93-runc.E2OxUd.mount: Deactivated successfully. Apr 30 03:41:34.506095 systemd[1]: run-containerd-runc-k8s.io-104ea7c262acc49ea14c22c9f38ae6e8ab88011685ea8b3ef321d601b27517f5-runc.tXTfei.mount: Deactivated successfully. Apr 30 03:42:17.365973 update_engine[1485]: I20250430 03:42:17.365878 1485 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 30 03:42:17.365973 update_engine[1485]: I20250430 03:42:17.365957 1485 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 30 03:42:17.370478 update_engine[1485]: I20250430 03:42:17.369378 1485 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 30 03:42:17.370478 update_engine[1485]: I20250430 03:42:17.370178 1485 omaha_request_params.cc:62] Current group set to lts Apr 30 03:42:17.371111 update_engine[1485]: I20250430 03:42:17.370921 1485 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 30 03:42:17.371111 update_engine[1485]: I20250430 03:42:17.370944 1485 update_attempter.cc:643] Scheduling an action processor start. 
Apr 30 03:42:17.371111 update_engine[1485]: I20250430 03:42:17.370966 1485 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 30 03:42:17.371111 update_engine[1485]: I20250430 03:42:17.371011 1485 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 30 03:42:17.371111 update_engine[1485]: I20250430 03:42:17.371084 1485 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 30 03:42:17.371111 update_engine[1485]: I20250430 03:42:17.371096 1485 omaha_request_action.cc:272] Request: Apr 30 03:42:17.371111 update_engine[1485]: [Omaha request XML body not captured in this log] Apr 30 03:42:17.371111 update_engine[1485]: I20250430 03:42:17.371105 1485 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 03:42:17.389795 locksmithd[1512]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 30 03:42:17.391998 update_engine[1485]: I20250430 03:42:17.391964 1485 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 03:42:17.392265 update_engine[1485]: I20250430 03:42:17.392235 1485 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 03:42:17.393438 update_engine[1485]: E20250430 03:42:17.393332 1485 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 03:42:17.393438 update_engine[1485]: I20250430 03:42:17.393409 1485 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 30 03:42:27.238003 update_engine[1485]: I20250430 03:42:27.237900 1485 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 03:42:27.238631 update_engine[1485]: I20250430 03:42:27.238242 1485 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 03:42:27.238631 update_engine[1485]: I20250430 03:42:27.238597 1485 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 03:42:27.239429 update_engine[1485]: E20250430 03:42:27.239377 1485 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 03:42:27.239511 update_engine[1485]: I20250430 03:42:27.239454 1485 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 30 03:42:37.236963 update_engine[1485]: I20250430 03:42:37.236857 1485 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 03:42:37.238270 update_engine[1485]: I20250430 03:42:37.237224 1485 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 03:42:37.238270 update_engine[1485]: I20250430 03:42:37.237525 1485 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
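The update_engine burst above is an update check against a server whose URL is the literal string "disabled" (the conventional way updates are switched off on Flatcar, via SERVER=disabled in update.conf), so every attempt fails at DNS resolution and the fetcher retries on a roughly 10-second cadence. A rough Go sketch of that fetch-and-retry shape, using the URL string and retry count from the log; this is illustrative, not update_engine's actual C++ implementation.

```go
// Hedged sketch: the shape of the fetch/retry loop seen above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

// checkForUpdate mimics the logged behaviour: each attempt fails fast with a
// DNS error because the configured host is the literal string "disabled",
// and the fetcher retries a few times before giving up.
func checkForUpdate(serverURL string, maxRetries int) error {
	client := &http.Client{Timeout: 10 * time.Second}
	var lastErr error
	for attempt := 1; attempt <= maxRetries; attempt++ {
		resp, err := client.Post(serverURL, "text/xml", nil)
		if err == nil {
			resp.Body.Close()
			return nil // got an HTTP response; real code would parse the Omaha XML
		}
		lastErr = err
		fmt.Printf("No HTTP response, retry %d\n", attempt)
		time.Sleep(10 * time.Second) // matches the ~10s cadence in the timestamps
	}
	return fmt.Errorf("omaha request failed after %d retries: %w", maxRetries, lastErr)
}

func main() {
	// "disabled" is not a resolvable host, so this fails by design; the path
	// component is invented for the example.
	if err := checkForUpdate("http://disabled/v1/update/", 3); err != nil {
		fmt.Println(err)
	}
}
```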
Apr 30 03:42:37.238534 update_engine[1485]: E20250430 03:42:37.238477 1485 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 03:42:37.238584 update_engine[1485]: I20250430 03:42:37.238561 1485 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 30 03:42:47.237246 update_engine[1485]: I20250430 03:42:47.237115 1485 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 03:42:47.237647 update_engine[1485]: I20250430 03:42:47.237327 1485 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 03:42:47.237647 update_engine[1485]: I20250430 03:42:47.237514 1485 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 03:42:47.238254 update_engine[1485]: E20250430 03:42:47.238220 1485 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 03:42:47.238312 update_engine[1485]: I20250430 03:42:47.238258 1485 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 30 03:42:47.238312 update_engine[1485]: I20250430 03:42:47.238265 1485 omaha_request_action.cc:617] Omaha request response: Apr 30 03:42:47.239623 update_engine[1485]: E20250430 03:42:47.238328 1485 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 30 03:42:47.241650 update_engine[1485]: I20250430 03:42:47.241600 1485 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 30 03:42:47.241650 update_engine[1485]: I20250430 03:42:47.241631 1485 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 03:42:47.241650 update_engine[1485]: I20250430 03:42:47.241641 1485 update_attempter.cc:306] Processing Done. Apr 30 03:42:47.241760 update_engine[1485]: E20250430 03:42:47.241662 1485 update_attempter.cc:619] Update failed. Apr 30 03:42:47.241760 update_engine[1485]: I20250430 03:42:47.241670 1485 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 30 03:42:47.241760 update_engine[1485]: I20250430 03:42:47.241679 1485 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 30 03:42:47.241760 update_engine[1485]: I20250430 03:42:47.241688 1485 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Apr 30 03:42:47.241894 update_engine[1485]: I20250430 03:42:47.241791 1485 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 30 03:42:47.241894 update_engine[1485]: I20250430 03:42:47.241850 1485 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 30 03:42:47.241894 update_engine[1485]: I20250430 03:42:47.241860 1485 omaha_request_action.cc:272] Request: Apr 30 03:42:47.241894 update_engine[1485]: [Omaha error-event XML body not captured in this log] Apr 30 03:42:47.241894 update_engine[1485]: I20250430 03:42:47.241869 1485 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 03:42:47.242207 update_engine[1485]: I20250430 03:42:47.242085 1485 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 03:42:47.242616 update_engine[1485]: I20250430 03:42:47.242542 1485 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
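After the third retry the transfer is abandoned, the OmahaRequestAction fails, and update_attempter collapses the transport failure (error code 2000) into kActionCodeOmahaErrorInHTTPResponse (37) before posting a best-effort error event to the same unreachable server. A tiny hedged sketch of that error funnel, with the constants taken from the log lines above rather than from update_engine's sources:

```go
// Hedged sketch of the failure funnel seen above; constants mirror the
// logged values (2000, 37) but this is illustrative Go, not update_engine's C++.
package main

import "fmt"

const kActionCodeOmahaErrorInHTTPResponse = 37

// classify mirrors "Converting error code 2000 to
// kActionCodeOmahaErrorInHTTPResponse": HTTP-layer failures collapse into a
// single Omaha error code.
func classify(transferErr int) int {
	if transferErr >= 2000 {
		return kActionCodeOmahaErrorInHTTPResponse
	}
	return transferErr
}

func main() {
	code := classify(2000)
	fmt.Printf("Updating payload state for error code: %d\n", code)
	// The error event is posted to the same "disabled" server, so it also
	// fails; the attempter records "Error event sent." anyway and lets the
	// scheduler pick the next periodic check (43m57s in the log).
	fmt.Println("Error event sent; next update check scheduled.")
}
```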
Apr 30 03:42:47.242675 locksmithd[1512]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 30 03:42:47.243509 update_engine[1485]: E20250430 03:42:47.243463 1485 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 03:42:47.243559 update_engine[1485]: I20250430 03:42:47.243534 1485 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 30 03:42:47.243559 update_engine[1485]: I20250430 03:42:47.243546 1485 omaha_request_action.cc:617] Omaha request response: Apr 30 03:42:47.243603 update_engine[1485]: I20250430 03:42:47.243556 1485 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 03:42:47.243603 update_engine[1485]: I20250430 03:42:47.243566 1485 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 03:42:47.243603 update_engine[1485]: I20250430 03:42:47.243573 1485 update_attempter.cc:306] Processing Done. Apr 30 03:42:47.243603 update_engine[1485]: I20250430 03:42:47.243582 1485 update_attempter.cc:310] Error event sent. Apr 30 03:42:47.243705 update_engine[1485]: I20250430 03:42:47.243595 1485 update_check_scheduler.cc:74] Next update check in 43m57s Apr 30 03:42:47.243977 locksmithd[1512]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 30 03:43:04.506004 systemd[1]: run-containerd-runc-k8s.io-104ea7c262acc49ea14c22c9f38ae6e8ab88011685ea8b3ef321d601b27517f5-runc.MwRjv4.mount: Deactivated successfully. Apr 30 03:43:48.110351 systemd[1]: Started sshd@7-37.27.214.59:22-139.178.68.195:54088.service - OpenSSH per-connection server daemon (139.178.68.195:54088). Apr 30 03:43:49.110887 sshd[5814]: Accepted publickey for core from 139.178.68.195 port 54088 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:43:49.113487 sshd[5814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:43:49.120488 systemd-logind[1480]: New session 8 of user core. Apr 30 03:43:49.125087 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 03:43:50.567077 sshd[5814]: pam_unix(sshd:session): session closed for user core Apr 30 03:43:50.572952 systemd[1]: sshd@7-37.27.214.59:22-139.178.68.195:54088.service: Deactivated successfully. Apr 30 03:43:50.575535 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 03:43:50.578212 systemd-logind[1480]: Session 8 logged out. Waiting for processes to exit. Apr 30 03:43:50.580522 systemd-logind[1480]: Removed session 8. Apr 30 03:43:55.741837 systemd[1]: Started sshd@8-37.27.214.59:22-139.178.68.195:49724.service - OpenSSH per-connection server daemon (139.178.68.195:49724). Apr 30 03:43:56.762345 sshd[5850]: Accepted publickey for core from 139.178.68.195 port 49724 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:43:56.765537 sshd[5850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:43:56.772252 systemd-logind[1480]: New session 9 of user core. Apr 30 03:43:56.777050 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 03:43:57.590020 sshd[5850]: pam_unix(sshd:session): session closed for user core Apr 30 03:43:57.595987 systemd[1]: sshd@8-37.27.214.59:22-139.178.68.195:49724.service: Deactivated successfully. Apr 30 03:43:57.599726 systemd[1]: session-9.scope: Deactivated successfully. 
Apr 30 03:43:57.601307 systemd-logind[1480]: Session 9 logged out. Waiting for processes to exit. Apr 30 03:43:57.603503 systemd-logind[1480]: Removed session 9. Apr 30 03:43:57.761743 systemd[1]: Started sshd@9-37.27.214.59:22-139.178.68.195:49728.service - OpenSSH per-connection server daemon (139.178.68.195:49728). Apr 30 03:43:58.754038 sshd[5865]: Accepted publickey for core from 139.178.68.195 port 49728 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:43:58.756606 sshd[5865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:43:58.765907 systemd-logind[1480]: New session 10 of user core. Apr 30 03:43:58.772126 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 03:43:59.623681 sshd[5865]: pam_unix(sshd:session): session closed for user core Apr 30 03:43:59.628462 systemd-logind[1480]: Session 10 logged out. Waiting for processes to exit. Apr 30 03:43:59.629380 systemd[1]: sshd@9-37.27.214.59:22-139.178.68.195:49728.service: Deactivated successfully. Apr 30 03:43:59.632442 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 03:43:59.634445 systemd-logind[1480]: Removed session 10. Apr 30 03:43:59.797457 systemd[1]: Started sshd@10-37.27.214.59:22-139.178.68.195:49744.service - OpenSSH per-connection server daemon (139.178.68.195:49744). Apr 30 03:44:00.803883 sshd[5876]: Accepted publickey for core from 139.178.68.195 port 49744 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:44:00.805189 sshd[5876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:44:00.811008 systemd-logind[1480]: New session 11 of user core. Apr 30 03:44:00.816077 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 03:44:01.577951 sshd[5876]: pam_unix(sshd:session): session closed for user core Apr 30 03:44:01.585153 systemd[1]: sshd@10-37.27.214.59:22-139.178.68.195:49744.service: Deactivated successfully. Apr 30 03:44:01.593929 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 03:44:01.598538 systemd-logind[1480]: Session 11 logged out. Waiting for processes to exit. Apr 30 03:44:01.600776 systemd-logind[1480]: Removed session 11. Apr 30 03:44:04.560025 systemd[1]: run-containerd-runc-k8s.io-104ea7c262acc49ea14c22c9f38ae6e8ab88011685ea8b3ef321d601b27517f5-runc.eAFNK1.mount: Deactivated successfully. Apr 30 03:44:06.749702 systemd[1]: Started sshd@11-37.27.214.59:22-139.178.68.195:44626.service - OpenSSH per-connection server daemon (139.178.68.195:44626). Apr 30 03:44:07.750275 sshd[5914]: Accepted publickey for core from 139.178.68.195 port 44626 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:44:07.751430 sshd[5914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:44:07.756375 systemd-logind[1480]: New session 12 of user core. Apr 30 03:44:07.761056 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 03:44:08.512698 sshd[5914]: pam_unix(sshd:session): session closed for user core Apr 30 03:44:08.517805 systemd[1]: sshd@11-37.27.214.59:22-139.178.68.195:44626.service: Deactivated successfully. Apr 30 03:44:08.521198 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 03:44:08.522682 systemd-logind[1480]: Session 12 logged out. Waiting for processes to exit. Apr 30 03:44:08.524339 systemd-logind[1480]: Removed session 12. 
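The sshd@<n>-<local>:22-<peer>:<port>.service names above are per-connection service instances, which is what the "OpenSSH per-connection server daemon" description means: systemd accepts the TCP connection itself and spawns one short-lived unit per client, so every login gets its own service plus a session-N.scope from logind. The same accept-per-connection model expressed in plain Go, as an illustration (the port is arbitrary):

```go
// Hedged sketch of the accept-per-connection model behind the
// sshd@...service instances above; not OpenSSH or systemd code.
package main

import (
	"fmt"
	"net"
)

func handle(conn net.Conn) {
	defer conn.Close()
	// One "session" per accepted connection, analogous to one
	// sshd@<n>-<local>:22-<peer>:<port>.service instance per client.
	fmt.Printf("session for %s opened\n", conn.RemoteAddr())
}

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:2222") // example port, not from the log
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	for {
		conn, err := ln.Accept()
		if err != nil {
			return
		}
		go handle(conn) // spawn a handler per connection
	}
}
```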
Apr 30 03:44:08.688156 systemd[1]: Started sshd@12-37.27.214.59:22-139.178.68.195:44628.service - OpenSSH per-connection server daemon (139.178.68.195:44628). Apr 30 03:44:09.669981 sshd[5948]: Accepted publickey for core from 139.178.68.195 port 44628 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:44:09.672174 sshd[5948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:44:09.678272 systemd-logind[1480]: New session 13 of user core. Apr 30 03:44:09.683008 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 03:44:10.681449 sshd[5948]: pam_unix(sshd:session): session closed for user core Apr 30 03:44:10.687677 systemd-logind[1480]: Session 13 logged out. Waiting for processes to exit. Apr 30 03:44:10.688367 systemd[1]: sshd@12-37.27.214.59:22-139.178.68.195:44628.service: Deactivated successfully. Apr 30 03:44:10.691010 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 03:44:10.692186 systemd-logind[1480]: Removed session 13. Apr 30 03:44:10.856373 systemd[1]: Started sshd@13-37.27.214.59:22-139.178.68.195:44642.service - OpenSSH per-connection server daemon (139.178.68.195:44642). Apr 30 03:44:11.867302 sshd[5962]: Accepted publickey for core from 139.178.68.195 port 44642 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:44:11.869572 sshd[5962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:44:11.876984 systemd-logind[1480]: New session 14 of user core. Apr 30 03:44:11.883040 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 03:44:14.909324 sshd[5962]: pam_unix(sshd:session): session closed for user core Apr 30 03:44:14.921464 systemd[1]: sshd@13-37.27.214.59:22-139.178.68.195:44642.service: Deactivated successfully. Apr 30 03:44:14.924556 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 03:44:14.926133 systemd-logind[1480]: Session 14 logged out. Waiting for processes to exit. Apr 30 03:44:14.928330 systemd-logind[1480]: Removed session 14. Apr 30 03:44:15.080578 systemd[1]: Started sshd@14-37.27.214.59:22-139.178.68.195:44648.service - OpenSSH per-connection server daemon (139.178.68.195:44648). Apr 30 03:44:16.098700 sshd[5982]: Accepted publickey for core from 139.178.68.195 port 44648 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:44:16.102359 sshd[5982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:44:16.106860 systemd-logind[1480]: New session 15 of user core. Apr 30 03:44:16.109947 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 03:44:17.529430 sshd[5982]: pam_unix(sshd:session): session closed for user core Apr 30 03:44:17.532850 systemd[1]: sshd@14-37.27.214.59:22-139.178.68.195:44648.service: Deactivated successfully. Apr 30 03:44:17.535248 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 03:44:17.536749 systemd-logind[1480]: Session 15 logged out. Waiting for processes to exit. Apr 30 03:44:17.538774 systemd-logind[1480]: Removed session 15. Apr 30 03:44:17.703262 systemd[1]: Started sshd@15-37.27.214.59:22-139.178.68.195:54176.service - OpenSSH per-connection server daemon (139.178.68.195:54176). 
Apr 30 03:44:18.693630 sshd[5998]: Accepted publickey for core from 139.178.68.195 port 54176 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:44:18.695621 sshd[5998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:44:18.701908 systemd-logind[1480]: New session 16 of user core. Apr 30 03:44:18.709047 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 03:44:19.544695 sshd[5998]: pam_unix(sshd:session): session closed for user core Apr 30 03:44:19.549731 systemd[1]: sshd@15-37.27.214.59:22-139.178.68.195:54176.service: Deactivated successfully. Apr 30 03:44:19.552970 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 03:44:19.554547 systemd-logind[1480]: Session 16 logged out. Waiting for processes to exit. Apr 30 03:44:19.555993 systemd-logind[1480]: Removed session 16. Apr 30 03:44:24.715313 systemd[1]: Started sshd@16-37.27.214.59:22-139.178.68.195:54190.service - OpenSSH per-connection server daemon (139.178.68.195:54190). Apr 30 03:44:25.733381 sshd[6050]: Accepted publickey for core from 139.178.68.195 port 54190 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:44:25.735951 sshd[6050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:44:25.741117 systemd-logind[1480]: New session 17 of user core. Apr 30 03:44:25.745040 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 03:44:26.477394 sshd[6050]: pam_unix(sshd:session): session closed for user core Apr 30 03:44:26.481843 systemd-logind[1480]: Session 17 logged out. Waiting for processes to exit. Apr 30 03:44:26.482579 systemd[1]: sshd@16-37.27.214.59:22-139.178.68.195:54190.service: Deactivated successfully. Apr 30 03:44:26.484378 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 03:44:26.485990 systemd-logind[1480]: Removed session 17. Apr 30 03:44:31.651364 systemd[1]: Started sshd@17-37.27.214.59:22-139.178.68.195:35450.service - OpenSSH per-connection server daemon (139.178.68.195:35450). Apr 30 03:44:32.644102 sshd[6064]: Accepted publickey for core from 139.178.68.195 port 35450 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:44:32.645870 sshd[6064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:44:32.653254 systemd-logind[1480]: New session 18 of user core. Apr 30 03:44:32.658054 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 03:44:33.590599 sshd[6064]: pam_unix(sshd:session): session closed for user core Apr 30 03:44:33.595557 systemd[1]: sshd@17-37.27.214.59:22-139.178.68.195:35450.service: Deactivated successfully. Apr 30 03:44:33.602966 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 03:44:33.608096 systemd-logind[1480]: Session 18 logged out. Waiting for processes to exit. Apr 30 03:44:33.610529 systemd-logind[1480]: Removed session 18. Apr 30 03:44:48.863716 systemd[1]: cri-containerd-cf2a07f15c1984b8e5752ef81db5d32d417f9ae460342a98838e32fe2506d1dc.scope: Deactivated successfully. Apr 30 03:44:48.864871 systemd[1]: cri-containerd-cf2a07f15c1984b8e5752ef81db5d32d417f9ae460342a98838e32fe2506d1dc.scope: Consumed 2.100s CPU time, 17.1M memory peak, 0B memory swap peak. 
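The "Consumed 2.100s CPU time, 17.1M memory peak" figures in the scope deactivation above come from cgroup accounting on the container's transient scope unit. Assuming a cgroup v2 hierarchy, the underlying counters can be read straight from the filesystem, roughly as below (the path is an example, not taken from this host):

```go
// Hedged sketch: reading the cgroup v2 counters that back systemd's
// "Consumed ... CPU time, ... memory peak" accounting lines.
package main

import (
	"fmt"
	"os"
)

func main() {
	base := "/sys/fs/cgroup/system.slice" // example path; real scopes live deeper in the slice tree
	if cpu, err := os.ReadFile(base + "/cpu.stat"); err == nil {
		fmt.Printf("cpu.stat:\n%s", cpu) // usage_usec is the consumed CPU time
	}
	if peak, err := os.ReadFile(base + "/memory.peak"); err == nil {
		fmt.Printf("memory.peak: %s", peak) // peak memory, where the kernel provides it
	}
}
```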
Apr 30 03:44:48.926751 kubelet[2724]: E0430 03:44:48.926426 2724 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:60590->10.0.0.2:2379: read: connection timed out" Apr 30 03:44:49.025348 systemd[1]: cri-containerd-b03130e4419dcba60d4d1d037c16eee2c565fd80eec6d27a4308799c38fc3a59.scope: Deactivated successfully. Apr 30 03:44:49.025911 systemd[1]: cri-containerd-b03130e4419dcba60d4d1d037c16eee2c565fd80eec6d27a4308799c38fc3a59.scope: Consumed 6.482s CPU time, 19.5M memory peak, 0B memory swap peak. Apr 30 03:44:49.099947 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b03130e4419dcba60d4d1d037c16eee2c565fd80eec6d27a4308799c38fc3a59-rootfs.mount: Deactivated successfully. Apr 30 03:44:49.106213 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf2a07f15c1984b8e5752ef81db5d32d417f9ae460342a98838e32fe2506d1dc-rootfs.mount: Deactivated successfully. Apr 30 03:44:49.159203 containerd[1508]: time="2025-04-30T03:44:49.124059087Z" level=info msg="shim disconnected" id=b03130e4419dcba60d4d1d037c16eee2c565fd80eec6d27a4308799c38fc3a59 namespace=k8s.io Apr 30 03:44:49.159203 containerd[1508]: time="2025-04-30T03:44:49.117355924Z" level=info msg="shim disconnected" id=cf2a07f15c1984b8e5752ef81db5d32d417f9ae460342a98838e32fe2506d1dc namespace=k8s.io Apr 30 03:44:49.165313 containerd[1508]: time="2025-04-30T03:44:49.165274904Z" level=warning msg="cleaning up after shim disconnected" id=cf2a07f15c1984b8e5752ef81db5d32d417f9ae460342a98838e32fe2506d1dc namespace=k8s.io Apr 30 03:44:49.165418 containerd[1508]: time="2025-04-30T03:44:49.165401059Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:44:49.166436 containerd[1508]: time="2025-04-30T03:44:49.166389042Z" level=warning msg="cleaning up after shim disconnected" id=b03130e4419dcba60d4d1d037c16eee2c565fd80eec6d27a4308799c38fc3a59 namespace=k8s.io Apr 30 03:44:49.166436 containerd[1508]: time="2025-04-30T03:44:49.166415464Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:44:49.483890 kubelet[2724]: I0430 03:44:49.483587 2724 scope.go:117] "RemoveContainer" containerID="b03130e4419dcba60d4d1d037c16eee2c565fd80eec6d27a4308799c38fc3a59" Apr 30 03:44:49.525794 containerd[1508]: time="2025-04-30T03:44:49.525715048Z" level=info msg="CreateContainer within sandbox \"1978e7ce3042b8588e76911aac12d975b2bbad9e58e9d8c66a9635ec9db04578\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 30 03:44:49.762310 containerd[1508]: time="2025-04-30T03:44:49.761731513Z" level=info msg="CreateContainer within sandbox \"1978e7ce3042b8588e76911aac12d975b2bbad9e58e9d8c66a9635ec9db04578\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e233acace8940b058a88f306b4b4b5eda2eea622a3d22e789e2cf59e0f4ad72b\"" Apr 30 03:44:49.768299 containerd[1508]: time="2025-04-30T03:44:49.768203077Z" level=info msg="StartContainer for \"e233acace8940b058a88f306b4b4b5eda2eea622a3d22e789e2cf59e0f4ad72b\"" Apr 30 03:44:49.838982 systemd[1]: Started cri-containerd-e233acace8940b058a88f306b4b4b5eda2eea622a3d22e789e2cf59e0f4ad72b.scope - libcontainer container e233acace8940b058a88f306b4b4b5eda2eea622a3d22e789e2cf59e0f4ad72b. 
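The sequence above is a crashed control-plane container being replaced in place: the scope is deactivated, the shim disconnects and its rootfs mount is cleaned up, then kubelet removes the dead container record and creates Attempt:1 of kube-controller-manager inside the same pod sandbox before starting it. A schematic Go sketch of that loop; the runtimeService interface is a hypothetical stand-in for the CRI calls kubelet and containerd exchange, not kubelet's real types.

```go
// Hedged sketch of the remove -> create(Attempt+1) -> start sequence above.
package main

import "fmt"

type ContainerMetadata struct {
	Name    string
	Attempt uint32
}

// runtimeService is a hypothetical stand-in for the CRI runtime API.
type runtimeService interface {
	RemoveContainer(id string) error
	CreateContainer(sandboxID string, meta ContainerMetadata) (string, error)
	StartContainer(id string) error
}

// restart replaces a dead container in its existing sandbox, bumping the
// attempt counter as in the CreateContainer "Attempt:1" lines above.
func restart(rt runtimeService, sandboxID, deadID string, meta ContainerMetadata) (string, error) {
	if err := rt.RemoveContainer(deadID); err != nil {
		return "", err
	}
	meta.Attempt++
	newID, err := rt.CreateContainer(sandboxID, meta)
	if err != nil {
		return "", err
	}
	return newID, rt.StartContainer(newID)
}

func main() { fmt.Println("sketch only; wire a real CRI client to use restart()") }
```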
Apr 30 03:44:49.892054 containerd[1508]: time="2025-04-30T03:44:49.891997016Z" level=info msg="StartContainer for \"e233acace8940b058a88f306b4b4b5eda2eea622a3d22e789e2cf59e0f4ad72b\" returns successfully" Apr 30 03:44:49.989347 systemd[1]: cri-containerd-fea9cf58420734365a43a1aed495d6c33bf5e1ea65d7426af4bcd332b951c3a3.scope: Deactivated successfully. Apr 30 03:44:49.989879 systemd[1]: cri-containerd-fea9cf58420734365a43a1aed495d6c33bf5e1ea65d7426af4bcd332b951c3a3.scope: Consumed 5.280s CPU time. Apr 30 03:44:50.010221 containerd[1508]: time="2025-04-30T03:44:50.010128667Z" level=info msg="shim disconnected" id=fea9cf58420734365a43a1aed495d6c33bf5e1ea65d7426af4bcd332b951c3a3 namespace=k8s.io Apr 30 03:44:50.010221 containerd[1508]: time="2025-04-30T03:44:50.010197601Z" level=warning msg="cleaning up after shim disconnected" id=fea9cf58420734365a43a1aed495d6c33bf5e1ea65d7426af4bcd332b951c3a3 namespace=k8s.io Apr 30 03:44:50.010221 containerd[1508]: time="2025-04-30T03:44:50.010204625Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:44:50.102610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1898711421.mount: Deactivated successfully. Apr 30 03:44:50.103281 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fea9cf58420734365a43a1aed495d6c33bf5e1ea65d7426af4bcd332b951c3a3-rootfs.mount: Deactivated successfully. Apr 30 03:44:50.469717 kubelet[2724]: I0430 03:44:50.469333 2724 scope.go:117] "RemoveContainer" containerID="cf2a07f15c1984b8e5752ef81db5d32d417f9ae460342a98838e32fe2506d1dc" Apr 30 03:44:50.471268 kubelet[2724]: I0430 03:44:50.471174 2724 scope.go:117] "RemoveContainer" containerID="fea9cf58420734365a43a1aed495d6c33bf5e1ea65d7426af4bcd332b951c3a3" Apr 30 03:44:50.471984 containerd[1508]: time="2025-04-30T03:44:50.471937770Z" level=info msg="CreateContainer within sandbox \"90aa3755e08297251359aa303b205c9be9257798f293df4b159295d1c7b43b58\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 30 03:44:50.476630 containerd[1508]: time="2025-04-30T03:44:50.476600752Z" level=info msg="CreateContainer within sandbox \"39e83a759c895a08c1ed72c3e27a41f7260a3e25bb6aee390754d8847316e8b6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Apr 30 03:44:50.509476 containerd[1508]: time="2025-04-30T03:44:50.509429047Z" level=info msg="CreateContainer within sandbox \"39e83a759c895a08c1ed72c3e27a41f7260a3e25bb6aee390754d8847316e8b6\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"12bb5ab177d3296e9b5bd0946de1c222a08ceef445eba0254d946cc1b9d7f3bf\"" Apr 30 03:44:50.512124 containerd[1508]: time="2025-04-30T03:44:50.512090833Z" level=info msg="StartContainer for \"12bb5ab177d3296e9b5bd0946de1c222a08ceef445eba0254d946cc1b9d7f3bf\"" Apr 30 03:44:50.512623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2126403216.mount: Deactivated successfully. Apr 30 03:44:50.512749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount887472564.mount: Deactivated successfully. 
Apr 30 03:44:50.516620 containerd[1508]: time="2025-04-30T03:44:50.516581892Z" level=info msg="CreateContainer within sandbox \"90aa3755e08297251359aa303b205c9be9257798f293df4b159295d1c7b43b58\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"4e8ba55d7bf58e1e32a24b4a27684e175c3904d41f98c14df9f3152d5dc5d6b9\"" Apr 30 03:44:50.517540 containerd[1508]: time="2025-04-30T03:44:50.517148158Z" level=info msg="StartContainer for \"4e8ba55d7bf58e1e32a24b4a27684e175c3904d41f98c14df9f3152d5dc5d6b9\"" Apr 30 03:44:50.563013 systemd[1]: Started cri-containerd-4e8ba55d7bf58e1e32a24b4a27684e175c3904d41f98c14df9f3152d5dc5d6b9.scope - libcontainer container 4e8ba55d7bf58e1e32a24b4a27684e175c3904d41f98c14df9f3152d5dc5d6b9. Apr 30 03:44:50.566802 systemd[1]: Started cri-containerd-12bb5ab177d3296e9b5bd0946de1c222a08ceef445eba0254d946cc1b9d7f3bf.scope - libcontainer container 12bb5ab177d3296e9b5bd0946de1c222a08ceef445eba0254d946cc1b9d7f3bf. Apr 30 03:44:50.594927 containerd[1508]: time="2025-04-30T03:44:50.594893945Z" level=info msg="StartContainer for \"12bb5ab177d3296e9b5bd0946de1c222a08ceef445eba0254d946cc1b9d7f3bf\" returns successfully" Apr 30 03:44:50.622076 containerd[1508]: time="2025-04-30T03:44:50.621907959Z" level=info msg="StartContainer for \"4e8ba55d7bf58e1e32a24b4a27684e175c3904d41f98c14df9f3152d5dc5d6b9\" returns successfully" Apr 30 03:44:54.361941 kubelet[2724]: E0430 03:44:54.357736 2724 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:60406->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-3-9-916214001e.183afbd1bdfe2194 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-3-9-916214001e,UID:67830cb85a2ba31982b07d85ed4f003c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-9-916214001e,},FirstTimestamp:2025-04-30 03:44:43.813773716 +0000 UTC m=+335.918134815,LastTimestamp:2025-04-30 03:44:43.813773716 +0000 UTC m=+335.918134815,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-9-916214001e,}"
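The closing entry shows the other side of the etcd timeouts: kubelet tried to record a Warning event for the kube-apiserver's failed readiness probe, and the API server, itself unable to read from etcd, rejected it ("will not retry!"). Components emit such events through client-go's event recorder, roughly as in this hedged sketch; the kubeconfig path and object names are assumptions for illustration only.

```go
// Hedged sketch of posting a Warning event via client-go's recorder,
// mirroring the shape of the rejected kubelet event above.
package main

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/record"
)

func main() {
	// Kubeconfig path is an assumption for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Recorder wiring: broadcaster fans events out to the API server sink.
	b := record.NewBroadcaster()
	b.StartRecordingToSink(&typedcorev1.EventSinkImpl{Interface: cs.CoreV1().Events("")})
	rec := b.NewRecorder(scheme.Scheme, corev1.EventSource{Component: "kubelet-example"})

	// Hypothetical pod standing in for the static kube-apiserver pod.
	pod := &corev1.Pod{}
	pod.Name, pod.Namespace = "kube-apiserver-example", "kube-system"
	rec.Eventf(pod, corev1.EventTypeWarning, "Unhealthy",
		"Readiness probe failed: HTTP probe failed with statuscode: %d", 500)

	time.Sleep(time.Second) // crude flush before exit; sketch-level handling
	b.Shutdown()
}
```

If the API server cannot persist the event (here because its etcd read timed out), the recorder logs the rejection and drops it, which is exactly the "Server rejected event (will not retry!)" line above.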