Apr 30 01:18:25.906120 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 30 01:18:25.906146 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Apr 29 23:08:45 -00 2025
Apr 30 01:18:25.906156 kernel: KASLR enabled
Apr 30 01:18:25.906163 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 30 01:18:25.906169 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Apr 30 01:18:25.906174 kernel: random: crng init done
Apr 30 01:18:25.906181 kernel: ACPI: Early table checksum verification disabled
Apr 30 01:18:25.906187 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Apr 30 01:18:25.906193 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Apr 30 01:18:25.906201 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 01:18:25.906206 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 01:18:25.906212 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 01:18:25.906218 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 01:18:25.906224 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 01:18:25.906232 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 01:18:25.906240 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 01:18:25.906246 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 01:18:25.906253 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 01:18:25.906259 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 30 01:18:25.906265 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Apr 30 01:18:25.906272 kernel: NUMA: Failed to initialise from firmware
Apr 30 01:18:25.906278 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Apr 30 01:18:25.906284 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Apr 30 01:18:25.906290 kernel: Zone ranges:
Apr 30 01:18:25.906296 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 30 01:18:25.906304 kernel: DMA32 empty
Apr 30 01:18:25.906310 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Apr 30 01:18:25.906316 kernel: Movable zone start for each node
Apr 30 01:18:25.906323 kernel: Early memory node ranges
Apr 30 01:18:25.906329 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Apr 30 01:18:25.906336 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Apr 30 01:18:25.906342 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Apr 30 01:18:25.906348 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Apr 30 01:18:25.906354 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Apr 30 01:18:25.906361 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Apr 30 01:18:25.906367 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Apr 30 01:18:25.906373 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Apr 30 01:18:25.906381 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 30 01:18:25.906387 kernel: psci: probing for conduit method from ACPI.
Apr 30 01:18:25.906393 kernel: psci: PSCIv1.1 detected in firmware.
Apr 30 01:18:25.906402 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 30 01:18:25.906409 kernel: psci: Trusted OS migration not required
Apr 30 01:18:25.906415 kernel: psci: SMC Calling Convention v1.1
Apr 30 01:18:25.906424 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 30 01:18:25.906431 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Apr 30 01:18:25.906437 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Apr 30 01:18:25.906444 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 30 01:18:25.906451 kernel: Detected PIPT I-cache on CPU0
Apr 30 01:18:25.906457 kernel: CPU features: detected: GIC system register CPU interface
Apr 30 01:18:25.906464 kernel: CPU features: detected: Hardware dirty bit management
Apr 30 01:18:25.906470 kernel: CPU features: detected: Spectre-v4
Apr 30 01:18:25.906477 kernel: CPU features: detected: Spectre-BHB
Apr 30 01:18:25.906484 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 30 01:18:25.906492 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 30 01:18:25.906499 kernel: CPU features: detected: ARM erratum 1418040
Apr 30 01:18:25.906505 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 30 01:18:25.906512 kernel: alternatives: applying boot alternatives
Apr 30 01:18:25.906520 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a
Apr 30 01:18:25.906527 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 01:18:25.906533 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 30 01:18:25.906540 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 01:18:25.906547 kernel: Fallback order for Node 0: 0
Apr 30 01:18:25.906553 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Apr 30 01:18:25.906560 kernel: Policy zone: Normal
Apr 30 01:18:25.906568 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 01:18:25.906574 kernel: software IO TLB: area num 2.
Apr 30 01:18:25.906581 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Apr 30 01:18:25.906588 kernel: Memory: 3882872K/4096000K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 213128K reserved, 0K cma-reserved)
Apr 30 01:18:25.906595 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 01:18:25.906602 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 01:18:25.906609 kernel: rcu: RCU event tracing is enabled.
Apr 30 01:18:25.906616 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 01:18:25.906623 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 01:18:25.906629 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 01:18:25.906636 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 01:18:25.906644 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 01:18:25.906651 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 30 01:18:25.906657 kernel: GICv3: 256 SPIs implemented
Apr 30 01:18:25.906664 kernel: GICv3: 0 Extended SPIs implemented
Apr 30 01:18:25.906670 kernel: Root IRQ handler: gic_handle_irq
Apr 30 01:18:25.906677 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 30 01:18:25.906683 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 30 01:18:25.906690 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 30 01:18:25.906697 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Apr 30 01:18:25.906704 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Apr 30 01:18:25.908779 kernel: GICv3: using LPI property table @0x00000001000e0000
Apr 30 01:18:25.908791 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Apr 30 01:18:25.908803 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 01:18:25.908810 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 01:18:25.908817 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 30 01:18:25.908824 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 30 01:18:25.908831 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 30 01:18:25.908838 kernel: Console: colour dummy device 80x25
Apr 30 01:18:25.908845 kernel: ACPI: Core revision 20230628
Apr 30 01:18:25.908852 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 30 01:18:25.908859 kernel: pid_max: default: 32768 minimum: 301
Apr 30 01:18:25.908866 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 01:18:25.908874 kernel: landlock: Up and running.
Apr 30 01:18:25.908881 kernel: SELinux: Initializing.
Apr 30 01:18:25.908888 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 01:18:25.908895 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 01:18:25.908902 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 01:18:25.908909 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 01:18:25.908916 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 01:18:25.908924 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 01:18:25.908931 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 30 01:18:25.908939 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 30 01:18:25.908957 kernel: Remapping and enabling EFI services.
Apr 30 01:18:25.908965 kernel: smp: Bringing up secondary CPUs ...
Apr 30 01:18:25.908972 kernel: Detected PIPT I-cache on CPU1
Apr 30 01:18:25.908979 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 30 01:18:25.908986 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Apr 30 01:18:25.908993 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 01:18:25.909000 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 30 01:18:25.909007 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 01:18:25.909013 kernel: SMP: Total of 2 processors activated.
Apr 30 01:18:25.909023 kernel: CPU features: detected: 32-bit EL0 Support
Apr 30 01:18:25.909030 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 30 01:18:25.909042 kernel: CPU features: detected: Common not Private translations
Apr 30 01:18:25.909051 kernel: CPU features: detected: CRC32 instructions
Apr 30 01:18:25.909058 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 30 01:18:25.909065 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 30 01:18:25.909073 kernel: CPU features: detected: LSE atomic instructions
Apr 30 01:18:25.909080 kernel: CPU features: detected: Privileged Access Never
Apr 30 01:18:25.909087 kernel: CPU features: detected: RAS Extension Support
Apr 30 01:18:25.909096 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 30 01:18:25.909103 kernel: CPU: All CPU(s) started at EL1
Apr 30 01:18:25.909110 kernel: alternatives: applying system-wide alternatives
Apr 30 01:18:25.909117 kernel: devtmpfs: initialized
Apr 30 01:18:25.909125 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 01:18:25.909132 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 01:18:25.909140 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 01:18:25.909148 kernel: SMBIOS 3.0.0 present.
Apr 30 01:18:25.909156 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Apr 30 01:18:25.909163 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 01:18:25.909170 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 30 01:18:25.909177 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 30 01:18:25.909185 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 30 01:18:25.909192 kernel: audit: initializing netlink subsys (disabled)
Apr 30 01:18:25.909200 kernel: audit: type=2000 audit(0.013:1): state=initialized audit_enabled=0 res=1
Apr 30 01:18:25.909207 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 01:18:25.909216 kernel: cpuidle: using governor menu
Apr 30 01:18:25.909223 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 30 01:18:25.909230 kernel: ASID allocator initialised with 32768 entries
Apr 30 01:18:25.909238 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 01:18:25.909245 kernel: Serial: AMBA PL011 UART driver
Apr 30 01:18:25.909252 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 30 01:18:25.909259 kernel: Modules: 0 pages in range for non-PLT usage
Apr 30 01:18:25.909266 kernel: Modules: 509024 pages in range for PLT usage
Apr 30 01:18:25.909274 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 01:18:25.909282 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 01:18:25.909290 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 30 01:18:25.909297 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 30 01:18:25.909304 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 01:18:25.909311 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 01:18:25.909318 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 30 01:18:25.909326 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 30 01:18:25.909333 kernel: ACPI: Added _OSI(Module Device)
Apr 30 01:18:25.909340 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 01:18:25.909347 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 01:18:25.909356 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 01:18:25.909363 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 01:18:25.909370 kernel: ACPI: Interpreter enabled
Apr 30 01:18:25.909377 kernel: ACPI: Using GIC for interrupt routing
Apr 30 01:18:25.909385 kernel: ACPI: MCFG table detected, 1 entries
Apr 30 01:18:25.909392 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 30 01:18:25.909399 kernel: printk: console [ttyAMA0] enabled
Apr 30 01:18:25.909406 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 01:18:25.909559 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 01:18:25.909635 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 30 01:18:25.909702 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 30 01:18:25.910904 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 30 01:18:25.911004 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 30 01:18:25.911016 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 30 01:18:25.911024 kernel: PCI host bridge to bus 0000:00
Apr 30 01:18:25.911102 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 30 01:18:25.911172 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 30 01:18:25.911232 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 30 01:18:25.911291 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 01:18:25.911373 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 30 01:18:25.911454 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Apr 30 01:18:25.911522 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Apr 30 01:18:25.911593 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 30 01:18:25.911668 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 30 01:18:25.913844 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Apr 30 01:18:25.913980 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 30 01:18:25.914068 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Apr 30 01:18:25.914159 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 30 01:18:25.914247 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Apr 30 01:18:25.914333 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 30 01:18:25.914403 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Apr 30 01:18:25.914477 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 30 01:18:25.914543 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Apr 30 01:18:25.914616 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 30 01:18:25.914685 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Apr 30 01:18:25.914802 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 30 01:18:25.914873 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Apr 30 01:18:25.914955 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 30 01:18:25.915028 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Apr 30 01:18:25.915102 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 30 01:18:25.915175 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Apr 30 01:18:25.915256 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Apr 30 01:18:25.915328 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Apr 30 01:18:25.915406 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 30 01:18:25.915478 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Apr 30 01:18:25.915550 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 30 01:18:25.915695 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 30 01:18:25.918906 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 30 01:18:25.919011 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Apr 30 01:18:25.919095 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 30 01:18:25.919169 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Apr 30 01:18:25.919243 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Apr 30 01:18:25.919321 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 30 01:18:25.919399 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Apr 30 01:18:25.919475 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 30 01:18:25.919547 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Apr 30 01:18:25.919617 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Apr 30 01:18:25.919695 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 30 01:18:25.920844 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Apr 30 01:18:25.920933 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 30 01:18:25.921036 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 30 01:18:25.921131 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Apr 30 01:18:25.921204 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Apr 30 01:18:25.921276 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 30 01:18:25.921366 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Apr 30 01:18:25.921435 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Apr 30 01:18:25.921507 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Apr 30 01:18:25.921585 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Apr 30 01:18:25.921654 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Apr 30 01:18:25.921736 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Apr 30 01:18:25.921811 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 30 01:18:25.921879 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Apr 30 01:18:25.921957 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Apr 30 01:18:25.922040 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 30 01:18:25.922120 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Apr 30 01:18:25.922188 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Apr 30 01:18:25.922260 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 30 01:18:25.922327 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Apr 30 01:18:25.922394 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Apr 30 01:18:25.922465 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 30 01:18:25.922532 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Apr 30 01:18:25.922602 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Apr 30 01:18:25.922674 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 30 01:18:25.924206 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Apr 30 01:18:25.924289 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Apr 30 01:18:25.924360 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 30 01:18:25.924427 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Apr 30 01:18:25.924491 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Apr 30 01:18:25.924566 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 30 01:18:25.924632 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Apr 30 01:18:25.924696 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Apr 30 01:18:25.924854 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Apr 30 01:18:25.924926 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 30 01:18:25.925013 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Apr 30 01:18:25.925084 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 30 01:18:25.925162 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Apr 30 01:18:25.925228 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 30 01:18:25.925297 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Apr 30 01:18:25.925363 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 30 01:18:25.925432 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Apr 30 01:18:25.925499 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 30 01:18:25.925567 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Apr 30 01:18:25.925636 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 30 01:18:25.925702 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Apr 30 01:18:25.925855 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 30 01:18:25.925924 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Apr 30 01:18:25.926031 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 30 01:18:25.926103 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Apr 30 01:18:25.926170 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 30 01:18:25.926249 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Apr 30 01:18:25.926314 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Apr 30 01:18:25.926382 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Apr 30 01:18:25.926447 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Apr 30 01:18:25.926513 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Apr 30 01:18:25.926580 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Apr 30 01:18:25.926648 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Apr 30 01:18:25.926793 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Apr 30 01:18:25.926876 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Apr 30 01:18:25.926953 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Apr 30 01:18:25.927023 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Apr 30 01:18:25.927089 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Apr 30 01:18:25.927156 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Apr 30 01:18:25.927221 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Apr 30 01:18:25.927287 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Apr 30 01:18:25.927351 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Apr 30 01:18:25.927419 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Apr 30 01:18:25.927485 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Apr 30 01:18:25.927551 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Apr 30 01:18:25.927616 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Apr 30 01:18:25.927688 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Apr 30 01:18:25.927777 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Apr 30 01:18:25.927848 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 30 01:18:25.927920 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Apr 30 01:18:25.927998 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 30 01:18:25.928066 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 30 01:18:25.928132 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Apr 30 01:18:25.928199 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 30 01:18:25.928276 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Apr 30 01:18:25.928348 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 30 01:18:25.928413 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 30 01:18:25.928479 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Apr 30 01:18:25.928546 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 30 01:18:25.928620 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 30 01:18:25.928688 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Apr 30 01:18:25.928770 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 30 01:18:25.928842 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 30 01:18:25.928910 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Apr 30 01:18:25.929012 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 30 01:18:25.929093 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 30 01:18:25.929162 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 30 01:18:25.929229 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 30 01:18:25.929293 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Apr 30 01:18:25.929358 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 30 01:18:25.929436 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Apr 30 01:18:25.929505 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Apr 30 01:18:25.929571 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 30 01:18:25.929637 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 30 01:18:25.929702 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 30 01:18:25.929797 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 30 01:18:25.929871 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Apr 30 01:18:25.929940 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Apr 30 01:18:25.930027 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 30 01:18:25.930095 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 30 01:18:25.930162 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Apr 30 01:18:25.930228 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 30 01:18:25.930302 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Apr 30 01:18:25.930371 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Apr 30 01:18:25.930439 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Apr 30 01:18:25.930508 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 30 01:18:25.930575 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 30 01:18:25.930642 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Apr 30 01:18:25.931776 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 30 01:18:25.931901 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 30 01:18:25.931993 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 30 01:18:25.932063 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Apr 30 01:18:25.932129 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 30 01:18:25.932203 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 30 01:18:25.932271 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Apr 30 01:18:25.932338 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Apr 30 01:18:25.932406 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 30 01:18:25.932476 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 30 01:18:25.932536 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 30 01:18:25.932597 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 30 01:18:25.932688 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Apr 30 01:18:25.933833 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Apr 30 01:18:25.933912 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 30 01:18:25.934012 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Apr 30 01:18:25.934080 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Apr 30 01:18:25.934165 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 30 01:18:25.934236 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Apr 30 01:18:25.934309 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Apr 30 01:18:25.934374 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 30 01:18:25.934453 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Apr 30 01:18:25.934518 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Apr 30 01:18:25.934578 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 30 01:18:25.934648 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Apr 30 01:18:25.935836 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Apr 30 01:18:25.936005 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 30 01:18:25.936089 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Apr 30 01:18:25.936152 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Apr 30 01:18:25.936221 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 30 01:18:25.936294 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Apr 30 01:18:25.936355 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Apr 30 01:18:25.936415 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 30 01:18:25.936486 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Apr 30 01:18:25.936547 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Apr 30 01:18:25.936620 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 30 01:18:25.936696 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Apr 30 01:18:25.936815 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Apr 30 01:18:25.936884 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 30 01:18:25.936895 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 30 01:18:25.936903 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 30 01:18:25.936911 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 30 01:18:25.936921 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 30 01:18:25.936931 kernel: iommu: Default domain type: Translated
Apr 30 01:18:25.936939 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 30 01:18:25.936963 kernel: efivars: Registered efivars operations
Apr 30 01:18:25.936971 kernel: vgaarb: loaded
Apr 30 01:18:25.936979 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 30 01:18:25.936987 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 01:18:25.936995 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 01:18:25.937005 kernel: pnp: PnP ACPI init
Apr 30 01:18:25.937100 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 30 01:18:25.937112 kernel: pnp: PnP ACPI: found 1 devices
Apr 30 01:18:25.937122 kernel: NET: Registered PF_INET protocol family
Apr 30 01:18:25.937130 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 01:18:25.937138 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 30 01:18:25.937146 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 01:18:25.937154 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 01:18:25.937162 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 30 01:18:25.937169 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 30 01:18:25.937177 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 01:18:25.937185 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 01:18:25.937197 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 01:18:25.937275 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Apr 30 01:18:25.937286 kernel: PCI: CLS 0 bytes, default 64
Apr 30 01:18:25.937294 kernel: kvm [1]: HYP mode not available
Apr 30 01:18:25.937302 kernel: Initialise system trusted keyrings
Apr 30 01:18:25.937310 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 30 01:18:25.937318 kernel: Key type asymmetric registered
Apr 30 01:18:25.937325 kernel: Asymmetric key parser 'x509' registered
Apr 30 01:18:25.937333 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 30 01:18:25.937348 kernel: io scheduler mq-deadline registered
Apr 30 01:18:25.937357 kernel: io scheduler kyber registered
Apr 30 01:18:25.937366 kernel: io scheduler bfq registered
Apr 30 01:18:25.937375 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Apr 30 01:18:25.937446 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Apr 30 01:18:25.937515 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Apr 30 01:18:25.937583 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 01:18:25.937656 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Apr 30 01:18:25.939513 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Apr 30 01:18:25.939608 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 01:18:25.939680 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Apr 30 01:18:25.939786 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Apr 30 01:18:25.939856 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 01:18:25.939930 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Apr 30 01:18:25.940029 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Apr 30 01:18:25.940099 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 01:18:25.940171 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Apr 30 01:18:25.940239 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Apr 30 01:18:25.940306 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 01:18:25.940377 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Apr 30 01:18:25.940448 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Apr 30 01:18:25.940514 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 01:18:25.940586 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Apr 30 01:18:25.940653 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Apr 30 01:18:25.940797 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 01:18:25.940881 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Apr 30 01:18:25.940998 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Apr 30 01:18:25.941088 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 01:18:25.941100 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Apr 30 01:18:25.941176 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Apr 30 01:18:25.941255 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Apr 30 01:18:25.941325 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 01:18:25.941340 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 30 01:18:25.941348 kernel: ACPI: button: Power Button [PWRB]
Apr 30 01:18:25.941356 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 30 01:18:25.941431 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Apr 30 01:18:25.941507 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Apr 30 01:18:25.941518 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 01:18:25.941526 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Apr 30 01:18:25.941593 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Apr 30 01:18:25.941608 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Apr 30 01:18:25.941617 kernel: thunder_xcv, ver 1.0
Apr 30 01:18:25.941625 kernel: thunder_bgx, ver 1.0
Apr 30 01:18:25.941633 kernel: nicpf, ver 1.0
Apr 30 01:18:25.941640 kernel: nicvf, ver 1.0
Apr 30 01:18:25.941742 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 30 01:18:25.941815 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-04-30T01:18:25 UTC (1745975905)
Apr 30 01:18:25.941825 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 30 01:18:25.941837 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Apr 30 01:18:25.941845 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 30 01:18:25.941853 kernel: watchdog: Hard watchdog permanently disabled
Apr 30 01:18:25.941861 kernel: NET: Registered PF_INET6 protocol family
Apr 30 01:18:25.941869 kernel: Segment Routing with IPv6
Apr 30 01:18:25.941877 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 01:18:25.941885 kernel: NET: Registered PF_PACKET protocol family
Apr 30 01:18:25.941895 kernel: Key type dns_resolver registered
Apr 30 01:18:25.941905 kernel: registered taskstats version 1
Apr 30 01:18:25.941913 kernel: Loading compiled-in X.509 certificates
Apr 30 01:18:25.941925 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: e2b28159d3a83b6f5d5db45519e470b1b834e378'
Apr 30 01:18:25.941933 kernel: Key type .fscrypt registered
Apr 30 01:18:25.941952 kernel: Key type fscrypt-provisioning registered
Apr 30 01:18:25.941964 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 01:18:25.941971 kernel: ima: Allocated hash algorithm: sha1
Apr 30 01:18:25.941979 kernel: ima: No architecture policies found
Apr 30 01:18:25.941987 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 30 01:18:25.941995 kernel: clk: Disabling unused clocks
Apr 30 01:18:25.942005 kernel: Freeing unused kernel memory: 39424K
Apr 30 01:18:25.942013 kernel: Run /init as init process
Apr 30 01:18:25.942022 kernel: with arguments:
Apr 30 01:18:25.942030 kernel: /init
Apr 30 01:18:25.942038 kernel: with environment:
Apr 30 01:18:25.942046 kernel: HOME=/
Apr 30 01:18:25.942053 kernel: TERM=linux
Apr 30 01:18:25.942061 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 01:18:25.942072 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 01:18:25.942087 systemd[1]: Detected virtualization kvm.
Apr 30 01:18:25.942097 systemd[1]: Detected architecture arm64.
Apr 30 01:18:25.942107 systemd[1]: Running in initrd.
Apr 30 01:18:25.942117 systemd[1]: No hostname configured, using default hostname.
Apr 30 01:18:25.942125 systemd[1]: Hostname set to <localhost>.
Apr 30 01:18:25.942133 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 01:18:25.942142 systemd[1]: Queued start job for default target initrd.target.
Apr 30 01:18:25.942152 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 01:18:25.942160 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 01:18:25.942172 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 01:18:25.942180 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 01:18:25.942189 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 01:18:25.942198 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 01:18:25.942208 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 01:18:25.942218 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 01:18:25.942227 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 01:18:25.942235 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 01:18:25.942245 systemd[1]: Reached target paths.target - Path Units.
Apr 30 01:18:25.942253 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 01:18:25.942261 systemd[1]: Reached target swap.target - Swaps.
Apr 30 01:18:25.942270 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 01:18:25.942281 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 01:18:25.942292 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 01:18:25.942303 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 01:18:25.942312 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 01:18:25.942322 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 01:18:25.942332 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 01:18:25.942342 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 01:18:25.942351 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 01:18:25.942361 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 01:18:25.942370 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 01:18:25.942381 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 01:18:25.942391 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 01:18:25.942403 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 01:18:25.942412 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 01:18:25.942420 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 01:18:25.942430 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 01:18:25.942463 systemd-journald[236]: Collecting audit messages is disabled.
Apr 30 01:18:25.942486 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 01:18:25.942495 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 01:18:25.942504 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 01:18:25.942513 kernel: Bridge firewalling registered
Apr 30 01:18:25.942522 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 01:18:25.942531 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 01:18:25.942539 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 01:18:25.942548 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 01:18:25.942558 systemd-journald[236]: Journal started
Apr 30 01:18:25.942579 systemd-journald[236]: Runtime Journal (/run/log/journal/a6459c0efd1049bdb5e32947f2679d39) is 8.0M, max 76.6M, 68.6M free.
Apr 30 01:18:25.944388 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 01:18:25.902221 systemd-modules-load[237]: Inserted module 'overlay'
Apr 30 01:18:25.918681 systemd-modules-load[237]: Inserted module 'br_netfilter'
Apr 30 01:18:25.949750 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 01:18:25.950924 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 01:18:25.962004 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 01:18:25.964485 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 01:18:25.969770 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 01:18:25.974096 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 01:18:25.978015 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 01:18:25.984001 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 01:18:25.996751 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 01:18:26.004633 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 01:18:26.007454 dracut-cmdline[271]: dracut-dracut-053
Apr 30 01:18:26.011187 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a
Apr 30 01:18:26.036125 systemd-resolved[278]: Positive Trust Anchors:
Apr 30 01:18:26.036144 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 01:18:26.036176 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 01:18:26.043839 systemd-resolved[278]: Defaulting to hostname 'linux'.
Apr 30 01:18:26.045670 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 01:18:26.046917 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 01:18:26.128789 kernel: SCSI subsystem initialized
Apr 30 01:18:26.133763 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 01:18:26.141754 kernel: iscsi: registered transport (tcp)
Apr 30 01:18:26.154794 kernel: iscsi: registered transport (qla4xxx)
Apr 30 01:18:26.154883 kernel: QLogic iSCSI HBA Driver
Apr 30 01:18:26.198320 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 01:18:26.207072 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 01:18:26.226878 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 01:18:26.226973 kernel: device-mapper: uevent: version 1.0.3
Apr 30 01:18:26.227870 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 01:18:26.277759 kernel: raid6: neonx8 gen() 15704 MB/s
Apr 30 01:18:26.294840 kernel: raid6: neonx4 gen() 14542 MB/s
Apr 30 01:18:26.311760 kernel: raid6: neonx2 gen() 13189 MB/s
Apr 30 01:18:26.328767 kernel: raid6: neonx1 gen() 10450 MB/s
Apr 30 01:18:26.345774 kernel: raid6: int64x8 gen() 6900 MB/s
Apr 30 01:18:26.362785 kernel: raid6: int64x4 gen() 7315 MB/s
Apr 30 01:18:26.379809 kernel: raid6: int64x2 gen() 6108 MB/s
Apr 30 01:18:26.396770 kernel: raid6: int64x1 gen() 5037 MB/s
Apr 30 01:18:26.396848 kernel: raid6: using algorithm neonx8 gen() 15704 MB/s
Apr 30 01:18:26.413794 kernel: raid6: .... xor() 11873 MB/s, rmw enabled
Apr 30 01:18:26.413898 kernel: raid6: using neon recovery algorithm
Apr 30 01:18:26.418742 kernel: xor: measuring software checksum speed
Apr 30 01:18:26.418785 kernel: 8regs : 17337 MB/sec
Apr 30 01:18:26.418802 kernel: 32regs : 19664 MB/sec
Apr 30 01:18:26.419740 kernel: arm64_neon : 24462 MB/sec
Apr 30 01:18:26.419790 kernel: xor: using function: arm64_neon (24462 MB/sec)
Apr 30 01:18:26.470803 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 01:18:26.484602 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 01:18:26.491109 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 01:18:26.517763 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Apr 30 01:18:26.521860 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 01:18:26.531899 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 01:18:26.546311 dracut-pre-trigger[457]: rd.md=0: removing MD RAID activation
Apr 30 01:18:26.583502 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 01:18:26.587992 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 01:18:26.650133 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 01:18:26.660913 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 01:18:26.676280 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 01:18:26.676972 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 01:18:26.678682 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 01:18:26.679285 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 01:18:26.688227 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 01:18:26.707473 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 01:18:26.750738 kernel: ACPI: bus type USB registered
Apr 30 01:18:26.755797 kernel: usbcore: registered new interface driver usbfs
Apr 30 01:18:26.755889 kernel: usbcore: registered new interface driver hub
Apr 30 01:18:26.756144 kernel: usbcore: registered new device driver usb
Apr 30 01:18:26.758903 kernel: scsi host0: Virtio SCSI HBA
Apr 30 01:18:26.775970 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 30 01:18:26.776090 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 30 01:18:26.785927 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 01:18:26.787519 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 01:18:26.790540 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 01:18:26.792159 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 01:18:26.792331 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 01:18:26.796363 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 01:18:26.808038 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 01:18:26.813768 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 30 01:18:26.829885 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Apr 30 01:18:26.830021 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Apr 30 01:18:26.830105 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 30 01:18:26.830188 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Apr 30 01:18:26.830268 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Apr 30 01:18:26.830350 kernel: hub 1-0:1.0: USB hub found
Apr 30 01:18:26.830456 kernel: hub 1-0:1.0: 4 ports detected
Apr 30 01:18:26.830539 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Apr 30 01:18:26.830634 kernel: hub 2-0:1.0: USB hub found
Apr 30 01:18:26.831163 kernel: hub 2-0:1.0: 4 ports detected
Apr 30 01:18:26.831817 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 01:18:26.844926 kernel: sr 0:0:0:0: Power-on or device reset occurred
Apr 30 01:18:26.849727 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Apr 30 01:18:26.849882 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 30 01:18:26.849894 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Apr 30 01:18:26.848951 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 01:18:26.861669 kernel: sd 0:0:0:1: Power-on or device reset occurred
Apr 30 01:18:26.868652 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Apr 30 01:18:26.868855 kernel: sd 0:0:0:1: [sda] Write Protect is off
Apr 30 01:18:26.868999 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Apr 30 01:18:26.869107 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 30 01:18:26.869196 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 01:18:26.869207 kernel: GPT:17805311 != 80003071
Apr 30 01:18:26.869215 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 01:18:26.869225 kernel: GPT:17805311 != 80003071
Apr 30 01:18:26.869234 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 01:18:26.869243 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 01:18:26.869256 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Apr 30 01:18:26.879836 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 01:18:26.926728 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (506) Apr 30 01:18:26.926817 kernel: BTRFS: device fsid 7216ceb7-401c-42de-84de-44adb68241e4 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (531) Apr 30 01:18:26.933125 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 30 01:18:26.941191 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 30 01:18:26.950325 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 30 01:18:26.952909 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 30 01:18:26.959279 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 30 01:18:26.981177 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 01:18:26.990329 disk-uuid[577]: Primary Header is updated. Apr 30 01:18:26.990329 disk-uuid[577]: Secondary Entries is updated. Apr 30 01:18:26.990329 disk-uuid[577]: Secondary Header is updated. Apr 30 01:18:26.996798 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 01:18:27.003763 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 01:18:27.066806 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 30 01:18:27.308832 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Apr 30 01:18:27.447158 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Apr 30 01:18:27.447266 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 30 01:18:27.447679 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Apr 30 01:18:27.503292 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Apr 30 01:18:27.503668 kernel: usbcore: registered new interface driver usbhid Apr 30 01:18:27.503694 kernel: usbhid: USB HID core driver Apr 30 01:18:28.007762 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 01:18:28.011864 disk-uuid[578]: The operation has completed successfully. Apr 30 01:18:28.062287 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 01:18:28.063758 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 01:18:28.077963 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 01:18:28.084760 sh[592]: Success Apr 30 01:18:28.102152 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 30 01:18:28.168902 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 01:18:28.170304 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 30 01:18:28.172614 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
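[annotation] verity-setup.service above maps /dev/mapper/usr with dm-verity using the hardware sha256-ce implementation. A minimal sketch of the underlying idea (not the real on-disk format, which also salts digests and verifies blocks lazily on read): hash every data block, pack the digests into blocks, and repeat until a single root hash remains, which must match the trusted value supplied at boot.

    import hashlib

    BLOCK = 4096  # dm-verity's default data/hash block size

    def verity_root_hash(data: bytes) -> bytes:
        level = [hashlib.sha256(data[i:i + BLOCK]).digest()
                 for i in range(0, len(data), BLOCK)]
        while len(level) > 1:
            packed = b"".join(level)
            level = [hashlib.sha256(packed[i:i + BLOCK]).digest()
                     for i in range(0, len(packed), BLOCK)]
        return level[0]

    image = bytes(BLOCK * 8)              # stand-in for the /usr image
    trusted = verity_root_hash(image)     # value the bootloader would pin
    assert verity_root_hash(image) == trusted
    print(trusted.hex())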
Apr 30 01:18:28.201993 kernel: BTRFS info (device dm-0): first mount of filesystem 7216ceb7-401c-42de-84de-44adb68241e4 Apr 30 01:18:28.202068 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 30 01:18:28.202083 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 01:18:28.202096 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 01:18:28.202801 kernel: BTRFS info (device dm-0): using free space tree Apr 30 01:18:28.208790 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 30 01:18:28.210591 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 01:18:28.211923 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 01:18:28.217160 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 01:18:28.220008 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 01:18:28.230356 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 01:18:28.230408 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 01:18:28.230420 kernel: BTRFS info (device sda6): using free space tree Apr 30 01:18:28.234748 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 01:18:28.234815 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 01:18:28.246817 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 30 01:18:28.248325 kernel: BTRFS info (device sda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 01:18:28.257745 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 01:18:28.266992 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 01:18:28.360145 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 01:18:28.369958 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 01:18:28.370602 ignition[682]: Ignition 2.19.0 Apr 30 01:18:28.370609 ignition[682]: Stage: fetch-offline Apr 30 01:18:28.373441 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 01:18:28.370651 ignition[682]: no configs at "/usr/lib/ignition/base.d" Apr 30 01:18:28.370660 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 01:18:28.370862 ignition[682]: parsed url from cmdline: "" Apr 30 01:18:28.370866 ignition[682]: no config URL provided Apr 30 01:18:28.370871 ignition[682]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 01:18:28.370880 ignition[682]: no config at "/usr/lib/ignition/user.ign" Apr 30 01:18:28.370885 ignition[682]: failed to fetch config: resource requires networking Apr 30 01:18:28.371107 ignition[682]: Ignition finished successfully Apr 30 01:18:28.393504 systemd-networkd[778]: lo: Link UP Apr 30 01:18:28.393516 systemd-networkd[778]: lo: Gained carrier Apr 30 01:18:28.395255 systemd-networkd[778]: Enumeration completed Apr 30 01:18:28.395634 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 01:18:28.396852 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 30 01:18:28.396855 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 01:18:28.397611 systemd[1]: Reached target network.target - Network. Apr 30 01:18:28.398893 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 01:18:28.398896 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 01:18:28.399488 systemd-networkd[778]: eth0: Link UP Apr 30 01:18:28.399492 systemd-networkd[778]: eth0: Gained carrier Apr 30 01:18:28.399500 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 01:18:28.407458 systemd-networkd[778]: eth1: Link UP Apr 30 01:18:28.407470 systemd-networkd[778]: eth1: Gained carrier Apr 30 01:18:28.407482 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 01:18:28.409074 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 30 01:18:28.424760 ignition[781]: Ignition 2.19.0 Apr 30 01:18:28.424771 ignition[781]: Stage: fetch Apr 30 01:18:28.425004 ignition[781]: no configs at "/usr/lib/ignition/base.d" Apr 30 01:18:28.425017 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 01:18:28.425128 ignition[781]: parsed url from cmdline: "" Apr 30 01:18:28.425132 ignition[781]: no config URL provided Apr 30 01:18:28.425137 ignition[781]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 01:18:28.425146 ignition[781]: no config at "/usr/lib/ignition/user.ign" Apr 30 01:18:28.425167 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Apr 30 01:18:28.426016 ignition[781]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 30 01:18:28.438838 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 01:18:28.460814 systemd-networkd[778]: eth0: DHCPv4 address 168.119.50.83/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 30 01:18:28.626265 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Apr 30 01:18:28.633453 ignition[781]: GET result: OK Apr 30 01:18:28.633626 ignition[781]: parsing config with SHA512: 134b981693823c658adb08569145d0eafef4a84cc5047f8575ea4bcef45e202ebabe96388d4ced20899528ca378a0f8a4e71e0e8ca1aee071b49dbdb034ce9ac Apr 30 01:18:28.640539 unknown[781]: fetched base config from "system" Apr 30 01:18:28.640549 unknown[781]: fetched base config from "system" Apr 30 01:18:28.640975 ignition[781]: fetch: fetch complete Apr 30 01:18:28.640554 unknown[781]: fetched user config from "hetzner" Apr 30 01:18:28.640981 ignition[781]: fetch: fetch passed Apr 30 01:18:28.645078 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 30 01:18:28.641029 ignition[781]: Ignition finished successfully Apr 30 01:18:28.650923 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
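[annotation] The fetch stage above shows attempt #1 failing with "network is unreachable" while DHCP is still running, then attempt #2 succeeding and the config being identified by its SHA512. A minimal stdlib sketch of that retry-until-the-metadata-service-answers behaviour, using the endpoint from the log:

    import hashlib
    import time
    import urllib.request

    URL = "http://169.254.169.254/hetzner/v1/userdata"  # endpoint from the log

    def fetch_userdata(retries=5, delay=1.0):
        for attempt in range(1, retries + 1):
            try:
                with urllib.request.urlopen(URL, timeout=10) as resp:
                    data = resp.read()
                print(f"GET result: OK (attempt #{attempt})")
                print("config SHA512:", hashlib.sha512(data).hexdigest())
                return data
            except OSError as exc:  # urllib's URLError subclasses OSError
                print(f"GET error (attempt #{attempt}): {exc}")
                time.sleep(delay)
        raise RuntimeError("failed to fetch config")

    fetch_userdata()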
Apr 30 01:18:28.665550 ignition[788]: Ignition 2.19.0 Apr 30 01:18:28.665567 ignition[788]: Stage: kargs Apr 30 01:18:28.665780 ignition[788]: no configs at "/usr/lib/ignition/base.d" Apr 30 01:18:28.665789 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 01:18:28.666886 ignition[788]: kargs: kargs passed Apr 30 01:18:28.666954 ignition[788]: Ignition finished successfully Apr 30 01:18:28.670281 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 01:18:28.683081 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 01:18:28.697510 ignition[795]: Ignition 2.19.0 Apr 30 01:18:28.697520 ignition[795]: Stage: disks Apr 30 01:18:28.697691 ignition[795]: no configs at "/usr/lib/ignition/base.d" Apr 30 01:18:28.697701 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 01:18:28.698725 ignition[795]: disks: disks passed Apr 30 01:18:28.698777 ignition[795]: Ignition finished successfully Apr 30 01:18:28.700735 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 01:18:28.701778 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 01:18:28.702583 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 01:18:28.703659 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 01:18:28.704660 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 01:18:28.705590 systemd[1]: Reached target basic.target - Basic System. Apr 30 01:18:28.711019 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 01:18:28.731398 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Apr 30 01:18:28.735525 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 01:18:28.744804 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 01:18:28.795746 kernel: EXT4-fs (sda9): mounted filesystem c13301f3-70ec-4948-963a-f1db0e953273 r/w with ordered data mode. Quota mode: none. Apr 30 01:18:28.796265 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 01:18:28.797851 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 01:18:28.808007 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 01:18:28.811813 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 01:18:28.816005 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 30 01:18:28.816671 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 01:18:28.816737 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 01:18:28.826953 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (812) Apr 30 01:18:28.827004 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 01:18:28.827015 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 01:18:28.827821 kernel: BTRFS info (device sda6): using free space tree Apr 30 01:18:28.832781 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Apr 30 01:18:28.836730 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 01:18:28.836787 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 01:18:28.839905 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 30 01:18:28.843013 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 01:18:28.898772 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 01:18:28.901650 coreos-metadata[814]: Apr 30 01:18:28.901 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Apr 30 01:18:28.903304 coreos-metadata[814]: Apr 30 01:18:28.903 INFO Fetch successful Apr 30 01:18:28.903304 coreos-metadata[814]: Apr 30 01:18:28.903 INFO wrote hostname ci-4081-3-3-a-62378e86a2 to /sysroot/etc/hostname Apr 30 01:18:28.907768 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Apr 30 01:18:28.907117 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 01:18:28.914194 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 01:18:28.919639 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 01:18:29.020100 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 01:18:29.024870 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 01:18:29.026807 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 01:18:29.037755 kernel: BTRFS info (device sda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 01:18:29.061063 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 01:18:29.063418 ignition[929]: INFO : Ignition 2.19.0 Apr 30 01:18:29.063418 ignition[929]: INFO : Stage: mount Apr 30 01:18:29.063418 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 01:18:29.063418 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 01:18:29.066339 ignition[929]: INFO : mount: mount passed Apr 30 01:18:29.066339 ignition[929]: INFO : Ignition finished successfully Apr 30 01:18:29.066047 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 01:18:29.070075 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 01:18:29.201840 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 01:18:29.211062 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 01:18:29.221766 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (941) Apr 30 01:18:29.223726 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 01:18:29.223780 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 01:18:29.223797 kernel: BTRFS info (device sda6): using free space tree Apr 30 01:18:29.226746 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 01:18:29.226790 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 01:18:29.229301 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
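[annotation] flatcar-metadata-hostname.service above fetches the hostname from the Hetzner metadata service and writes it into the not-yet-pivoted root. A minimal sketch of that step, with the URL and target path taken from the log:

    import urllib.request

    META = "http://169.254.169.254/hetzner/v1/metadata/hostname"

    with urllib.request.urlopen(META, timeout=10) as resp:
        hostname = resp.read().decode().strip()

    with open("/sysroot/etc/hostname", "w") as f:
        f.write(hostname + "\n")
    print(f"wrote hostname {hostname} to /sysroot/etc/hostname")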
Apr 30 01:18:29.262547 ignition[958]: INFO : Ignition 2.19.0 Apr 30 01:18:29.262547 ignition[958]: INFO : Stage: files Apr 30 01:18:29.264220 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 01:18:29.264220 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 01:18:29.264220 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Apr 30 01:18:29.266776 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 01:18:29.266776 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 01:18:29.268886 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 01:18:29.268886 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 01:18:29.270536 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 01:18:29.269200 unknown[958]: wrote ssh authorized keys file for user: core Apr 30 01:18:29.272234 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 30 01:18:29.272234 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Apr 30 01:18:29.358358 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 01:18:29.595401 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 30 01:18:29.595401 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 30 01:18:29.599836 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 01:18:29.599836 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 01:18:29.599836 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 01:18:29.599836 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 01:18:29.599836 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 01:18:29.599836 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 01:18:29.599836 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 01:18:29.599836 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 01:18:29.599836 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 01:18:29.599836 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 01:18:29.599836 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 01:18:29.599836 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 01:18:29.599836 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Apr 30 01:18:29.702972 systemd-networkd[778]: eth0: Gained IPv6LL Apr 30 01:18:29.767025 systemd-networkd[778]: eth1: Gained IPv6LL Apr 30 01:18:30.190475 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 30 01:18:30.427319 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 01:18:30.427319 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 30 01:18:30.430331 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 01:18:30.431762 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 01:18:30.431762 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 30 01:18:30.431762 ignition[958]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 30 01:18:30.431762 ignition[958]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 30 01:18:30.431762 ignition[958]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 30 01:18:30.431762 ignition[958]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 30 01:18:30.431762 ignition[958]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Apr 30 01:18:30.431762 ignition[958]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 01:18:30.431762 ignition[958]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 01:18:30.431762 ignition[958]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 01:18:30.431762 ignition[958]: INFO : files: files passed Apr 30 01:18:30.431762 ignition[958]: INFO : Ignition finished successfully Apr 30 01:18:30.434827 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 01:18:30.441883 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 01:18:30.444879 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 30 01:18:30.448563 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 01:18:30.448696 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Apr 30 01:18:30.473001 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 01:18:30.473001 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 01:18:30.475886 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 01:18:30.477196 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 01:18:30.479229 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 01:18:30.487084 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 01:18:30.526870 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 01:18:30.527813 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 01:18:30.529199 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 01:18:30.530053 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 01:18:30.531646 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 01:18:30.532888 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 01:18:30.558775 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 01:18:30.565944 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 01:18:30.577391 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 01:18:30.578197 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 01:18:30.579404 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 01:18:30.580664 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 01:18:30.580821 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 01:18:30.582307 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 01:18:30.582969 systemd[1]: Stopped target basic.target - Basic System. Apr 30 01:18:30.584049 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 01:18:30.585033 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 01:18:30.586046 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 01:18:30.587105 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 01:18:30.588343 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 01:18:30.589504 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 01:18:30.590479 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 01:18:30.591620 systemd[1]: Stopped target swap.target - Swaps. Apr 30 01:18:30.592513 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 01:18:30.592653 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 01:18:30.593879 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 01:18:30.594522 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 01:18:30.595551 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 01:18:30.598789 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Apr 30 01:18:30.600092 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 01:18:30.600311 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 01:18:30.602475 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 01:18:30.602692 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 01:18:30.604818 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 01:18:30.605067 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 01:18:30.606412 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 30 01:18:30.606620 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 01:18:30.616334 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 01:18:30.617405 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 01:18:30.619828 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 01:18:30.623980 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 01:18:30.624917 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 01:18:30.625567 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 01:18:30.628272 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 01:18:30.630726 ignition[1010]: INFO : Ignition 2.19.0 Apr 30 01:18:30.630726 ignition[1010]: INFO : Stage: umount Apr 30 01:18:30.630726 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 01:18:30.630726 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 01:18:30.634843 ignition[1010]: INFO : umount: umount passed Apr 30 01:18:30.634843 ignition[1010]: INFO : Ignition finished successfully Apr 30 01:18:30.631937 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 01:18:30.636542 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 01:18:30.636653 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 01:18:30.640364 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 01:18:30.640473 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 01:18:30.643570 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 01:18:30.643639 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 01:18:30.646636 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 01:18:30.646689 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 01:18:30.650010 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 30 01:18:30.650083 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 30 01:18:30.653604 systemd[1]: Stopped target network.target - Network. Apr 30 01:18:30.660798 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 01:18:30.660891 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 01:18:30.662825 systemd[1]: Stopped target paths.target - Path Units. Apr 30 01:18:30.663812 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 01:18:30.667781 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 01:18:30.675814 systemd[1]: Stopped target slices.target - Slice Units. 
Apr 30 01:18:30.677273 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 01:18:30.679724 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 01:18:30.680470 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 01:18:30.683429 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 01:18:30.684002 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 01:18:30.685565 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 01:18:30.685638 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 01:18:30.687633 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 01:18:30.687845 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 01:18:30.688798 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 01:18:30.689796 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 01:18:30.691818 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 01:18:30.692363 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 01:18:30.692456 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 01:18:30.693810 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 01:18:30.693904 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 01:18:30.700834 systemd-networkd[778]: eth0: DHCPv6 lease lost Apr 30 01:18:30.704031 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 01:18:30.704186 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 01:18:30.706802 systemd-networkd[778]: eth1: DHCPv6 lease lost Apr 30 01:18:30.707437 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 01:18:30.707501 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 01:18:30.710559 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 01:18:30.711264 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 01:18:30.712477 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 01:18:30.712542 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 01:18:30.718943 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 01:18:30.719402 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 01:18:30.719470 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 01:18:30.721698 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 01:18:30.721773 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 01:18:30.724370 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 01:18:30.724440 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 01:18:30.726224 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 01:18:30.739243 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 01:18:30.739456 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 01:18:30.746023 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 01:18:30.746355 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 01:18:30.748317 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Apr 30 01:18:30.748362 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 01:18:30.750035 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 01:18:30.750066 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 01:18:30.751593 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 01:18:30.751639 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 01:18:30.753035 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 01:18:30.753081 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 01:18:30.754380 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 01:18:30.754429 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 01:18:30.768598 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 01:18:30.769859 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 01:18:30.770039 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 01:18:30.771514 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 01:18:30.771589 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 01:18:30.778599 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 01:18:30.778737 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 01:18:30.780399 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 01:18:30.784956 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 01:18:30.795032 systemd[1]: Switching root. Apr 30 01:18:30.829816 systemd-journald[236]: Journal stopped Apr 30 01:18:31.737064 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). Apr 30 01:18:31.737133 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 01:18:31.737145 kernel: SELinux: policy capability open_perms=1 Apr 30 01:18:31.737155 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 01:18:31.737164 kernel: SELinux: policy capability always_check_network=0 Apr 30 01:18:31.737174 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 01:18:31.737184 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 01:18:31.737198 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 01:18:31.737211 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 01:18:31.737221 kernel: audit: type=1403 audit(1745975910.957:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 01:18:31.737231 systemd[1]: Successfully loaded SELinux policy in 36.739ms. Apr 30 01:18:31.737255 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.709ms. Apr 30 01:18:31.737267 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 01:18:31.737278 systemd[1]: Detected virtualization kvm. Apr 30 01:18:31.737289 systemd[1]: Detected architecture arm64. Apr 30 01:18:31.737300 systemd[1]: Detected first boot. Apr 30 01:18:31.737311 systemd[1]: Hostname set to . Apr 30 01:18:31.737327 systemd[1]: Initializing machine ID from VM UUID. 
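[annotation] "Initializing machine ID from VM UUID" above refers to seeding /etc/machine-id from the product UUID the hypervisor exposes over DMI. A minimal sketch of that one path (systemd's real logic consults more sources and does stricter validation):

    import re

    with open("/sys/class/dmi/id/product_uuid") as f:
        uuid = f.read().strip().lower()

    # Condense the dashed UUID into the 32-hex-digit machine-id format.
    machine_id = re.sub(r"[^0-9a-f]", "", uuid)
    assert len(machine_id) == 32, "unexpected UUID format"

    with open("/etc/machine-id", "w") as f:
        f.write(machine_id + "\n")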
Apr 30 01:18:31.737337 zram_generator::config[1052]: No configuration found. Apr 30 01:18:31.737351 systemd[1]: Populated /etc with preset unit settings. Apr 30 01:18:31.737362 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 30 01:18:31.737372 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 30 01:18:31.737382 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 30 01:18:31.737393 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 01:18:31.737404 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 01:18:31.737416 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 01:18:31.737430 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 01:18:31.737440 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 01:18:31.737451 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 01:18:31.737461 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 01:18:31.737472 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 01:18:31.737482 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 01:18:31.737494 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 01:18:31.737504 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 01:18:31.737516 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 30 01:18:31.737528 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 30 01:18:31.737538 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 01:18:31.737549 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Apr 30 01:18:31.737561 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 01:18:31.737572 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 30 01:18:31.737585 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 30 01:18:31.737600 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 30 01:18:31.737610 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 01:18:31.737621 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 01:18:31.737632 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 01:18:31.737642 systemd[1]: Reached target slices.target - Slice Units. Apr 30 01:18:31.737657 systemd[1]: Reached target swap.target - Swaps. Apr 30 01:18:31.737667 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 01:18:31.737678 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 30 01:18:31.737690 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 01:18:31.737701 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 01:18:31.739855 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 01:18:31.739879 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Apr 30 01:18:31.739890 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 01:18:31.739901 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 01:18:31.739912 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 01:18:31.739936 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 01:18:31.739948 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 01:18:31.739966 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 01:18:31.739977 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 01:18:31.739988 systemd[1]: Reached target machines.target - Containers. Apr 30 01:18:31.739998 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 01:18:31.740008 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 01:18:31.740022 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 01:18:31.740035 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 01:18:31.740046 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 01:18:31.740056 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 01:18:31.740067 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 01:18:31.740077 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 01:18:31.740088 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 01:18:31.740099 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 01:18:31.740111 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 30 01:18:31.740122 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 30 01:18:31.740133 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 30 01:18:31.740143 systemd[1]: Stopped systemd-fsck-usr.service. Apr 30 01:18:31.740153 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 01:18:31.740164 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 01:18:31.740174 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 01:18:31.740185 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 01:18:31.740195 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 01:18:31.740208 systemd[1]: verity-setup.service: Deactivated successfully. Apr 30 01:18:31.740218 systemd[1]: Stopped verity-setup.service. Apr 30 01:18:31.740229 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 30 01:18:31.740241 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 01:18:31.740251 systemd[1]: Mounted media.mount - External Media Directory. Apr 30 01:18:31.740265 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 01:18:31.740276 kernel: fuse: init (API version 7.39) Apr 30 01:18:31.740287 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Apr 30 01:18:31.740297 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 01:18:31.740308 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 01:18:31.740318 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 01:18:31.740329 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 01:18:31.740339 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 01:18:31.740350 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 01:18:31.740362 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 01:18:31.740372 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 01:18:31.740383 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 01:18:31.740393 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 01:18:31.740405 kernel: loop: module loaded Apr 30 01:18:31.740416 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 01:18:31.740427 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 01:18:31.740437 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 01:18:31.740448 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 01:18:31.740499 systemd-journald[1114]: Collecting audit messages is disabled. Apr 30 01:18:31.740526 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 01:18:31.740537 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 01:18:31.740550 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 01:18:31.740565 systemd-journald[1114]: Journal started Apr 30 01:18:31.740590 systemd-journald[1114]: Runtime Journal (/run/log/journal/a6459c0efd1049bdb5e32947f2679d39) is 8.0M, max 76.6M, 68.6M free. Apr 30 01:18:31.452617 systemd[1]: Queued start job for default target multi-user.target. Apr 30 01:18:31.747907 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 01:18:31.472840 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 30 01:18:31.473324 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 30 01:18:31.744215 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 01:18:31.745124 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 01:18:31.745965 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 01:18:31.747341 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 01:18:31.753639 kernel: ACPI: bus type drm_connector registered Apr 30 01:18:31.760287 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 01:18:31.761855 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 01:18:31.778780 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 01:18:31.782256 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 01:18:31.782291 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 01:18:31.784167 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Apr 30 01:18:31.793157 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 01:18:31.805012 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 01:18:31.806981 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 01:18:31.812948 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 01:18:31.815013 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 01:18:31.816096 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 01:18:31.819287 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 01:18:31.822980 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 01:18:31.826965 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 01:18:31.828033 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 01:18:31.829211 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 01:18:31.840590 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 01:18:31.845435 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 01:18:31.862009 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 01:18:31.873380 systemd-journald[1114]: Time spent on flushing to /var/log/journal/a6459c0efd1049bdb5e32947f2679d39 is 36.618ms for 1129 entries. Apr 30 01:18:31.873380 systemd-journald[1114]: System Journal (/var/log/journal/a6459c0efd1049bdb5e32947f2679d39) is 8.0M, max 584.8M, 576.8M free. Apr 30 01:18:31.925215 systemd-journald[1114]: Received client request to flush runtime journal. Apr 30 01:18:31.925271 kernel: loop0: detected capacity change from 0 to 114328 Apr 30 01:18:31.925294 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 01:18:31.879790 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 01:18:31.880657 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 01:18:31.890055 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 30 01:18:31.914771 udevadm[1175]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 30 01:18:31.930839 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 01:18:31.939131 kernel: loop1: detected capacity change from 0 to 8 Apr 30 01:18:31.942495 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 01:18:31.948845 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 30 01:18:31.958303 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 01:18:31.959748 kernel: loop2: detected capacity change from 0 to 194096 Apr 30 01:18:31.972095 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 01:18:32.012103 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Apr 30 01:18:32.012123 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. 
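[annotation] The journal-flush line above gives enough to work out the per-entry cost: 36.618 ms for 1129 entries is roughly 32 µs per entry. A quick check:

    # Quick check of the journald flush figures logged above.
    elapsed_ms = 36.618
    entries = 1129
    print(f"{elapsed_ms / entries * 1000:.1f} us per entry")  # ~32.4 us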
Apr 30 01:18:32.018822 kernel: loop3: detected capacity change from 0 to 114432 Apr 30 01:18:32.022125 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 01:18:32.062156 kernel: loop4: detected capacity change from 0 to 114328 Apr 30 01:18:32.084740 kernel: loop5: detected capacity change from 0 to 8 Apr 30 01:18:32.087758 kernel: loop6: detected capacity change from 0 to 194096 Apr 30 01:18:32.113748 kernel: loop7: detected capacity change from 0 to 114432 Apr 30 01:18:32.133305 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Apr 30 01:18:32.134651 (sd-merge)[1194]: Merged extensions into '/usr'. Apr 30 01:18:32.143102 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 01:18:32.143127 systemd[1]: Reloading... Apr 30 01:18:32.258731 zram_generator::config[1220]: No configuration found. Apr 30 01:18:32.275902 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 01:18:32.414495 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 01:18:32.462493 systemd[1]: Reloading finished in 318 ms. Apr 30 01:18:32.508593 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 01:18:32.512145 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 01:18:32.524266 systemd[1]: Starting ensure-sysext.service... Apr 30 01:18:32.531964 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 01:18:32.541102 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)... Apr 30 01:18:32.541126 systemd[1]: Reloading... Apr 30 01:18:32.563221 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 01:18:32.563931 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 01:18:32.565851 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 01:18:32.566115 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Apr 30 01:18:32.566161 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Apr 30 01:18:32.569472 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 01:18:32.569615 systemd-tmpfiles[1258]: Skipping /boot Apr 30 01:18:32.578637 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 01:18:32.578812 systemd-tmpfiles[1258]: Skipping /boot Apr 30 01:18:32.620763 zram_generator::config[1285]: No configuration found. Apr 30 01:18:32.717010 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 01:18:32.764905 systemd[1]: Reloading finished in 223 ms. Apr 30 01:18:32.783272 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 01:18:32.785450 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
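[annotation] The (sd-merge) lines above are systemd-sysext stacking each extension's /usr tree over the base /usr with a single read-only overlayfs mount. A minimal sketch of that stacking; the /run/extensions/<name>/usr paths are illustrative, since the real tool first mounts the raw images (the loop devices visible above):

    import subprocess

    extensions = ["containerd-flatcar", "docker-flatcar",
                  "kubernetes", "oem-hetzner"]
    # Lowerdir-only overlay mounts are read-only; the base /usr goes last.
    lower = ":".join(f"/run/extensions/{n}/usr" for n in extensions) + ":/usr"

    subprocess.run(
        ["mount", "-t", "overlay", "overlay",
         "-o", f"lowerdir={lower}", "/usr"],
        check=True,
    )
    print("Merged extensions into '/usr'.")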
Apr 30 01:18:32.813068 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 01:18:32.817953 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 01:18:32.822974 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 01:18:32.833377 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 01:18:32.839671 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 01:18:32.847985 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 01:18:32.862656 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 01:18:32.868071 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 01:18:32.877996 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 01:18:32.886002 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 01:18:32.891763 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 01:18:32.893904 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 01:18:32.900176 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 01:18:32.900463 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 01:18:32.904537 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 01:18:32.904985 systemd-udevd[1336]: Using default interface naming scheme 'v255'. Apr 30 01:18:32.906427 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 01:18:32.906582 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 01:18:32.916404 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 01:18:32.929117 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 01:18:32.936587 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 01:18:32.937405 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 01:18:32.938257 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 01:18:32.944036 systemd[1]: Finished ensure-sysext.service. Apr 30 01:18:32.946034 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 01:18:32.948580 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 01:18:32.949964 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 01:18:32.952216 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 01:18:32.972060 augenrules[1355]: No rules Apr 30 01:18:32.984114 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 01:18:32.989144 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 30 01:18:32.994987 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Apr 30 01:18:32.995836 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 01:18:32.997809 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 01:18:32.999081 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 01:18:32.999251 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 01:18:33.006223 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 01:18:33.006703 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 01:18:33.008501 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 01:18:33.008663 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 01:18:33.017526 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 01:18:33.017615 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 01:18:33.024217 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 01:18:33.044173 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 01:18:33.067840 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Apr 30 01:18:33.177413 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 01:18:33.179737 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 01:18:33.184601 systemd-networkd[1376]: lo: Link UP Apr 30 01:18:33.184615 systemd-networkd[1376]: lo: Gained carrier Apr 30 01:18:33.197469 systemd-networkd[1376]: Enumeration completed Apr 30 01:18:33.197584 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 01:18:33.197785 systemd-timesyncd[1381]: No network connectivity, watching for changes. Apr 30 01:18:33.200646 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 01:18:33.200657 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 01:18:33.204041 systemd-networkd[1376]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 01:18:33.204056 systemd-networkd[1376]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 01:18:33.204649 systemd-networkd[1376]: eth0: Link UP Apr 30 01:18:33.204660 systemd-networkd[1376]: eth0: Gained carrier Apr 30 01:18:33.204675 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 01:18:33.209504 systemd-resolved[1334]: Positive Trust Anchors: Apr 30 01:18:33.209528 systemd-resolved[1334]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 01:18:33.209561 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 01:18:33.212071 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 01:18:33.216106 systemd-networkd[1376]: eth1: Link UP Apr 30 01:18:33.216116 systemd-networkd[1376]: eth1: Gained carrier Apr 30 01:18:33.216136 systemd-networkd[1376]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 01:18:33.217203 systemd-resolved[1334]: Using system hostname 'ci-4081-3-3-a-62378e86a2'. Apr 30 01:18:33.220844 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 01:18:33.223482 systemd[1]: Reached target network.target - Network. Apr 30 01:18:33.225207 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 01:18:33.238265 systemd-networkd[1376]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 01:18:33.250428 systemd-networkd[1376]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 01:18:33.251842 systemd-timesyncd[1381]: Network configuration changed, trying to establish connection. Apr 30 01:18:33.263761 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 01:18:33.266806 systemd-networkd[1376]: eth0: DHCPv4 address 168.119.50.83/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 30 01:18:33.268525 systemd-timesyncd[1381]: Network configuration changed, trying to establish connection. Apr 30 01:18:33.293752 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1370) Apr 30 01:18:33.341834 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Apr 30 01:18:33.342004 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 01:18:33.346972 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 01:18:33.349596 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 01:18:33.354504 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 01:18:33.355187 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 01:18:33.355228 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 01:18:33.356959 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Apr 30 01:18:33.364943 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 01:18:33.366160 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 01:18:33.367758 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 01:18:33.379185 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 01:18:33.379931 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 01:18:33.382011 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 01:18:33.387600 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 01:18:33.387879 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 01:18:33.388701 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 01:18:33.401506 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 01:18:33.419010 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 01:18:33.426772 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Apr 30 01:18:33.426866 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Apr 30 01:18:33.426886 kernel: [drm] features: -context_init Apr 30 01:18:33.427759 kernel: [drm] number of scanouts: 1 Apr 30 01:18:33.427816 kernel: [drm] number of cap sets: 0 Apr 30 01:18:33.431635 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Apr 30 01:18:33.432921 kernel: Console: switching to colour frame buffer device 160x50 Apr 30 01:18:33.439805 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Apr 30 01:18:33.448168 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 01:18:33.449778 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 01:18:33.456007 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 01:18:33.517156 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 01:18:33.570870 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 01:18:33.583990 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 01:18:33.595777 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 01:18:33.624501 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 01:18:33.626864 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 01:18:33.628484 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 01:18:33.629193 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 01:18:33.629996 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 01:18:33.630893 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 01:18:33.631661 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 01:18:33.632383 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Apr 30 01:18:33.633082 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 01:18:33.633114 systemd[1]: Reached target paths.target - Path Units. Apr 30 01:18:33.633573 systemd[1]: Reached target timers.target - Timer Units. Apr 30 01:18:33.636376 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 01:18:33.639015 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 01:18:33.644974 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 01:18:33.647219 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 01:18:33.648415 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 01:18:33.649127 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 01:18:33.649612 systemd[1]: Reached target basic.target - Basic System. Apr 30 01:18:33.650193 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 01:18:33.650229 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 01:18:33.653948 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 01:18:33.658697 lvm[1446]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 01:18:33.663519 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 01:18:33.667003 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 01:18:33.670811 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 01:18:33.676350 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 01:18:33.677390 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 01:18:33.681014 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 01:18:33.686957 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 01:18:33.691965 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Apr 30 01:18:33.696168 jq[1450]: false Apr 30 01:18:33.697955 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 01:18:33.701389 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 01:18:33.720403 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 01:18:33.723863 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 01:18:33.724473 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 01:18:33.725886 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 01:18:33.727825 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 01:18:33.732784 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 01:18:33.735469 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 01:18:33.736790 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Apr 30 01:18:33.749269 coreos-metadata[1448]: Apr 30 01:18:33.748 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Apr 30 01:18:33.753466 extend-filesystems[1453]: Found loop4 Apr 30 01:18:33.754882 extend-filesystems[1453]: Found loop5 Apr 30 01:18:33.754882 extend-filesystems[1453]: Found loop6 Apr 30 01:18:33.754882 extend-filesystems[1453]: Found loop7 Apr 30 01:18:33.754882 extend-filesystems[1453]: Found sda Apr 30 01:18:33.754882 extend-filesystems[1453]: Found sda1 Apr 30 01:18:33.754882 extend-filesystems[1453]: Found sda2 Apr 30 01:18:33.754882 extend-filesystems[1453]: Found sda3 Apr 30 01:18:33.754882 extend-filesystems[1453]: Found usr Apr 30 01:18:33.754882 extend-filesystems[1453]: Found sda4 Apr 30 01:18:33.754882 extend-filesystems[1453]: Found sda6 Apr 30 01:18:33.754882 extend-filesystems[1453]: Found sda7 Apr 30 01:18:33.754882 extend-filesystems[1453]: Found sda9 Apr 30 01:18:33.754882 extend-filesystems[1453]: Checking size of /dev/sda9 Apr 30 01:18:33.755801 dbus-daemon[1449]: [system] SELinux support is enabled Apr 30 01:18:33.763860 coreos-metadata[1448]: Apr 30 01:18:33.757 INFO Fetch successful Apr 30 01:18:33.763860 coreos-metadata[1448]: Apr 30 01:18:33.758 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Apr 30 01:18:33.756561 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 01:18:33.766486 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 01:18:33.766537 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 01:18:33.770021 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 01:18:33.770054 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 01:18:33.772988 coreos-metadata[1448]: Apr 30 01:18:33.771 INFO Fetch successful Apr 30 01:18:33.773091 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 01:18:33.774423 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 01:18:33.780233 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 01:18:33.781787 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 01:18:33.803750 jq[1463]: true Apr 30 01:18:33.818032 extend-filesystems[1453]: Resized partition /dev/sda9 Apr 30 01:18:33.821842 extend-filesystems[1493]: resize2fs 1.47.1 (20-May-2024) Apr 30 01:18:33.825816 tar[1466]: linux-arm64/helm Apr 30 01:18:33.835808 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Apr 30 01:18:33.842236 (ntainerd)[1492]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 01:18:33.852446 jq[1490]: true Apr 30 01:18:33.918721 update_engine[1462]: I20250430 01:18:33.911767 1462 main.cc:92] Flatcar Update Engine starting Apr 30 01:18:33.928489 systemd[1]: Started update-engine.service - Update Engine. Apr 30 01:18:33.933076 update_engine[1462]: I20250430 01:18:33.932602 1462 update_check_scheduler.cc:74] Next update check in 9m54s Apr 30 01:18:33.937202 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Apr 30 01:18:33.963804 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1388) Apr 30 01:18:33.991610 systemd-logind[1460]: New seat seat0. Apr 30 01:18:33.994400 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 01:18:33.996667 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 01:18:34.009012 systemd-logind[1460]: Watching system buttons on /dev/input/event0 (Power Button) Apr 30 01:18:34.009039 systemd-logind[1460]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Apr 30 01:18:34.009544 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 01:18:34.030126 bash[1518]: Updated "/home/core/.ssh/authorized_keys" Apr 30 01:18:34.035362 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 01:18:34.049467 systemd[1]: Starting sshkeys.service... Apr 30 01:18:34.055815 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Apr 30 01:18:34.082218 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 30 01:18:34.100121 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 30 01:18:34.117604 extend-filesystems[1493]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 30 01:18:34.117604 extend-filesystems[1493]: old_desc_blocks = 1, new_desc_blocks = 5 Apr 30 01:18:34.117604 extend-filesystems[1493]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Apr 30 01:18:34.120353 extend-filesystems[1453]: Resized filesystem in /dev/sda9 Apr 30 01:18:34.120353 extend-filesystems[1453]: Found sr0 Apr 30 01:18:34.122661 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 01:18:34.122845 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 01:18:34.171855 coreos-metadata[1526]: Apr 30 01:18:34.171 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Apr 30 01:18:34.174854 coreos-metadata[1526]: Apr 30 01:18:34.173 INFO Fetch successful Apr 30 01:18:34.178059 unknown[1526]: wrote ssh authorized keys file for user: core Apr 30 01:18:34.212061 update-ssh-keys[1534]: Updated "/home/core/.ssh/authorized_keys" Apr 30 01:18:34.213277 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 30 01:18:34.219149 systemd[1]: Finished sshkeys.service. Apr 30 01:18:34.231375 containerd[1492]: time="2025-04-30T01:18:34.230602160Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 01:18:34.283839 locksmithd[1509]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 01:18:34.299235 containerd[1492]: time="2025-04-30T01:18:34.299146640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 01:18:34.302338 containerd[1492]: time="2025-04-30T01:18:34.302262720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 01:18:34.302338 containerd[1492]: time="2025-04-30T01:18:34.302337600Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Apr 30 01:18:34.302492 containerd[1492]: time="2025-04-30T01:18:34.302360360Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 01:18:34.302569 containerd[1492]: time="2025-04-30T01:18:34.302543280Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 01:18:34.302601 containerd[1492]: time="2025-04-30T01:18:34.302570640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 01:18:34.303717 containerd[1492]: time="2025-04-30T01:18:34.302634360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 01:18:34.303717 containerd[1492]: time="2025-04-30T01:18:34.302655320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 01:18:34.303717 containerd[1492]: time="2025-04-30T01:18:34.302852800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 01:18:34.303717 containerd[1492]: time="2025-04-30T01:18:34.302869840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 01:18:34.303717 containerd[1492]: time="2025-04-30T01:18:34.302882720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 01:18:34.303717 containerd[1492]: time="2025-04-30T01:18:34.302894560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 01:18:34.303717 containerd[1492]: time="2025-04-30T01:18:34.302999120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 01:18:34.303717 containerd[1492]: time="2025-04-30T01:18:34.303232800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 01:18:34.303717 containerd[1492]: time="2025-04-30T01:18:34.303333080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 01:18:34.303717 containerd[1492]: time="2025-04-30T01:18:34.303347960Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 01:18:34.303717 containerd[1492]: time="2025-04-30T01:18:34.303425320Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 01:18:34.303971 containerd[1492]: time="2025-04-30T01:18:34.303465520Z" level=info msg="metadata content store policy set" policy=shared Apr 30 01:18:34.310224 containerd[1492]: time="2025-04-30T01:18:34.310167680Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 01:18:34.310316 containerd[1492]: time="2025-04-30T01:18:34.310253080Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Apr 30 01:18:34.310316 containerd[1492]: time="2025-04-30T01:18:34.310272360Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 01:18:34.310704 containerd[1492]: time="2025-04-30T01:18:34.310295480Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 01:18:34.310750 containerd[1492]: time="2025-04-30T01:18:34.310739600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 01:18:34.310970 containerd[1492]: time="2025-04-30T01:18:34.310948320Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 01:18:34.311848 containerd[1492]: time="2025-04-30T01:18:34.311824760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 01:18:34.312061 containerd[1492]: time="2025-04-30T01:18:34.312037400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 01:18:34.312087 containerd[1492]: time="2025-04-30T01:18:34.312064440Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 01:18:34.312087 containerd[1492]: time="2025-04-30T01:18:34.312080400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 01:18:34.312129 containerd[1492]: time="2025-04-30T01:18:34.312095880Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 01:18:34.312129 containerd[1492]: time="2025-04-30T01:18:34.312110680Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 01:18:34.312129 containerd[1492]: time="2025-04-30T01:18:34.312124320Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 01:18:34.312178 containerd[1492]: time="2025-04-30T01:18:34.312141000Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 01:18:34.312178 containerd[1492]: time="2025-04-30T01:18:34.312165880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 01:18:34.312211 containerd[1492]: time="2025-04-30T01:18:34.312180200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 01:18:34.312211 containerd[1492]: time="2025-04-30T01:18:34.312193480Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 01:18:34.312211 containerd[1492]: time="2025-04-30T01:18:34.312205480Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 01:18:34.312262 containerd[1492]: time="2025-04-30T01:18:34.312250360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 01:18:34.312283 containerd[1492]: time="2025-04-30T01:18:34.312269000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 01:18:34.312304 containerd[1492]: time="2025-04-30T01:18:34.312283280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Apr 30 01:18:34.312304 containerd[1492]: time="2025-04-30T01:18:34.312297120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 01:18:34.312346 containerd[1492]: time="2025-04-30T01:18:34.312309040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 01:18:34.312346 containerd[1492]: time="2025-04-30T01:18:34.312322520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 01:18:34.312346 containerd[1492]: time="2025-04-30T01:18:34.312335440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 01:18:34.312395 containerd[1492]: time="2025-04-30T01:18:34.312348800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 01:18:34.312395 containerd[1492]: time="2025-04-30T01:18:34.312363200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 01:18:34.312395 containerd[1492]: time="2025-04-30T01:18:34.312379200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 01:18:34.312395 containerd[1492]: time="2025-04-30T01:18:34.312390440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 01:18:34.312467 containerd[1492]: time="2025-04-30T01:18:34.312401480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 01:18:34.312467 containerd[1492]: time="2025-04-30T01:18:34.312413280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 01:18:34.312467 containerd[1492]: time="2025-04-30T01:18:34.312435520Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 01:18:34.312467 containerd[1492]: time="2025-04-30T01:18:34.312464440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 01:18:34.312537 containerd[1492]: time="2025-04-30T01:18:34.312477960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 01:18:34.312537 containerd[1492]: time="2025-04-30T01:18:34.312490680Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 01:18:34.315714 containerd[1492]: time="2025-04-30T01:18:34.313474520Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 01:18:34.315714 containerd[1492]: time="2025-04-30T01:18:34.313991800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 01:18:34.315714 containerd[1492]: time="2025-04-30T01:18:34.314011960Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 01:18:34.315714 containerd[1492]: time="2025-04-30T01:18:34.314032840Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 01:18:34.315714 containerd[1492]: time="2025-04-30T01:18:34.314043400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Apr 30 01:18:34.315714 containerd[1492]: time="2025-04-30T01:18:34.314060800Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 01:18:34.315714 containerd[1492]: time="2025-04-30T01:18:34.314071080Z" level=info msg="NRI interface is disabled by configuration." Apr 30 01:18:34.315714 containerd[1492]: time="2025-04-30T01:18:34.314083640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 01:18:34.315892 containerd[1492]: time="2025-04-30T01:18:34.314433800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 01:18:34.315892 containerd[1492]: time="2025-04-30T01:18:34.314490840Z" level=info msg="Connect containerd service" Apr 30 01:18:34.315892 containerd[1492]: time="2025-04-30T01:18:34.314525480Z" level=info msg="using legacy CRI server" Apr 30 01:18:34.315892 containerd[1492]: time="2025-04-30T01:18:34.314532320Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 01:18:34.315892 containerd[1492]: 
time="2025-04-30T01:18:34.314642280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 01:18:34.317066 containerd[1492]: time="2025-04-30T01:18:34.316686360Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 01:18:34.317606 containerd[1492]: time="2025-04-30T01:18:34.317558880Z" level=info msg="Start subscribing containerd event" Apr 30 01:18:34.317639 containerd[1492]: time="2025-04-30T01:18:34.317625560Z" level=info msg="Start recovering state" Apr 30 01:18:34.317733 containerd[1492]: time="2025-04-30T01:18:34.317702960Z" level=info msg="Start event monitor" Apr 30 01:18:34.317758 containerd[1492]: time="2025-04-30T01:18:34.317733240Z" level=info msg="Start snapshots syncer" Apr 30 01:18:34.317758 containerd[1492]: time="2025-04-30T01:18:34.317747240Z" level=info msg="Start cni network conf syncer for default" Apr 30 01:18:34.317758 containerd[1492]: time="2025-04-30T01:18:34.317755040Z" level=info msg="Start streaming server" Apr 30 01:18:34.319833 containerd[1492]: time="2025-04-30T01:18:34.319809920Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 01:18:34.319893 containerd[1492]: time="2025-04-30T01:18:34.319878320Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 01:18:34.321887 containerd[1492]: time="2025-04-30T01:18:34.321247880Z" level=info msg="containerd successfully booted in 0.092987s" Apr 30 01:18:34.321350 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 01:18:34.564109 tar[1466]: linux-arm64/LICENSE Apr 30 01:18:34.565353 tar[1466]: linux-arm64/README.md Apr 30 01:18:34.578324 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 01:18:34.950881 systemd-networkd[1376]: eth0: Gained IPv6LL Apr 30 01:18:34.953597 systemd-timesyncd[1381]: Network configuration changed, trying to establish connection. Apr 30 01:18:34.960215 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 01:18:34.963414 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 01:18:34.972649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:18:34.983516 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 01:18:35.019051 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 01:18:35.078872 systemd-networkd[1376]: eth1: Gained IPv6LL Apr 30 01:18:35.080529 systemd-timesyncd[1381]: Network configuration changed, trying to establish connection. Apr 30 01:18:35.177729 sshd_keygen[1484]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 01:18:35.205826 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 01:18:35.216464 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 01:18:35.227761 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 01:18:35.228562 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 01:18:35.235147 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 01:18:35.250079 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 01:18:35.261401 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Apr 30 01:18:35.269298 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Apr 30 01:18:35.270382 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 01:18:35.717066 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:18:35.717179 (kubelet)[1580]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 01:18:35.719377 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 01:18:35.723825 systemd[1]: Startup finished in 788ms (kernel) + 5.261s (initrd) + 4.801s (userspace) = 10.852s. Apr 30 01:18:36.320789 kubelet[1580]: E0430 01:18:36.320734 1580 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 01:18:36.326292 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 01:18:36.326579 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 01:18:46.427889 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 01:18:46.438098 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:18:46.571038 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:18:46.583476 (kubelet)[1600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 01:18:46.641169 kubelet[1600]: E0430 01:18:46.641107 1600 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 01:18:46.644815 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 01:18:46.645129 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 01:18:56.677520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 01:18:56.684071 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:18:56.802112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:18:56.813136 (kubelet)[1617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 01:18:56.866749 kubelet[1617]: E0430 01:18:56.866668 1617 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 01:18:56.869376 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 01:18:56.869548 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 01:19:05.238291 systemd-timesyncd[1381]: Contacted time server 185.252.140.126:123 (2.flatcar.pool.ntp.org). Apr 30 01:19:05.238469 systemd-timesyncd[1381]: Initial clock synchronization to Wed 2025-04-30 01:19:05.457078 UTC. 
Apr 30 01:19:06.930660 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 30 01:19:06.942002 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:19:07.059890 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:19:07.074461 (kubelet)[1633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 01:19:07.131150 kubelet[1633]: E0430 01:19:07.131093 1633 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 01:19:07.134524 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 01:19:07.134787 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 01:19:17.177920 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 30 01:19:17.191842 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:19:17.297636 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:19:17.305051 (kubelet)[1649]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 01:19:17.362496 kubelet[1649]: E0430 01:19:17.362434 1649 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 01:19:17.365579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 01:19:17.366082 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 01:19:18.885939 update_engine[1462]: I20250430 01:19:18.885761 1462 update_attempter.cc:509] Updating boot flags... Apr 30 01:19:18.932893 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1666) Apr 30 01:19:19.002777 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1667) Apr 30 01:19:27.427495 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 30 01:19:27.436079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:19:27.552943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:19:27.558462 (kubelet)[1683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 01:19:27.602515 kubelet[1683]: E0430 01:19:27.602446 1683 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 01:19:27.605954 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 01:19:27.606127 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 30 01:19:37.678055 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 30 01:19:37.693460 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:19:37.801040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:19:37.817290 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 01:19:37.867426 kubelet[1699]: E0430 01:19:37.867382 1699 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 01:19:37.870527 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 01:19:37.870680 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 01:19:47.927946 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Apr 30 01:19:47.939174 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:19:48.067144 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:19:48.083369 (kubelet)[1715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 01:19:48.131986 kubelet[1715]: E0430 01:19:48.131922 1715 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 01:19:48.134982 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 01:19:48.135342 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 01:19:58.177795 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Apr 30 01:19:58.184047 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:19:58.310208 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:19:58.323835 (kubelet)[1731]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 01:19:58.370775 kubelet[1731]: E0430 01:19:58.370691 1731 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 01:19:58.373450 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 01:19:58.373673 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 01:20:08.427405 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Apr 30 01:20:08.434064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:20:08.548917 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 01:20:08.554559 (kubelet)[1747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 01:20:08.611534 kubelet[1747]: E0430 01:20:08.611468 1747 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 01:20:08.614735 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 01:20:08.614943 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 01:20:15.607687 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 01:20:15.610208 systemd[1]: Started sshd@0-168.119.50.83:22-139.178.68.195:51278.service - OpenSSH per-connection server daemon (139.178.68.195:51278). Apr 30 01:20:16.616559 sshd[1756]: Accepted publickey for core from 139.178.68.195 port 51278 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs Apr 30 01:20:16.619989 sshd[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:20:16.630896 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 01:20:16.637100 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 01:20:16.639887 systemd-logind[1460]: New session 1 of user core. Apr 30 01:20:16.649383 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 01:20:16.656058 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 01:20:16.665832 (systemd)[1760]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 01:20:16.775393 systemd[1760]: Queued start job for default target default.target. Apr 30 01:20:16.785291 systemd[1760]: Created slice app.slice - User Application Slice. Apr 30 01:20:16.785616 systemd[1760]: Reached target paths.target - Paths. Apr 30 01:20:16.785825 systemd[1760]: Reached target timers.target - Timers. Apr 30 01:20:16.787886 systemd[1760]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 01:20:16.814405 systemd[1760]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 01:20:16.814627 systemd[1760]: Reached target sockets.target - Sockets. Apr 30 01:20:16.814654 systemd[1760]: Reached target basic.target - Basic System. Apr 30 01:20:16.814946 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 01:20:16.815246 systemd[1760]: Reached target default.target - Main User Target. Apr 30 01:20:16.815426 systemd[1760]: Startup finished in 141ms. Apr 30 01:20:16.824026 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 01:20:17.520793 systemd[1]: Started sshd@1-168.119.50.83:22-139.178.68.195:51294.service - OpenSSH per-connection server daemon (139.178.68.195:51294). Apr 30 01:20:18.493439 sshd[1771]: Accepted publickey for core from 139.178.68.195 port 51294 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs Apr 30 01:20:18.495818 sshd[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:20:18.501430 systemd-logind[1460]: New session 2 of user core. Apr 30 01:20:18.513046 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 01:20:18.677565 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. 
Apr 30 01:20:18.689056 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:20:18.829876 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:20:18.835151 (kubelet)[1782]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 01:20:18.882496 kubelet[1782]: E0430 01:20:18.882445 1782 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 01:20:18.885506 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 01:20:18.885853 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 01:20:19.173370 sshd[1771]: pam_unix(sshd:session): session closed for user core Apr 30 01:20:19.178904 systemd[1]: sshd@1-168.119.50.83:22-139.178.68.195:51294.service: Deactivated successfully. Apr 30 01:20:19.180659 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 01:20:19.181665 systemd-logind[1460]: Session 2 logged out. Waiting for processes to exit. Apr 30 01:20:19.183207 systemd-logind[1460]: Removed session 2. Apr 30 01:20:19.347223 systemd[1]: Started sshd@2-168.119.50.83:22-139.178.68.195:51304.service - OpenSSH per-connection server daemon (139.178.68.195:51304). Apr 30 01:20:20.320467 sshd[1794]: Accepted publickey for core from 139.178.68.195 port 51304 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs Apr 30 01:20:20.324622 sshd[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:20:20.331665 systemd-logind[1460]: New session 3 of user core. Apr 30 01:20:20.339696 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 01:20:20.999025 sshd[1794]: pam_unix(sshd:session): session closed for user core Apr 30 01:20:21.005221 systemd-logind[1460]: Session 3 logged out. Waiting for processes to exit. Apr 30 01:20:21.005942 systemd[1]: sshd@2-168.119.50.83:22-139.178.68.195:51304.service: Deactivated successfully. Apr 30 01:20:21.008656 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 01:20:21.011207 systemd-logind[1460]: Removed session 3. Apr 30 01:20:21.175172 systemd[1]: Started sshd@3-168.119.50.83:22-139.178.68.195:51306.service - OpenSSH per-connection server daemon (139.178.68.195:51306). Apr 30 01:20:22.178426 sshd[1801]: Accepted publickey for core from 139.178.68.195 port 51306 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs Apr 30 01:20:22.181008 sshd[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:20:22.187224 systemd-logind[1460]: New session 4 of user core. Apr 30 01:20:22.196066 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 01:20:22.870362 sshd[1801]: pam_unix(sshd:session): session closed for user core Apr 30 01:20:22.877644 systemd[1]: sshd@3-168.119.50.83:22-139.178.68.195:51306.service: Deactivated successfully. Apr 30 01:20:22.880567 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 01:20:22.882628 systemd-logind[1460]: Session 4 logged out. Waiting for processes to exit. Apr 30 01:20:22.884380 systemd-logind[1460]: Removed session 4. 
Apr 30 01:20:23.041274 systemd[1]: Started sshd@4-168.119.50.83:22-139.178.68.195:51316.service - OpenSSH per-connection server daemon (139.178.68.195:51316). Apr 30 01:20:24.010417 sshd[1808]: Accepted publickey for core from 139.178.68.195 port 51316 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs Apr 30 01:20:24.012350 sshd[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:20:24.018235 systemd-logind[1460]: New session 5 of user core. Apr 30 01:20:24.028216 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 01:20:24.539463 sudo[1811]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 01:20:24.539834 sudo[1811]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 01:20:24.556876 sudo[1811]: pam_unix(sudo:session): session closed for user root Apr 30 01:20:24.716917 sshd[1808]: pam_unix(sshd:session): session closed for user core Apr 30 01:20:24.723658 systemd-logind[1460]: Session 5 logged out. Waiting for processes to exit. Apr 30 01:20:24.724216 systemd[1]: sshd@4-168.119.50.83:22-139.178.68.195:51316.service: Deactivated successfully. Apr 30 01:20:24.727585 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 01:20:24.728758 systemd-logind[1460]: Removed session 5. Apr 30 01:20:24.888460 systemd[1]: Started sshd@5-168.119.50.83:22-139.178.68.195:51326.service - OpenSSH per-connection server daemon (139.178.68.195:51326). Apr 30 01:20:25.870045 sshd[1816]: Accepted publickey for core from 139.178.68.195 port 51326 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs Apr 30 01:20:25.872807 sshd[1816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:20:25.878769 systemd-logind[1460]: New session 6 of user core. Apr 30 01:20:25.886023 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 01:20:26.395925 sudo[1820]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 01:20:26.396241 sudo[1820]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 01:20:26.400302 sudo[1820]: pam_unix(sudo:session): session closed for user root Apr 30 01:20:26.406377 sudo[1819]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 30 01:20:26.406696 sudo[1819]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 01:20:26.430229 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 30 01:20:26.432782 auditctl[1823]: No rules Apr 30 01:20:26.433968 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 01:20:26.434270 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 30 01:20:26.438073 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 01:20:26.475877 augenrules[1841]: No rules Apr 30 01:20:26.478550 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 01:20:26.480484 sudo[1819]: pam_unix(sudo:session): session closed for user root Apr 30 01:20:26.640249 sshd[1816]: pam_unix(sshd:session): session closed for user core Apr 30 01:20:26.646227 systemd-logind[1460]: Session 6 logged out. Waiting for processes to exit. Apr 30 01:20:26.646671 systemd[1]: sshd@5-168.119.50.83:22-139.178.68.195:51326.service: Deactivated successfully. Apr 30 01:20:26.649031 systemd[1]: session-6.scope: Deactivated successfully. 
Apr 30 01:20:26.650867 systemd-logind[1460]: Removed session 6. Apr 30 01:20:26.818155 systemd[1]: Started sshd@6-168.119.50.83:22-139.178.68.195:36396.service - OpenSSH per-connection server daemon (139.178.68.195:36396). Apr 30 01:20:27.794844 sshd[1849]: Accepted publickey for core from 139.178.68.195 port 36396 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs Apr 30 01:20:27.796879 sshd[1849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:20:27.801595 systemd-logind[1460]: New session 7 of user core. Apr 30 01:20:27.813066 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 01:20:28.312459 sudo[1852]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 01:20:28.313216 sudo[1852]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 01:20:28.625166 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 01:20:28.625370 (dockerd)[1867]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 01:20:28.885646 dockerd[1867]: time="2025-04-30T01:20:28.885053744Z" level=info msg="Starting up" Apr 30 01:20:28.927830 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Apr 30 01:20:28.936131 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:20:29.009757 dockerd[1867]: time="2025-04-30T01:20:29.009520130Z" level=info msg="Loading containers: start." Apr 30 01:20:29.091430 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:20:29.096081 (kubelet)[1923]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 01:20:29.146917 kernel: Initializing XFRM netlink socket Apr 30 01:20:29.163298 kubelet[1923]: E0430 01:20:29.163253 1923 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 01:20:29.167443 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 01:20:29.167626 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 01:20:29.232620 systemd-networkd[1376]: docker0: Link UP Apr 30 01:20:29.247458 dockerd[1867]: time="2025-04-30T01:20:29.246957974Z" level=info msg="Loading containers: done." Apr 30 01:20:29.263092 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1223653624-merged.mount: Deactivated successfully. 
Apr 30 01:20:29.265688 dockerd[1867]: time="2025-04-30T01:20:29.265632253Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 01:20:29.265846 dockerd[1867]: time="2025-04-30T01:20:29.265769302Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 30 01:20:29.265910 dockerd[1867]: time="2025-04-30T01:20:29.265894670Z" level=info msg="Daemon has completed initialization" Apr 30 01:20:29.301735 dockerd[1867]: time="2025-04-30T01:20:29.301516157Z" level=info msg="API listen on /run/docker.sock" Apr 30 01:20:29.302111 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 01:20:30.485009 containerd[1492]: time="2025-04-30T01:20:30.484856425Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" Apr 30 01:20:31.280322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3455196987.mount: Deactivated successfully. Apr 30 01:20:33.325752 containerd[1492]: time="2025-04-30T01:20:33.325579233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:20:33.327441 containerd[1492]: time="2025-04-30T01:20:33.327340380Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794242" Apr 30 01:20:33.328800 containerd[1492]: time="2025-04-30T01:20:33.328755226Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:20:33.334296 containerd[1492]: time="2025-04-30T01:20:33.334232039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:20:33.336032 containerd[1492]: time="2025-04-30T01:20:33.335520958Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.850605569s" Apr 30 01:20:33.336032 containerd[1492]: time="2025-04-30T01:20:33.335568641Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" Apr 30 01:20:33.358001 containerd[1492]: time="2025-04-30T01:20:33.357955003Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" Apr 30 01:20:36.813979 containerd[1492]: time="2025-04-30T01:20:36.812870637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:20:36.815432 containerd[1492]: time="2025-04-30T01:20:36.815400826Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855570" Apr 30 01:20:36.816897 containerd[1492]: time="2025-04-30T01:20:36.816864152Z" level=info msg="ImageCreate event 
name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:20:36.820137 containerd[1492]: time="2025-04-30T01:20:36.820109343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:20:36.826191 containerd[1492]: time="2025-04-30T01:20:36.826140218Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 3.468142053s" Apr 30 01:20:36.826557 containerd[1492]: time="2025-04-30T01:20:36.826537002Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" Apr 30 01:20:36.851832 containerd[1492]: time="2025-04-30T01:20:36.851777247Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" Apr 30 01:20:38.792639 containerd[1492]: time="2025-04-30T01:20:38.791033866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:20:38.792639 containerd[1492]: time="2025-04-30T01:20:38.792229535Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263965" Apr 30 01:20:38.793257 containerd[1492]: time="2025-04-30T01:20:38.793223192Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:20:38.797778 containerd[1492]: time="2025-04-30T01:20:38.797702731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:20:38.799247 containerd[1492]: time="2025-04-30T01:20:38.799174336Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.947328885s" Apr 30 01:20:38.799247 containerd[1492]: time="2025-04-30T01:20:38.799243620Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" Apr 30 01:20:38.825664 containerd[1492]: time="2025-04-30T01:20:38.825583460Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" Apr 30 01:20:39.177675 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Apr 30 01:20:39.186076 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:20:39.309856 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 01:20:39.314371 (kubelet)[2112]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 01:20:39.365839 kubelet[2112]: E0430 01:20:39.365788 2112 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 01:20:39.369824 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 01:20:39.370322 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 01:20:40.236212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1136690954.mount: Deactivated successfully. Apr 30 01:20:40.584268 containerd[1492]: time="2025-04-30T01:20:40.582773982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:20:40.588924 containerd[1492]: time="2025-04-30T01:20:40.588872568Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775731" Apr 30 01:20:40.603532 containerd[1492]: time="2025-04-30T01:20:40.603465476Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:20:40.608138 containerd[1492]: time="2025-04-30T01:20:40.608086138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:20:40.608703 containerd[1492]: time="2025-04-30T01:20:40.608664690Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.783018347s" Apr 30 01:20:40.608703 containerd[1492]: time="2025-04-30T01:20:40.608702453Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" Apr 30 01:20:40.630910 containerd[1492]: time="2025-04-30T01:20:40.630861989Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 30 01:20:41.249542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount178995317.mount: Deactivated successfully. 
Apr 30 01:20:42.594475 containerd[1492]: time="2025-04-30T01:20:42.594418617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:20:42.596082 containerd[1492]: time="2025-04-30T01:20:42.596006735Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Apr 30 01:20:42.596971 containerd[1492]: time="2025-04-30T01:20:42.596907129Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:20:42.600822 containerd[1492]: time="2025-04-30T01:20:42.600759611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:20:42.602108 containerd[1492]: time="2025-04-30T01:20:42.601934071Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.971020639s" Apr 30 01:20:42.602108 containerd[1492]: time="2025-04-30T01:20:42.601982108Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Apr 30 01:20:42.622866 containerd[1492]: time="2025-04-30T01:20:42.622824918Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 30 01:20:43.208590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2677579429.mount: Deactivated successfully. 
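[Annotation] Each "Pulled image" record above pairs a byte count with a wall-clock duration, so effective registry throughput can be read off directly; the coredns pull, for instance, works out to 16482581 B / 1.971 s ≈ 8.4 MB/s. A small illustrative parser for lines shaped like these containerd records (the line format is assumed from this log; Python rather than the Go containerd itself uses):

```python
import re

# Size and duration appear in these records as: ... size \"16482581\" in 1.971020639s ...
PULLED = re.compile(r'size \\"(?P<nbytes>\d+)\\" in (?P<dur>[0-9.]+)(?P<unit>ms|s)')

def pull_rate_mb_s(line: str):
    """Return effective throughput in MB/s for a 'Pulled image' line, else None."""
    m = PULLED.search(line)
    if m is None:
        return None
    seconds = float(m["dur"]) / 1000.0 if m["unit"] == "ms" else float(m["dur"])
    return int(m["nbytes"]) / seconds / 1e6

sample = r'msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" ... size \"16482581\" in 1.971020639s"'
print(f"{pull_rate_mb_s(sample):.1f} MB/s")  # -> 8.4 MB/s
```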
Apr 30 01:20:43.216206 containerd[1492]: time="2025-04-30T01:20:43.216106064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:20:43.217493 containerd[1492]: time="2025-04-30T01:20:43.217420440Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841" Apr 30 01:20:43.218528 containerd[1492]: time="2025-04-30T01:20:43.218452630Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:20:43.221055 containerd[1492]: time="2025-04-30T01:20:43.220924590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:20:43.221921 containerd[1492]: time="2025-04-30T01:20:43.221791468Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 598.926991ms" Apr 30 01:20:43.221921 containerd[1492]: time="2025-04-30T01:20:43.221823626Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Apr 30 01:20:43.244137 containerd[1492]: time="2025-04-30T01:20:43.244083706Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Apr 30 01:20:43.832200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2863616788.mount: Deactivated successfully. Apr 30 01:20:49.427116 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Apr 30 01:20:49.437931 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:20:49.567184 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:20:49.576335 (kubelet)[2241]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 01:20:49.624976 kubelet[2241]: E0430 01:20:49.624935 2241 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 01:20:49.628848 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 01:20:49.629007 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 30 01:20:50.024613 containerd[1492]: time="2025-04-30T01:20:50.024553038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:20:50.025934 containerd[1492]: time="2025-04-30T01:20:50.025899476Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552" Apr 30 01:20:50.026919 containerd[1492]: time="2025-04-30T01:20:50.026828047Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:20:50.030640 containerd[1492]: time="2025-04-30T01:20:50.030575530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:20:50.032035 containerd[1492]: time="2025-04-30T01:20:50.031994846Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 6.787864862s" Apr 30 01:20:50.032259 containerd[1492]: time="2025-04-30T01:20:50.032149601Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Apr 30 01:20:54.150770 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:20:54.159218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:20:54.190770 systemd[1]: Reloading requested from client PID 2314 ('systemctl') (unit session-7.scope)... Apr 30 01:20:54.190929 systemd[1]: Reloading... Apr 30 01:20:54.312739 zram_generator::config[2352]: No configuration found. Apr 30 01:20:54.437312 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 01:20:54.510235 systemd[1]: Reloading finished in 318 ms. Apr 30 01:20:54.568113 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:20:54.579998 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:20:54.580998 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 01:20:54.581427 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:20:54.591303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:20:54.714491 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:20:54.727233 (kubelet)[2405]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 01:20:54.780111 kubelet[2405]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 01:20:54.780111 kubelet[2405]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Apr 30 01:20:54.780111 kubelet[2405]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 01:20:54.780474 kubelet[2405]: I0430 01:20:54.780155 2405 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 01:20:55.277141 kubelet[2405]: I0430 01:20:55.276678 2405 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 01:20:55.277141 kubelet[2405]: I0430 01:20:55.276736 2405 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 01:20:55.277141 kubelet[2405]: I0430 01:20:55.276971 2405 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 01:20:55.304940 kubelet[2405]: E0430 01:20:55.304872 2405 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://168.119.50.83:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 168.119.50.83:6443: connect: connection refused Apr 30 01:20:55.305320 kubelet[2405]: I0430 01:20:55.305196 2405 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 01:20:55.314186 kubelet[2405]: I0430 01:20:55.314160 2405 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 01:20:55.316761 kubelet[2405]: I0430 01:20:55.315923 2405 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 01:20:55.316761 kubelet[2405]: I0430 01:20:55.315990 2405 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-a-62378e86a2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 01:20:55.316761 kubelet[2405]: I0430 01:20:55.316276 2405 topology_manager.go:138] "Creating 
topology manager with none policy" Apr 30 01:20:55.316761 kubelet[2405]: I0430 01:20:55.316288 2405 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 01:20:55.317010 kubelet[2405]: I0430 01:20:55.316543 2405 state_mem.go:36] "Initialized new in-memory state store" Apr 30 01:20:55.319687 kubelet[2405]: I0430 01:20:55.318186 2405 kubelet.go:400] "Attempting to sync node with API server" Apr 30 01:20:55.319687 kubelet[2405]: I0430 01:20:55.318217 2405 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 01:20:55.319687 kubelet[2405]: I0430 01:20:55.318369 2405 kubelet.go:312] "Adding apiserver pod source" Apr 30 01:20:55.319687 kubelet[2405]: I0430 01:20:55.318455 2405 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 01:20:55.320388 kubelet[2405]: I0430 01:20:55.320346 2405 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 01:20:55.320936 kubelet[2405]: I0430 01:20:55.320900 2405 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 01:20:55.321104 kubelet[2405]: W0430 01:20:55.321082 2405 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 30 01:20:55.322064 kubelet[2405]: I0430 01:20:55.322032 2405 server.go:1264] "Started kubelet" Apr 30 01:20:55.322241 kubelet[2405]: W0430 01:20:55.322190 2405 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://168.119.50.83:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 168.119.50.83:6443: connect: connection refused Apr 30 01:20:55.322309 kubelet[2405]: E0430 01:20:55.322292 2405 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://168.119.50.83:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 168.119.50.83:6443: connect: connection refused Apr 30 01:20:55.322397 kubelet[2405]: W0430 01:20:55.322365 2405 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://168.119.50.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-a-62378e86a2&limit=500&resourceVersion=0": dial tcp 168.119.50.83:6443: connect: connection refused Apr 30 01:20:55.322397 kubelet[2405]: E0430 01:20:55.322397 2405 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://168.119.50.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-a-62378e86a2&limit=500&resourceVersion=0": dial tcp 168.119.50.83:6443: connect: connection refused Apr 30 01:20:55.328631 kubelet[2405]: I0430 01:20:55.328588 2405 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 01:20:55.330800 kubelet[2405]: I0430 01:20:55.330773 2405 server.go:455] "Adding debug handlers to kubelet server" Apr 30 01:20:55.333215 kubelet[2405]: I0430 01:20:55.333128 2405 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 01:20:55.333512 kubelet[2405]: I0430 01:20:55.333452 2405 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 01:20:55.333795 kubelet[2405]: E0430 01:20:55.333574 2405 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://168.119.50.83:6443/api/v1/namespaces/default/events\": dial 
tcp 168.119.50.83:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-a-62378e86a2.183af3f8c3ce5de3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-a-62378e86a2,UID:ci-4081-3-3-a-62378e86a2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-a-62378e86a2,},FirstTimestamp:2025-04-30 01:20:55.322009059 +0000 UTC m=+0.590429708,LastTimestamp:2025-04-30 01:20:55.322009059 +0000 UTC m=+0.590429708,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-a-62378e86a2,}" Apr 30 01:20:55.339671 kubelet[2405]: I0430 01:20:55.339475 2405 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 01:20:55.343411 kubelet[2405]: E0430 01:20:55.343376 2405 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-3-a-62378e86a2\" not found" Apr 30 01:20:55.343688 kubelet[2405]: I0430 01:20:55.343667 2405 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 01:20:55.343869 kubelet[2405]: I0430 01:20:55.343849 2405 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 01:20:55.344011 kubelet[2405]: I0430 01:20:55.343991 2405 reconciler.go:26] "Reconciler: start to sync state" Apr 30 01:20:55.345542 kubelet[2405]: W0430 01:20:55.345457 2405 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://168.119.50.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 168.119.50.83:6443: connect: connection refused Apr 30 01:20:55.345542 kubelet[2405]: E0430 01:20:55.345538 2405 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://168.119.50.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 168.119.50.83:6443: connect: connection refused Apr 30 01:20:55.346247 kubelet[2405]: E0430 01:20:55.346213 2405 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 01:20:55.346758 kubelet[2405]: E0430 01:20:55.346683 2405 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.50.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-a-62378e86a2?timeout=10s\": dial tcp 168.119.50.83:6443: connect: connection refused" interval="200ms" Apr 30 01:20:55.347605 kubelet[2405]: I0430 01:20:55.347489 2405 factory.go:221] Registration of the systemd container factory successfully Apr 30 01:20:55.347776 kubelet[2405]: I0430 01:20:55.347732 2405 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 01:20:55.350197 kubelet[2405]: I0430 01:20:55.350160 2405 factory.go:221] Registration of the containerd container factory successfully Apr 30 01:20:55.360268 kubelet[2405]: I0430 01:20:55.360109 2405 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 01:20:55.361948 kubelet[2405]: I0430 01:20:55.361905 2405 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 01:20:55.362081 kubelet[2405]: I0430 01:20:55.362071 2405 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 01:20:55.362130 kubelet[2405]: I0430 01:20:55.362123 2405 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 01:20:55.362210 kubelet[2405]: E0430 01:20:55.362190 2405 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 01:20:55.370162 kubelet[2405]: W0430 01:20:55.370036 2405 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://168.119.50.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 168.119.50.83:6443: connect: connection refused Apr 30 01:20:55.370162 kubelet[2405]: E0430 01:20:55.370124 2405 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://168.119.50.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 168.119.50.83:6443: connect: connection refused Apr 30 01:20:55.381799 kubelet[2405]: I0430 01:20:55.381704 2405 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 01:20:55.381799 kubelet[2405]: I0430 01:20:55.381743 2405 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 01:20:55.381799 kubelet[2405]: I0430 01:20:55.381767 2405 state_mem.go:36] "Initialized new in-memory state store" Apr 30 01:20:55.384595 kubelet[2405]: I0430 01:20:55.384553 2405 policy_none.go:49] "None policy: Start" Apr 30 01:20:55.385545 kubelet[2405]: I0430 01:20:55.385490 2405 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 01:20:55.385634 kubelet[2405]: I0430 01:20:55.385558 2405 state_mem.go:35] "Initializing new in-memory state store" Apr 30 01:20:55.393838 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 01:20:55.409386 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 01:20:55.413569 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 30 01:20:55.421138 kubelet[2405]: I0430 01:20:55.421110 2405 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 01:20:55.421554 kubelet[2405]: I0430 01:20:55.421486 2405 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 01:20:55.422248 kubelet[2405]: I0430 01:20:55.421884 2405 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 01:20:55.426016 kubelet[2405]: E0430 01:20:55.425873 2405 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-a-62378e86a2\" not found" Apr 30 01:20:55.447461 kubelet[2405]: I0430 01:20:55.447344 2405 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-a-62378e86a2" Apr 30 01:20:55.447917 kubelet[2405]: E0430 01:20:55.447873 2405 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://168.119.50.83:6443/api/v1/nodes\": dial tcp 168.119.50.83:6443: connect: connection refused" node="ci-4081-3-3-a-62378e86a2" Apr 30 01:20:55.463479 kubelet[2405]: I0430 01:20:55.463275 2405 topology_manager.go:215] "Topology Admit Handler" podUID="c49471ec7bf784d03abb758e28b0c06d" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-3-a-62378e86a2" Apr 30 01:20:55.466743 kubelet[2405]: I0430 01:20:55.466483 2405 topology_manager.go:215] "Topology Admit Handler" podUID="78f6ead8652a699e0469de8e098db20c" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-3-a-62378e86a2" Apr 30 01:20:55.469028 kubelet[2405]: I0430 01:20:55.468689 2405 topology_manager.go:215] "Topology Admit Handler" podUID="b2e07ae7539bed18a5e62a859fb66082" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-3-a-62378e86a2" Apr 30 01:20:55.478169 systemd[1]: Created slice kubepods-burstable-podc49471ec7bf784d03abb758e28b0c06d.slice - libcontainer container kubepods-burstable-podc49471ec7bf784d03abb758e28b0c06d.slice. Apr 30 01:20:55.500487 systemd[1]: Created slice kubepods-burstable-pod78f6ead8652a699e0469de8e098db20c.slice - libcontainer container kubepods-burstable-pod78f6ead8652a699e0469de8e098db20c.slice. Apr 30 01:20:55.518902 systemd[1]: Created slice kubepods-burstable-podb2e07ae7539bed18a5e62a859fb66082.slice - libcontainer container kubepods-burstable-podb2e07ae7539bed18a5e62a859fb66082.slice. 
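[Annotation] The slices created above mirror the kubelet's QoS hierarchy under its systemd cgroup driver (CgroupDriver "systemd" in the nodeConfig logged earlier): besteffort and burstable pods nest under kubepods-besteffort.slice and kubepods-burstable.slice, while guaranteed pods sit directly under kubepods.slice. A sketch of how those per-pod slice names are composed, assuming the kubelet's usual conventions (the dash-to-underscore escaping is an assumption; the UIDs in this log contain no dashes):

```python
def pod_slice_name(qos: str, pod_uid: str) -> str:
    """Compose the systemd slice name kubelet uses for a pod.

    Assumed conventions: '-' in the UID is escaped to '_' because '-' is
    systemd's slice hierarchy separator, and guaranteed pods skip the QoS tier.
    """
    uid = pod_uid.replace("-", "_")
    prefix = "kubepods" if qos == "guaranteed" else f"kubepods-{qos}"
    return f"{prefix}-pod{uid}.slice"

# Matches one of the units created in the log above:
assert pod_slice_name("burstable", "c49471ec7bf784d03abb758e28b0c06d") \
    == "kubepods-burstable-podc49471ec7bf784d03abb758e28b0c06d.slice"
```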
Apr 30 01:20:55.548798 kubelet[2405]: E0430 01:20:55.547934 2405 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.50.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-a-62378e86a2?timeout=10s\": dial tcp 168.119.50.83:6443: connect: connection refused" interval="400ms" Apr 30 01:20:55.645391 kubelet[2405]: I0430 01:20:55.645317 2405 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c49471ec7bf784d03abb758e28b0c06d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-a-62378e86a2\" (UID: \"c49471ec7bf784d03abb758e28b0c06d\") " pod="kube-system/kube-apiserver-ci-4081-3-3-a-62378e86a2" Apr 30 01:20:55.646252 kubelet[2405]: I0430 01:20:55.646126 2405 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/78f6ead8652a699e0469de8e098db20c-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-a-62378e86a2\" (UID: \"78f6ead8652a699e0469de8e098db20c\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-a-62378e86a2" Apr 30 01:20:55.646451 kubelet[2405]: I0430 01:20:55.646309 2405 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78f6ead8652a699e0469de8e098db20c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-a-62378e86a2\" (UID: \"78f6ead8652a699e0469de8e098db20c\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-a-62378e86a2" Apr 30 01:20:55.646451 kubelet[2405]: I0430 01:20:55.646384 2405 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c49471ec7bf784d03abb758e28b0c06d-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-a-62378e86a2\" (UID: \"c49471ec7bf784d03abb758e28b0c06d\") " pod="kube-system/kube-apiserver-ci-4081-3-3-a-62378e86a2" Apr 30 01:20:55.646586 kubelet[2405]: I0430 01:20:55.646453 2405 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/78f6ead8652a699e0469de8e098db20c-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-a-62378e86a2\" (UID: \"78f6ead8652a699e0469de8e098db20c\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-a-62378e86a2" Apr 30 01:20:55.646586 kubelet[2405]: I0430 01:20:55.646505 2405 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/78f6ead8652a699e0469de8e098db20c-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-a-62378e86a2\" (UID: \"78f6ead8652a699e0469de8e098db20c\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-a-62378e86a2" Apr 30 01:20:55.646586 kubelet[2405]: I0430 01:20:55.646548 2405 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78f6ead8652a699e0469de8e098db20c-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-a-62378e86a2\" (UID: \"78f6ead8652a699e0469de8e098db20c\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-a-62378e86a2" Apr 30 01:20:55.646586 kubelet[2405]: I0430 01:20:55.646566 2405 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b2e07ae7539bed18a5e62a859fb66082-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-a-62378e86a2\" (UID: \"b2e07ae7539bed18a5e62a859fb66082\") " pod="kube-system/kube-scheduler-ci-4081-3-3-a-62378e86a2" Apr 30 01:20:55.646586 kubelet[2405]: I0430 01:20:55.646588 2405 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c49471ec7bf784d03abb758e28b0c06d-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-a-62378e86a2\" (UID: \"c49471ec7bf784d03abb758e28b0c06d\") " pod="kube-system/kube-apiserver-ci-4081-3-3-a-62378e86a2" Apr 30 01:20:55.650724 kubelet[2405]: I0430 01:20:55.650629 2405 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-a-62378e86a2" Apr 30 01:20:55.651117 kubelet[2405]: E0430 01:20:55.651085 2405 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://168.119.50.83:6443/api/v1/nodes\": dial tcp 168.119.50.83:6443: connect: connection refused" node="ci-4081-3-3-a-62378e86a2" Apr 30 01:20:55.795296 containerd[1492]: time="2025-04-30T01:20:55.795115668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-a-62378e86a2,Uid:c49471ec7bf784d03abb758e28b0c06d,Namespace:kube-system,Attempt:0,}" Apr 30 01:20:55.808087 containerd[1492]: time="2025-04-30T01:20:55.807272853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-a-62378e86a2,Uid:78f6ead8652a699e0469de8e098db20c,Namespace:kube-system,Attempt:0,}" Apr 30 01:20:55.828046 containerd[1492]: time="2025-04-30T01:20:55.827745022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-a-62378e86a2,Uid:b2e07ae7539bed18a5e62a859fb66082,Namespace:kube-system,Attempt:0,}" Apr 30 01:20:55.948770 kubelet[2405]: E0430 01:20:55.948693 2405 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.50.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-a-62378e86a2?timeout=10s\": dial tcp 168.119.50.83:6443: connect: connection refused" interval="800ms" Apr 30 01:20:56.056738 kubelet[2405]: I0430 01:20:56.056088 2405 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-a-62378e86a2" Apr 30 01:20:56.057657 kubelet[2405]: E0430 01:20:56.057612 2405 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://168.119.50.83:6443/api/v1/nodes\": dial tcp 168.119.50.83:6443: connect: connection refused" node="ci-4081-3-3-a-62378e86a2" Apr 30 01:20:56.201662 kubelet[2405]: W0430 01:20:56.201344 2405 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://168.119.50.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-a-62378e86a2&limit=500&resourceVersion=0": dial tcp 168.119.50.83:6443: connect: connection refused Apr 30 01:20:56.201662 kubelet[2405]: E0430 01:20:56.201573 2405 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://168.119.50.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-a-62378e86a2&limit=500&resourceVersion=0": dial tcp 168.119.50.83:6443: connect: connection refused Apr 30 01:20:56.385813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1601600630.mount: Deactivated successfully. 
Apr 30 01:20:56.392758 containerd[1492]: time="2025-04-30T01:20:56.392277994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 01:20:56.393976 containerd[1492]: time="2025-04-30T01:20:56.393921123Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Apr 30 01:20:56.398281 containerd[1492]: time="2025-04-30T01:20:56.398154522Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 01:20:56.399347 containerd[1492]: time="2025-04-30T01:20:56.399235421Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 01:20:56.399767 containerd[1492]: time="2025-04-30T01:20:56.399658093Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 01:20:56.400436 containerd[1492]: time="2025-04-30T01:20:56.400350760Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 01:20:56.400558 containerd[1492]: time="2025-04-30T01:20:56.400465837Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 01:20:56.405164 containerd[1492]: time="2025-04-30T01:20:56.405059989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 01:20:56.406680 containerd[1492]: time="2025-04-30T01:20:56.406087290Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 598.727079ms" Apr 30 01:20:56.408298 containerd[1492]: time="2025-04-30T01:20:56.408217289Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 580.390788ms" Apr 30 01:20:56.409260 containerd[1492]: time="2025-04-30T01:20:56.409214430Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 614.012083ms" Apr 30 01:20:56.504422 kubelet[2405]: W0430 01:20:56.504284 2405 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://168.119.50.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 168.119.50.83:6443: connect: connection refused Apr 30 01:20:56.505370 
kubelet[2405]: E0430 01:20:56.504868 2405 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://168.119.50.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 168.119.50.83:6443: connect: connection refused Apr 30 01:20:56.539041 containerd[1492]: time="2025-04-30T01:20:56.538145477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:20:56.539041 containerd[1492]: time="2025-04-30T01:20:56.538213516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:20:56.539041 containerd[1492]: time="2025-04-30T01:20:56.538228275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:20:56.539041 containerd[1492]: time="2025-04-30T01:20:56.538316554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:20:56.542093 containerd[1492]: time="2025-04-30T01:20:56.541996083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:20:56.542246 containerd[1492]: time="2025-04-30T01:20:56.542060522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:20:56.542246 containerd[1492]: time="2025-04-30T01:20:56.542078481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:20:56.542246 containerd[1492]: time="2025-04-30T01:20:56.542166240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:20:56.543777 containerd[1492]: time="2025-04-30T01:20:56.543620292Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:20:56.543939 containerd[1492]: time="2025-04-30T01:20:56.543915166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:20:56.544039 containerd[1492]: time="2025-04-30T01:20:56.544018084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:20:56.545318 containerd[1492]: time="2025-04-30T01:20:56.545238061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:20:56.550626 kubelet[2405]: W0430 01:20:56.550448 2405 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://168.119.50.83:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 168.119.50.83:6443: connect: connection refused Apr 30 01:20:56.550626 kubelet[2405]: E0430 01:20:56.550573 2405 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://168.119.50.83:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 168.119.50.83:6443: connect: connection refused Apr 30 01:20:56.571085 systemd[1]: Started cri-containerd-3f6378f03c0be40c4bbe18ce0677cef923e0465b40d3fd27f149cb2c8b7073ad.scope - libcontainer container 3f6378f03c0be40c4bbe18ce0677cef923e0465b40d3fd27f149cb2c8b7073ad. Apr 30 01:20:56.576575 systemd[1]: Started cri-containerd-498fc5ddee7a2e215c50d8808e93984e9d765b64842543425c3b3d90c4074835.scope - libcontainer container 498fc5ddee7a2e215c50d8808e93984e9d765b64842543425c3b3d90c4074835. Apr 30 01:20:56.577656 systemd[1]: Started cri-containerd-d52eef805bf222c6a5e93d4d30b35687fa4bd20dc2b82814803b8c58375e8b67.scope - libcontainer container d52eef805bf222c6a5e93d4d30b35687fa4bd20dc2b82814803b8c58375e8b67. Apr 30 01:20:56.638798 kubelet[2405]: W0430 01:20:56.638692 2405 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://168.119.50.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 168.119.50.83:6443: connect: connection refused Apr 30 01:20:56.638798 kubelet[2405]: E0430 01:20:56.638796 2405 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://168.119.50.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 168.119.50.83:6443: connect: connection refused Apr 30 01:20:56.641542 containerd[1492]: time="2025-04-30T01:20:56.641344138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-a-62378e86a2,Uid:78f6ead8652a699e0469de8e098db20c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f6378f03c0be40c4bbe18ce0677cef923e0465b40d3fd27f149cb2c8b7073ad\"" Apr 30 01:20:56.648686 containerd[1492]: time="2025-04-30T01:20:56.648577799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-a-62378e86a2,Uid:c49471ec7bf784d03abb758e28b0c06d,Namespace:kube-system,Attempt:0,} returns sandbox id \"498fc5ddee7a2e215c50d8808e93984e9d765b64842543425c3b3d90c4074835\"" Apr 30 01:20:56.651923 containerd[1492]: time="2025-04-30T01:20:56.651754778Z" level=info msg="CreateContainer within sandbox \"3f6378f03c0be40c4bbe18ce0677cef923e0465b40d3fd27f149cb2c8b7073ad\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 01:20:56.655095 containerd[1492]: time="2025-04-30T01:20:56.654930357Z" level=info msg="CreateContainer within sandbox \"498fc5ddee7a2e215c50d8808e93984e9d765b64842543425c3b3d90c4074835\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 01:20:56.660803 containerd[1492]: time="2025-04-30T01:20:56.660553809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-a-62378e86a2,Uid:b2e07ae7539bed18a5e62a859fb66082,Namespace:kube-system,Attempt:0,} returns sandbox id \"d52eef805bf222c6a5e93d4d30b35687fa4bd20dc2b82814803b8c58375e8b67\"" Apr 30 
01:20:56.664783 containerd[1492]: time="2025-04-30T01:20:56.664686930Z" level=info msg="CreateContainer within sandbox \"d52eef805bf222c6a5e93d4d30b35687fa4bd20dc2b82814803b8c58375e8b67\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 01:20:56.674331 containerd[1492]: time="2025-04-30T01:20:56.674280146Z" level=info msg="CreateContainer within sandbox \"498fc5ddee7a2e215c50d8808e93984e9d765b64842543425c3b3d90c4074835\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3a44359b6833e37ca75afc21d4f1d32adc2f40ce67aa8b906e9f33bbcc03349e\"" Apr 30 01:20:56.675177 containerd[1492]: time="2025-04-30T01:20:56.675149049Z" level=info msg="StartContainer for \"3a44359b6833e37ca75afc21d4f1d32adc2f40ce67aa8b906e9f33bbcc03349e\"" Apr 30 01:20:56.682690 containerd[1492]: time="2025-04-30T01:20:56.682610306Z" level=info msg="CreateContainer within sandbox \"3f6378f03c0be40c4bbe18ce0677cef923e0465b40d3fd27f149cb2c8b7073ad\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4740d04cdd61e0e77059a38f135075e629ba577a4b4835f67444417e171e8296\"" Apr 30 01:20:56.683912 containerd[1492]: time="2025-04-30T01:20:56.683741364Z" level=info msg="CreateContainer within sandbox \"d52eef805bf222c6a5e93d4d30b35687fa4bd20dc2b82814803b8c58375e8b67\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1fc2482dc371c0d738401663ea8c86e052f637203af8ad5ff576f568421142b7\"" Apr 30 01:20:56.684198 containerd[1492]: time="2025-04-30T01:20:56.684177476Z" level=info msg="StartContainer for \"4740d04cdd61e0e77059a38f135075e629ba577a4b4835f67444417e171e8296\"" Apr 30 01:20:56.686002 containerd[1492]: time="2025-04-30T01:20:56.685975322Z" level=info msg="StartContainer for \"1fc2482dc371c0d738401663ea8c86e052f637203af8ad5ff576f568421142b7\"" Apr 30 01:20:56.714929 systemd[1]: Started cri-containerd-3a44359b6833e37ca75afc21d4f1d32adc2f40ce67aa8b906e9f33bbcc03349e.scope - libcontainer container 3a44359b6833e37ca75afc21d4f1d32adc2f40ce67aa8b906e9f33bbcc03349e. Apr 30 01:20:56.731968 systemd[1]: Started cri-containerd-4740d04cdd61e0e77059a38f135075e629ba577a4b4835f67444417e171e8296.scope - libcontainer container 4740d04cdd61e0e77059a38f135075e629ba577a4b4835f67444417e171e8296. Apr 30 01:20:56.741977 systemd[1]: Started cri-containerd-1fc2482dc371c0d738401663ea8c86e052f637203af8ad5ff576f568421142b7.scope - libcontainer container 1fc2482dc371c0d738401663ea8c86e052f637203af8ad5ff576f568421142b7. 
Apr 30 01:20:56.750870 kubelet[2405]: E0430 01:20:56.750620 2405 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.50.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-a-62378e86a2?timeout=10s\": dial tcp 168.119.50.83:6443: connect: connection refused" interval="1.6s" Apr 30 01:20:56.802688 containerd[1492]: time="2025-04-30T01:20:56.802278531Z" level=info msg="StartContainer for \"3a44359b6833e37ca75afc21d4f1d32adc2f40ce67aa8b906e9f33bbcc03349e\" returns successfully" Apr 30 01:20:56.811232 containerd[1492]: time="2025-04-30T01:20:56.810788648Z" level=info msg="StartContainer for \"1fc2482dc371c0d738401663ea8c86e052f637203af8ad5ff576f568421142b7\" returns successfully" Apr 30 01:20:56.826266 containerd[1492]: time="2025-04-30T01:20:56.825737961Z" level=info msg="StartContainer for \"4740d04cdd61e0e77059a38f135075e629ba577a4b4835f67444417e171e8296\" returns successfully" Apr 30 01:20:56.863136 kubelet[2405]: I0430 01:20:56.863089 2405 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-a-62378e86a2" Apr 30 01:20:56.863877 kubelet[2405]: E0430 01:20:56.863834 2405 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://168.119.50.83:6443/api/v1/nodes\": dial tcp 168.119.50.83:6443: connect: connection refused" node="ci-4081-3-3-a-62378e86a2" Apr 30 01:20:58.466093 kubelet[2405]: I0430 01:20:58.466051 2405 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-a-62378e86a2" Apr 30 01:20:59.453276 kubelet[2405]: E0430 01:20:59.453199 2405 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-3-a-62378e86a2\" not found" node="ci-4081-3-3-a-62378e86a2" Apr 30 01:20:59.562963 kubelet[2405]: I0430 01:20:59.562771 2405 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:00.322748 kubelet[2405]: I0430 01:21:00.322527 2405 apiserver.go:52] "Watching apiserver" Apr 30 01:21:00.344633 kubelet[2405]: I0430 01:21:00.344526 2405 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 01:21:01.671239 systemd[1]: Reloading requested from client PID 2673 ('systemctl') (unit session-7.scope)... Apr 30 01:21:01.671260 systemd[1]: Reloading... Apr 30 01:21:01.764749 zram_generator::config[2716]: No configuration found. Apr 30 01:21:01.888000 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 01:21:01.978952 systemd[1]: Reloading finished in 307 ms. Apr 30 01:21:02.023319 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:21:02.038893 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 01:21:02.039215 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:21:02.039281 systemd[1]: kubelet.service: Consumed 1.058s CPU time, 113.5M memory peak, 0B memory swap peak. Apr 30 01:21:02.050983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:21:02.175597 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
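[Annotation] The "Failed to ensure lease exists, will retry" records climb from interval="200ms" through "400ms" and "800ms" to "1.6s" while the API server is still refusing connections: a plain doubling backoff. A sketch of that retry schedule as observed in this log (the 7 s cap is an assumption based on the kubelet lease controller's defaults, not something shown here):

```python
from itertools import count

def lease_retry_intervals(base: float = 0.2, cap: float = 7.0):
    """Yield retry intervals matching the doubling seen in the log,
    capped at an assumed maximum."""
    for attempt in count():
        yield min(base * (2 ** attempt), cap)

gen = lease_retry_intervals()
print([next(gen) for _ in range(5)])  # [0.2, 0.4, 0.8, 1.6, 3.2]
```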
Apr 30 01:21:02.189298 (kubelet)[2758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 01:21:02.257985 kubelet[2758]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 01:21:02.257985 kubelet[2758]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 01:21:02.257985 kubelet[2758]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 01:21:02.257985 kubelet[2758]: I0430 01:21:02.257021 2758 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 01:21:02.266536 kubelet[2758]: I0430 01:21:02.264849 2758 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 01:21:02.266536 kubelet[2758]: I0430 01:21:02.264877 2758 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 01:21:02.266536 kubelet[2758]: I0430 01:21:02.265090 2758 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 01:21:02.267461 kubelet[2758]: I0430 01:21:02.267440 2758 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 01:21:02.269279 kubelet[2758]: I0430 01:21:02.269254 2758 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 01:21:02.277124 kubelet[2758]: I0430 01:21:02.277099 2758 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 01:21:02.277556 kubelet[2758]: I0430 01:21:02.277527 2758 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 01:21:02.277820 kubelet[2758]: I0430 01:21:02.277615 2758 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-a-62378e86a2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 01:21:02.277958 kubelet[2758]: I0430 01:21:02.277945 2758 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 01:21:02.278033 kubelet[2758]: I0430 01:21:02.278023 2758 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 01:21:02.278137 kubelet[2758]: I0430 01:21:02.278127 2758 state_mem.go:36] "Initialized new in-memory state store" Apr 30 01:21:02.278310 kubelet[2758]: I0430 01:21:02.278299 2758 kubelet.go:400] "Attempting to sync node with API server" Apr 30 01:21:02.278819 kubelet[2758]: I0430 01:21:02.278804 2758 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 01:21:02.278927 kubelet[2758]: I0430 01:21:02.278918 2758 kubelet.go:312] "Adding apiserver pod source" Apr 30 01:21:02.280790 kubelet[2758]: I0430 01:21:02.280766 2758 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 01:21:02.282104 kubelet[2758]: I0430 01:21:02.282084 2758 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 01:21:02.282359 kubelet[2758]: I0430 01:21:02.282342 2758 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 01:21:02.282931 kubelet[2758]: I0430 01:21:02.282914 2758 server.go:1264] "Started kubelet" Apr 30 01:21:02.287562 kubelet[2758]: I0430 01:21:02.287530 2758 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 01:21:02.296577 kubelet[2758]: I0430 01:21:02.296536 2758 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 01:21:02.297683 kubelet[2758]: I0430 01:21:02.297655 2758 server.go:455] "Adding 
debug handlers to kubelet server" Apr 30 01:21:02.300871 kubelet[2758]: I0430 01:21:02.299679 2758 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 01:21:02.300871 kubelet[2758]: I0430 01:21:02.301234 2758 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 01:21:02.306012 kubelet[2758]: I0430 01:21:02.305975 2758 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 01:21:02.317144 kubelet[2758]: I0430 01:21:02.317109 2758 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 01:21:02.317503 kubelet[2758]: I0430 01:21:02.317486 2758 reconciler.go:26] "Reconciler: start to sync state" Apr 30 01:21:02.323621 kubelet[2758]: I0430 01:21:02.323579 2758 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 01:21:02.325273 kubelet[2758]: I0430 01:21:02.325239 2758 factory.go:221] Registration of the systemd container factory successfully Apr 30 01:21:02.325764 kubelet[2758]: I0430 01:21:02.325548 2758 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 01:21:02.336741 kubelet[2758]: I0430 01:21:02.335955 2758 factory.go:221] Registration of the containerd container factory successfully Apr 30 01:21:02.341582 kubelet[2758]: E0430 01:21:02.341554 2758 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 01:21:02.344096 kubelet[2758]: I0430 01:21:02.344050 2758 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 01:21:02.344338 kubelet[2758]: I0430 01:21:02.344319 2758 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 01:21:02.344509 kubelet[2758]: I0430 01:21:02.344489 2758 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 01:21:02.344698 kubelet[2758]: E0430 01:21:02.344659 2758 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 01:21:02.386447 kubelet[2758]: I0430 01:21:02.386376 2758 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 01:21:02.386447 kubelet[2758]: I0430 01:21:02.386437 2758 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 01:21:02.386626 kubelet[2758]: I0430 01:21:02.386463 2758 state_mem.go:36] "Initialized new in-memory state store" Apr 30 01:21:02.386652 kubelet[2758]: I0430 01:21:02.386636 2758 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 01:21:02.386675 kubelet[2758]: I0430 01:21:02.386655 2758 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 01:21:02.386675 kubelet[2758]: I0430 01:21:02.386674 2758 policy_none.go:49] "None policy: Start" Apr 30 01:21:02.389006 kubelet[2758]: I0430 01:21:02.388931 2758 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 01:21:02.389715 kubelet[2758]: I0430 01:21:02.389615 2758 state_mem.go:35] "Initializing new in-memory state store" Apr 30 01:21:02.389946 kubelet[2758]: I0430 01:21:02.389848 2758 state_mem.go:75] "Updated machine memory state" Apr 30 01:21:02.398619 kubelet[2758]: I0430 01:21:02.398558 2758 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 01:21:02.398826 kubelet[2758]: I0430 01:21:02.398787 2758 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 01:21:02.398920 kubelet[2758]: I0430 01:21:02.398909 2758 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 01:21:02.422420 kubelet[2758]: I0430 01:21:02.421080 2758 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:02.434201 kubelet[2758]: I0430 01:21:02.434171 2758 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:02.434552 kubelet[2758]: I0430 01:21:02.434539 2758 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:02.445630 kubelet[2758]: I0430 01:21:02.445562 2758 topology_manager.go:215] "Topology Admit Handler" podUID="c49471ec7bf784d03abb758e28b0c06d" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-3-a-62378e86a2" Apr 30 01:21:02.446862 kubelet[2758]: I0430 01:21:02.445778 2758 topology_manager.go:215] "Topology Admit Handler" podUID="78f6ead8652a699e0469de8e098db20c" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-3-a-62378e86a2" Apr 30 01:21:02.446862 kubelet[2758]: I0430 01:21:02.445841 2758 topology_manager.go:215] "Topology Admit Handler" podUID="b2e07ae7539bed18a5e62a859fb66082" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-3-a-62378e86a2" Apr 30 01:21:02.519231 kubelet[2758]: I0430 01:21:02.518453 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/78f6ead8652a699e0469de8e098db20c-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4081-3-3-a-62378e86a2\" (UID: \"78f6ead8652a699e0469de8e098db20c\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-a-62378e86a2" Apr 30 01:21:02.520149 kubelet[2758]: I0430 01:21:02.519682 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78f6ead8652a699e0469de8e098db20c-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-a-62378e86a2\" (UID: \"78f6ead8652a699e0469de8e098db20c\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-a-62378e86a2" Apr 30 01:21:02.520149 kubelet[2758]: I0430 01:21:02.519796 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78f6ead8652a699e0469de8e098db20c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-a-62378e86a2\" (UID: \"78f6ead8652a699e0469de8e098db20c\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-a-62378e86a2" Apr 30 01:21:02.520149 kubelet[2758]: I0430 01:21:02.519847 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c49471ec7bf784d03abb758e28b0c06d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-a-62378e86a2\" (UID: \"c49471ec7bf784d03abb758e28b0c06d\") " pod="kube-system/kube-apiserver-ci-4081-3-3-a-62378e86a2" Apr 30 01:21:02.520149 kubelet[2758]: I0430 01:21:02.519886 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c49471ec7bf784d03abb758e28b0c06d-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-a-62378e86a2\" (UID: \"c49471ec7bf784d03abb758e28b0c06d\") " pod="kube-system/kube-apiserver-ci-4081-3-3-a-62378e86a2" Apr 30 01:21:02.520149 kubelet[2758]: I0430 01:21:02.519922 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/78f6ead8652a699e0469de8e098db20c-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-a-62378e86a2\" (UID: \"78f6ead8652a699e0469de8e098db20c\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-a-62378e86a2" Apr 30 01:21:02.520527 kubelet[2758]: I0430 01:21:02.519960 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/78f6ead8652a699e0469de8e098db20c-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-a-62378e86a2\" (UID: \"78f6ead8652a699e0469de8e098db20c\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-a-62378e86a2" Apr 30 01:21:02.520527 kubelet[2758]: I0430 01:21:02.519996 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b2e07ae7539bed18a5e62a859fb66082-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-a-62378e86a2\" (UID: \"b2e07ae7539bed18a5e62a859fb66082\") " pod="kube-system/kube-scheduler-ci-4081-3-3-a-62378e86a2" Apr 30 01:21:02.520527 kubelet[2758]: I0430 01:21:02.520030 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c49471ec7bf784d03abb758e28b0c06d-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-a-62378e86a2\" (UID: \"c49471ec7bf784d03abb758e28b0c06d\") " 
pod="kube-system/kube-apiserver-ci-4081-3-3-a-62378e86a2" Apr 30 01:21:03.282223 kubelet[2758]: I0430 01:21:03.282162 2758 apiserver.go:52] "Watching apiserver" Apr 30 01:21:03.318474 kubelet[2758]: I0430 01:21:03.318419 2758 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 01:21:03.422750 kubelet[2758]: E0430 01:21:03.422689 2758 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-3-a-62378e86a2\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-3-a-62378e86a2" Apr 30 01:21:03.509722 kubelet[2758]: I0430 01:21:03.509615 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-a-62378e86a2" podStartSLOduration=1.509594764 podStartE2EDuration="1.509594764s" podCreationTimestamp="2025-04-30 01:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 01:21:03.471745216 +0000 UTC m=+1.276987176" watchObservedRunningTime="2025-04-30 01:21:03.509594764 +0000 UTC m=+1.314836724" Apr 30 01:21:03.534552 kubelet[2758]: I0430 01:21:03.534379 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-a-62378e86a2" podStartSLOduration=1.534345252 podStartE2EDuration="1.534345252s" podCreationTimestamp="2025-04-30 01:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 01:21:03.534199213 +0000 UTC m=+1.339441173" watchObservedRunningTime="2025-04-30 01:21:03.534345252 +0000 UTC m=+1.339587212" Apr 30 01:21:03.534552 kubelet[2758]: I0430 01:21:03.534488 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-a-62378e86a2" podStartSLOduration=1.534481411 podStartE2EDuration="1.534481411s" podCreationTimestamp="2025-04-30 01:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 01:21:03.510437957 +0000 UTC m=+1.315679917" watchObservedRunningTime="2025-04-30 01:21:03.534481411 +0000 UTC m=+1.339723371" Apr 30 01:21:07.544915 sudo[1852]: pam_unix(sudo:session): session closed for user root Apr 30 01:21:07.704169 sshd[1849]: pam_unix(sshd:session): session closed for user core Apr 30 01:21:07.710161 systemd-logind[1460]: Session 7 logged out. Waiting for processes to exit. Apr 30 01:21:07.710936 systemd[1]: sshd@6-168.119.50.83:22-139.178.68.195:36396.service: Deactivated successfully. Apr 30 01:21:07.713298 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 01:21:07.713656 systemd[1]: session-7.scope: Consumed 5.747s CPU time, 187.3M memory peak, 0B memory swap peak. Apr 30 01:21:07.714969 systemd-logind[1460]: Removed session 7. Apr 30 01:21:14.859612 kubelet[2758]: I0430 01:21:14.859549 2758 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 01:21:14.860671 kubelet[2758]: I0430 01:21:14.860279 2758 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 01:21:14.861251 containerd[1492]: time="2025-04-30T01:21:14.860084361Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 30 01:21:15.896090 kubelet[2758]: I0430 01:21:15.896039 2758 topology_manager.go:215] "Topology Admit Handler" podUID="d09c1061-5104-4d18-b92c-165316bc402a" podNamespace="kube-system" podName="kube-proxy-95m87" Apr 30 01:21:15.902944 kubelet[2758]: I0430 01:21:15.902903 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m75rc\" (UniqueName: \"kubernetes.io/projected/d09c1061-5104-4d18-b92c-165316bc402a-kube-api-access-m75rc\") pod \"kube-proxy-95m87\" (UID: \"d09c1061-5104-4d18-b92c-165316bc402a\") " pod="kube-system/kube-proxy-95m87" Apr 30 01:21:15.902944 kubelet[2758]: I0430 01:21:15.903014 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d09c1061-5104-4d18-b92c-165316bc402a-kube-proxy\") pod \"kube-proxy-95m87\" (UID: \"d09c1061-5104-4d18-b92c-165316bc402a\") " pod="kube-system/kube-proxy-95m87" Apr 30 01:21:15.902944 kubelet[2758]: I0430 01:21:15.903039 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d09c1061-5104-4d18-b92c-165316bc402a-xtables-lock\") pod \"kube-proxy-95m87\" (UID: \"d09c1061-5104-4d18-b92c-165316bc402a\") " pod="kube-system/kube-proxy-95m87" Apr 30 01:21:15.904155 kubelet[2758]: I0430 01:21:15.903057 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d09c1061-5104-4d18-b92c-165316bc402a-lib-modules\") pod \"kube-proxy-95m87\" (UID: \"d09c1061-5104-4d18-b92c-165316bc402a\") " pod="kube-system/kube-proxy-95m87" Apr 30 01:21:15.907802 systemd[1]: Created slice kubepods-besteffort-podd09c1061_5104_4d18_b92c_165316bc402a.slice - libcontainer container kubepods-besteffort-podd09c1061_5104_4d18_b92c_165316bc402a.slice. Apr 30 01:21:15.969277 kubelet[2758]: I0430 01:21:15.969213 2758 topology_manager.go:215] "Topology Admit Handler" podUID="cfb89852-9dcb-470d-bef1-72fdd1c8494a" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-vtbfr" Apr 30 01:21:15.980915 systemd[1]: Created slice kubepods-besteffort-podcfb89852_9dcb_470d_bef1_72fdd1c8494a.slice - libcontainer container kubepods-besteffort-podcfb89852_9dcb_470d_bef1_72fdd1c8494a.slice. 
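
systemd's "Created slice kubepods-besteffort-podd09c1061_5104_4d18_b92c_165316bc402a.slice" above shows how the systemd cgroup driver names per-pod slices: the QoS class selects the parent tree and the pod UID is embedded with its dashes rewritten to underscores. A small sketch of that name mangling; podSlice is an illustrative helper, not kubelet code:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice builds the per-pod slice name used by the systemd cgroup driver:
    // dashes in the pod UID become underscores so they survive unit-name escaping.
    func podSlice(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSlice("besteffort", "d09c1061-5104-4d18-b92c-165316bc402a"))
        // kubepods-besteffort-podd09c1061_5104_4d18_b92c_165316bc402a.slice
    }
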
Apr 30 01:21:16.005173 kubelet[2758]: I0430 01:21:16.005102 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5j2d\" (UniqueName: \"kubernetes.io/projected/cfb89852-9dcb-470d-bef1-72fdd1c8494a-kube-api-access-c5j2d\") pod \"tigera-operator-797db67f8-vtbfr\" (UID: \"cfb89852-9dcb-470d-bef1-72fdd1c8494a\") " pod="tigera-operator/tigera-operator-797db67f8-vtbfr" Apr 30 01:21:16.005413 kubelet[2758]: I0430 01:21:16.005205 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cfb89852-9dcb-470d-bef1-72fdd1c8494a-var-lib-calico\") pod \"tigera-operator-797db67f8-vtbfr\" (UID: \"cfb89852-9dcb-470d-bef1-72fdd1c8494a\") " pod="tigera-operator/tigera-operator-797db67f8-vtbfr" Apr 30 01:21:16.221427 containerd[1492]: time="2025-04-30T01:21:16.220911833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-95m87,Uid:d09c1061-5104-4d18-b92c-165316bc402a,Namespace:kube-system,Attempt:0,}" Apr 30 01:21:16.250102 containerd[1492]: time="2025-04-30T01:21:16.249996260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:21:16.250102 containerd[1492]: time="2025-04-30T01:21:16.250056181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:21:16.250102 containerd[1492]: time="2025-04-30T01:21:16.250071461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:16.250547 containerd[1492]: time="2025-04-30T01:21:16.250306103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:16.272960 systemd[1]: Started cri-containerd-453744297f61ee90f032618c5719423bbe852cc87d3f511ca5bcd738a0b041d8.scope - libcontainer container 453744297f61ee90f032618c5719423bbe852cc87d3f511ca5bcd738a0b041d8. Apr 30 01:21:16.287060 containerd[1492]: time="2025-04-30T01:21:16.287009390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-vtbfr,Uid:cfb89852-9dcb-470d-bef1-72fdd1c8494a,Namespace:tigera-operator,Attempt:0,}" Apr 30 01:21:16.300733 containerd[1492]: time="2025-04-30T01:21:16.300512456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-95m87,Uid:d09c1061-5104-4d18-b92c-165316bc402a,Namespace:kube-system,Attempt:0,} returns sandbox id \"453744297f61ee90f032618c5719423bbe852cc87d3f511ca5bcd738a0b041d8\"" Apr 30 01:21:16.314735 containerd[1492]: time="2025-04-30T01:21:16.314517086Z" level=info msg="CreateContainer within sandbox \"453744297f61ee90f032618c5719423bbe852cc87d3f511ca5bcd738a0b041d8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 01:21:16.328733 containerd[1492]: time="2025-04-30T01:21:16.328464035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:21:16.329552 containerd[1492]: time="2025-04-30T01:21:16.328936118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:21:16.329552 containerd[1492]: time="2025-04-30T01:21:16.328978399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:16.329552 containerd[1492]: time="2025-04-30T01:21:16.329077480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:16.337146 containerd[1492]: time="2025-04-30T01:21:16.337097142Z" level=info msg="CreateContainer within sandbox \"453744297f61ee90f032618c5719423bbe852cc87d3f511ca5bcd738a0b041d8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5945fe5c1651dfbdda5872ff0ee41805836e878b3028c9df12b4304a17b5aa44\"" Apr 30 01:21:16.338206 containerd[1492]: time="2025-04-30T01:21:16.338174991Z" level=info msg="StartContainer for \"5945fe5c1651dfbdda5872ff0ee41805836e878b3028c9df12b4304a17b5aa44\"" Apr 30 01:21:16.354822 systemd[1]: Started cri-containerd-34ca3723cce97b7019472c5ce06b58bba4c2320b86a9f61541ffd753cc71e8e2.scope - libcontainer container 34ca3723cce97b7019472c5ce06b58bba4c2320b86a9f61541ffd753cc71e8e2. Apr 30 01:21:16.377957 systemd[1]: Started cri-containerd-5945fe5c1651dfbdda5872ff0ee41805836e878b3028c9df12b4304a17b5aa44.scope - libcontainer container 5945fe5c1651dfbdda5872ff0ee41805836e878b3028c9df12b4304a17b5aa44. Apr 30 01:21:16.411876 containerd[1492]: time="2025-04-30T01:21:16.411645406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-vtbfr,Uid:cfb89852-9dcb-470d-bef1-72fdd1c8494a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"34ca3723cce97b7019472c5ce06b58bba4c2320b86a9f61541ffd753cc71e8e2\"" Apr 30 01:21:16.419926 containerd[1492]: time="2025-04-30T01:21:16.418921863Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" Apr 30 01:21:16.424938 containerd[1492]: time="2025-04-30T01:21:16.424486587Z" level=info msg="StartContainer for \"5945fe5c1651dfbdda5872ff0ee41805836e878b3028c9df12b4304a17b5aa44\" returns successfully" Apr 30 01:21:18.612124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2755570408.mount: Deactivated successfully. 
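
The mount unit "var-lib-containerd-tmpmounts-containerd\x2dmount2755570408.mount" deactivated above illustrates systemd's path escaping: '/' separators become '-', and a literal '-' inside a path component is escaped as \x2d so it cannot be mistaken for a separator. An approximate stdlib sketch of that encoding; systemdEscapePath is illustrative and skips some corner cases of the real systemd-escape rules:

    package main

    import (
        "fmt"
        "strings"
    )

    // systemdEscapePath approximates `systemd-escape --path`: trim slashes, turn
    // '/' into '-', and hex-escape '-' (and other unsafe bytes) as \xNN.
    func systemdEscapePath(p string) string {
        p = strings.Trim(p, "/")
        var b strings.Builder
        for i, c := range []byte(p) {
            switch {
            case c == '/':
                b.WriteByte('-')
            case c == '-' || (c == '.' && i == 0):
                fmt.Fprintf(&b, `\x%02x`, c)
            case c >= 'a' && c <= 'z' || c >= 'A' && c <= 'Z' ||
                c >= '0' && c <= '9' || c == '_' || c == '.':
                b.WriteByte(c)
            default:
                fmt.Fprintf(&b, `\x%02x`, c)
            }
        }
        return b.String()
    }

    func main() {
        fmt.Println(systemdEscapePath("/var/lib/containerd/tmpmounts/containerd-mount2755570408") + ".mount")
        // var-lib-containerd-tmpmounts-containerd\x2dmount2755570408.mount
    }
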
Apr 30 01:21:18.987807 containerd[1492]: time="2025-04-30T01:21:18.987039529Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:18.989150 containerd[1492]: time="2025-04-30T01:21:18.989078029Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" Apr 30 01:21:18.990607 containerd[1492]: time="2025-04-30T01:21:18.990532883Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:18.994975 containerd[1492]: time="2025-04-30T01:21:18.994900885Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:18.996030 containerd[1492]: time="2025-04-30T01:21:18.995967896Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 2.576344668s" Apr 30 01:21:18.996030 containerd[1492]: time="2025-04-30T01:21:18.996008216Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" Apr 30 01:21:18.999299 containerd[1492]: time="2025-04-30T01:21:18.999234848Z" level=info msg="CreateContainer within sandbox \"34ca3723cce97b7019472c5ce06b58bba4c2320b86a9f61541ffd753cc71e8e2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 30 01:21:19.019172 containerd[1492]: time="2025-04-30T01:21:19.019126057Z" level=info msg="CreateContainer within sandbox \"34ca3723cce97b7019472c5ce06b58bba4c2320b86a9f61541ffd753cc71e8e2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"016bd2b6f28549fb020c5a94d7d58787260ecf750f9704e7ac00acdb6c729b72\"" Apr 30 01:21:19.019897 containerd[1492]: time="2025-04-30T01:21:19.019856305Z" level=info msg="StartContainer for \"016bd2b6f28549fb020c5a94d7d58787260ecf750f9704e7ac00acdb6c729b72\"" Apr 30 01:21:19.055048 systemd[1]: Started cri-containerd-016bd2b6f28549fb020c5a94d7d58787260ecf750f9704e7ac00acdb6c729b72.scope - libcontainer container 016bd2b6f28549fb020c5a94d7d58787260ecf750f9704e7ac00acdb6c729b72. 
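
The Pulled entry above reports the tigera-operator image at 19,319,079 bytes fetched in 2.576344668s, which works out to roughly 7.2 MiB/s from quay.io. The arithmetic, using the figures from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const imageBytes = 19319079.0          // size reported by the Pulled entry
        dur := 2576344668 * time.Nanosecond    // "in 2.576344668s"
        fmt.Printf("~%.1f MiB/s\n", imageBytes/dur.Seconds()/(1<<20)) // ~7.2 MiB/s
    }
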
Apr 30 01:21:19.086261 containerd[1492]: time="2025-04-30T01:21:19.086208728Z" level=info msg="StartContainer for \"016bd2b6f28549fb020c5a94d7d58787260ecf750f9704e7ac00acdb6c729b72\" returns successfully" Apr 30 01:21:19.431352 kubelet[2758]: I0430 01:21:19.431176 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-95m87" podStartSLOduration=4.431157263 podStartE2EDuration="4.431157263s" podCreationTimestamp="2025-04-30 01:21:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 01:21:17.421734835 +0000 UTC m=+15.226976835" watchObservedRunningTime="2025-04-30 01:21:19.431157263 +0000 UTC m=+17.236399223" Apr 30 01:21:19.431352 kubelet[2758]: I0430 01:21:19.431316 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-vtbfr" podStartSLOduration=1.85064596 podStartE2EDuration="4.431310024s" podCreationTimestamp="2025-04-30 01:21:15 +0000 UTC" firstStartedPulling="2025-04-30 01:21:16.416651405 +0000 UTC m=+14.221893325" lastFinishedPulling="2025-04-30 01:21:18.997315429 +0000 UTC m=+16.802557389" observedRunningTime="2025-04-30 01:21:19.431120102 +0000 UTC m=+17.236362182" watchObservedRunningTime="2025-04-30 01:21:19.431310024 +0000 UTC m=+17.236552024" Apr 30 01:21:23.439182 kubelet[2758]: I0430 01:21:23.439056 2758 topology_manager.go:215] "Topology Admit Handler" podUID="1c889d4e-22da-4488-8c67-63b0d69dfb05" podNamespace="calico-system" podName="calico-typha-6859495f98-h9wfg" Apr 30 01:21:23.448888 systemd[1]: Created slice kubepods-besteffort-pod1c889d4e_22da_4488_8c67_63b0d69dfb05.slice - libcontainer container kubepods-besteffort-pod1c889d4e_22da_4488_8c67_63b0d69dfb05.slice. Apr 30 01:21:23.452814 kubelet[2758]: I0430 01:21:23.452777 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbbjz\" (UniqueName: \"kubernetes.io/projected/1c889d4e-22da-4488-8c67-63b0d69dfb05-kube-api-access-wbbjz\") pod \"calico-typha-6859495f98-h9wfg\" (UID: \"1c889d4e-22da-4488-8c67-63b0d69dfb05\") " pod="calico-system/calico-typha-6859495f98-h9wfg" Apr 30 01:21:23.452814 kubelet[2758]: I0430 01:21:23.452817 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c889d4e-22da-4488-8c67-63b0d69dfb05-tigera-ca-bundle\") pod \"calico-typha-6859495f98-h9wfg\" (UID: \"1c889d4e-22da-4488-8c67-63b0d69dfb05\") " pod="calico-system/calico-typha-6859495f98-h9wfg" Apr 30 01:21:23.452964 kubelet[2758]: I0430 01:21:23.452838 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1c889d4e-22da-4488-8c67-63b0d69dfb05-typha-certs\") pod \"calico-typha-6859495f98-h9wfg\" (UID: \"1c889d4e-22da-4488-8c67-63b0d69dfb05\") " pod="calico-system/calico-typha-6859495f98-h9wfg" Apr 30 01:21:23.588752 kubelet[2758]: I0430 01:21:23.588444 2758 topology_manager.go:215] "Topology Admit Handler" podUID="2b486429-160a-4f33-b820-3696fb09edcc" podNamespace="calico-system" podName="calico-node-699nh" Apr 30 01:21:23.600544 systemd[1]: Created slice kubepods-besteffort-pod2b486429_160a_4f33_b820_3696fb09edcc.slice - libcontainer container kubepods-besteffort-pod2b486429_160a_4f33_b820_3696fb09edcc.slice. 
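
The two startup-latency entries above make the SLO metric's definition visible: for tigera-operator, podStartE2EDuration is 4.431310024s but podStartSLOduration is only 1.85064596s, because the SLO figure excludes the image-pull window bounded by firstStartedPulling and lastFinishedPulling (for kube-proxy, whose image needed no pull, the two durations are equal). Reproducing that subtraction from the logged timestamps:

    package main

    import (
        "fmt"
        "time"
    )

    const layout = "2006-01-02 15:04:05.000000000 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        // Timestamps copied from the tigera-operator entry in the log above.
        firstPull := mustParse("2025-04-30 01:21:16.416651405 +0000 UTC")
        lastPull := mustParse("2025-04-30 01:21:18.997315429 +0000 UTC")
        e2e := 4431310024 * time.Nanosecond // podStartE2EDuration

        pullWindow := lastPull.Sub(firstPull)
        slo := e2e - pullWindow
        fmt.Printf("pull window %v, SLO duration %v\n", pullWindow, slo)
        // pull window 2.580664024s, SLO duration 1.850646s: the logged
        // podStartSLOduration of 1.85064596s to within sub-microsecond rounding.
    }
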
Apr 30 01:21:23.656151 kubelet[2758]: I0430 01:21:23.655815 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b486429-160a-4f33-b820-3696fb09edcc-tigera-ca-bundle\") pod \"calico-node-699nh\" (UID: \"2b486429-160a-4f33-b820-3696fb09edcc\") " pod="calico-system/calico-node-699nh" Apr 30 01:21:23.656151 kubelet[2758]: I0430 01:21:23.655878 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2b486429-160a-4f33-b820-3696fb09edcc-cni-log-dir\") pod \"calico-node-699nh\" (UID: \"2b486429-160a-4f33-b820-3696fb09edcc\") " pod="calico-system/calico-node-699nh" Apr 30 01:21:23.656151 kubelet[2758]: I0430 01:21:23.655943 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2b486429-160a-4f33-b820-3696fb09edcc-policysync\") pod \"calico-node-699nh\" (UID: \"2b486429-160a-4f33-b820-3696fb09edcc\") " pod="calico-system/calico-node-699nh" Apr 30 01:21:23.656151 kubelet[2758]: I0430 01:21:23.655963 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2b486429-160a-4f33-b820-3696fb09edcc-cni-bin-dir\") pod \"calico-node-699nh\" (UID: \"2b486429-160a-4f33-b820-3696fb09edcc\") " pod="calico-system/calico-node-699nh" Apr 30 01:21:23.656151 kubelet[2758]: I0430 01:21:23.655992 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2b486429-160a-4f33-b820-3696fb09edcc-cni-net-dir\") pod \"calico-node-699nh\" (UID: \"2b486429-160a-4f33-b820-3696fb09edcc\") " pod="calico-system/calico-node-699nh" Apr 30 01:21:23.656439 kubelet[2758]: I0430 01:21:23.656054 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x72t\" (UniqueName: \"kubernetes.io/projected/2b486429-160a-4f33-b820-3696fb09edcc-kube-api-access-6x72t\") pod \"calico-node-699nh\" (UID: \"2b486429-160a-4f33-b820-3696fb09edcc\") " pod="calico-system/calico-node-699nh" Apr 30 01:21:23.656439 kubelet[2758]: I0430 01:21:23.656086 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b486429-160a-4f33-b820-3696fb09edcc-lib-modules\") pod \"calico-node-699nh\" (UID: \"2b486429-160a-4f33-b820-3696fb09edcc\") " pod="calico-system/calico-node-699nh" Apr 30 01:21:23.656439 kubelet[2758]: I0430 01:21:23.656104 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b486429-160a-4f33-b820-3696fb09edcc-xtables-lock\") pod \"calico-node-699nh\" (UID: \"2b486429-160a-4f33-b820-3696fb09edcc\") " pod="calico-system/calico-node-699nh" Apr 30 01:21:23.656439 kubelet[2758]: I0430 01:21:23.656120 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2b486429-160a-4f33-b820-3696fb09edcc-node-certs\") pod \"calico-node-699nh\" (UID: \"2b486429-160a-4f33-b820-3696fb09edcc\") " pod="calico-system/calico-node-699nh" Apr 30 01:21:23.656896 kubelet[2758]: I0430 01:21:23.656633 2758 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2b486429-160a-4f33-b820-3696fb09edcc-flexvol-driver-host\") pod \"calico-node-699nh\" (UID: \"2b486429-160a-4f33-b820-3696fb09edcc\") " pod="calico-system/calico-node-699nh" Apr 30 01:21:23.656896 kubelet[2758]: I0430 01:21:23.656793 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2b486429-160a-4f33-b820-3696fb09edcc-var-run-calico\") pod \"calico-node-699nh\" (UID: \"2b486429-160a-4f33-b820-3696fb09edcc\") " pod="calico-system/calico-node-699nh" Apr 30 01:21:23.656896 kubelet[2758]: I0430 01:21:23.656815 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2b486429-160a-4f33-b820-3696fb09edcc-var-lib-calico\") pod \"calico-node-699nh\" (UID: \"2b486429-160a-4f33-b820-3696fb09edcc\") " pod="calico-system/calico-node-699nh" Apr 30 01:21:23.721557 kubelet[2758]: I0430 01:21:23.719917 2758 topology_manager.go:215] "Topology Admit Handler" podUID="c03937bb-3188-4349-9e30-94ddeb810bb2" podNamespace="calico-system" podName="csi-node-driver-7snxl" Apr 30 01:21:23.721557 kubelet[2758]: E0430 01:21:23.720202 2758 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7snxl" podUID="c03937bb-3188-4349-9e30-94ddeb810bb2" Apr 30 01:21:23.756037 containerd[1492]: time="2025-04-30T01:21:23.755917064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6859495f98-h9wfg,Uid:1c889d4e-22da-4488-8c67-63b0d69dfb05,Namespace:calico-system,Attempt:0,}" Apr 30 01:21:23.760254 kubelet[2758]: I0430 01:21:23.758739 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c03937bb-3188-4349-9e30-94ddeb810bb2-kubelet-dir\") pod \"csi-node-driver-7snxl\" (UID: \"c03937bb-3188-4349-9e30-94ddeb810bb2\") " pod="calico-system/csi-node-driver-7snxl" Apr 30 01:21:23.760254 kubelet[2758]: I0430 01:21:23.758810 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7rdx\" (UniqueName: \"kubernetes.io/projected/c03937bb-3188-4349-9e30-94ddeb810bb2-kube-api-access-n7rdx\") pod \"csi-node-driver-7snxl\" (UID: \"c03937bb-3188-4349-9e30-94ddeb810bb2\") " pod="calico-system/csi-node-driver-7snxl" Apr 30 01:21:23.760254 kubelet[2758]: I0430 01:21:23.758890 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c03937bb-3188-4349-9e30-94ddeb810bb2-socket-dir\") pod \"csi-node-driver-7snxl\" (UID: \"c03937bb-3188-4349-9e30-94ddeb810bb2\") " pod="calico-system/csi-node-driver-7snxl" Apr 30 01:21:23.760254 kubelet[2758]: I0430 01:21:23.758946 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c03937bb-3188-4349-9e30-94ddeb810bb2-varrun\") pod \"csi-node-driver-7snxl\" (UID: \"c03937bb-3188-4349-9e30-94ddeb810bb2\") " pod="calico-system/csi-node-driver-7snxl" Apr 30 01:21:23.760254 kubelet[2758]: I0430 01:21:23.758978 
2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c03937bb-3188-4349-9e30-94ddeb810bb2-registration-dir\") pod \"csi-node-driver-7snxl\" (UID: \"c03937bb-3188-4349-9e30-94ddeb810bb2\") " pod="calico-system/csi-node-driver-7snxl" Apr 30 01:21:23.766058 kubelet[2758]: E0430 01:21:23.766023 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:23.766058 kubelet[2758]: W0430 01:21:23.766051 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:23.766210 kubelet[2758]: E0430 01:21:23.766077 2758 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:23.771064 kubelet[2758]: E0430 01:21:23.770961 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:23.771064 kubelet[2758]: W0430 01:21:23.770988 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:23.771064 kubelet[2758]: E0430 01:21:23.771009 2758 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:23.784769 kubelet[2758]: E0430 01:21:23.784136 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:23.784769 kubelet[2758]: W0430 01:21:23.784163 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:23.784769 kubelet[2758]: E0430 01:21:23.784196 2758 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:23.797853 containerd[1492]: time="2025-04-30T01:21:23.797734205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:21:23.797853 containerd[1492]: time="2025-04-30T01:21:23.797802806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:21:23.797853 containerd[1492]: time="2025-04-30T01:21:23.797813966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:23.798250 containerd[1492]: time="2025-04-30T01:21:23.797901607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:23.830677 systemd[1]: Started cri-containerd-89efbe2e49f01e06ff3e816907305d9f8053e0287920c8e16adebe174ce605e7.scope - libcontainer container 89efbe2e49f01e06ff3e816907305d9f8053e0287920c8e16adebe174ce605e7. 
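
The driver-call failures that begin above (and repeat below) all come from one probe: the kubelet execs the FlexVolume binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init argument and unmarshals its stdout, but Calico has not installed that binary yet, so the output is the empty string and JSON decoding fails with exactly "unexpected end of JSON input". A minimal reproduction; driverStatus mirrors the status object a FlexVolume driver is expected to print:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // driverStatus is the shape a FlexVolume driver prints for "init",
    // e.g. {"status":"Success","capabilities":{"attach":false}}.
    type driverStatus struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities"`
    }

    func main() {
        var st driverStatus

        // The uds binary does not exist yet, so the "driver call" yields no output:
        err := json.Unmarshal([]byte(""), &st)
        fmt.Println(err) // unexpected end of JSON input

        // Once the driver is installed, a well-formed reply parses cleanly:
        ok := []byte(`{"status":"Success","capabilities":{"attach":false}}`)
        if err := json.Unmarshal(ok, &st); err == nil {
            fmt.Println("driver init:", st.Status)
        }
    }
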
Apr 30 01:21:23.859882 kubelet[2758]: E0430 01:21:23.859807 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:23.859882 kubelet[2758]: W0430 01:21:23.859831 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:23.859882 kubelet[2758]: E0430 01:21:23.859852 2758 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:23.860533 kubelet[2758]: E0430 01:21:23.860505 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:23.860533 kubelet[2758]: W0430 01:21:23.860522 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:23.860533 kubelet[2758]: E0430 01:21:23.860539 2758 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:23.860976 kubelet[2758]: E0430 01:21:23.860753 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:23.860976 kubelet[2758]: W0430 01:21:23.860762 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:23.860976 kubelet[2758]: E0430 01:21:23.860779 2758 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:23.861619 kubelet[2758]: E0430 01:21:23.861471 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:23.861619 kubelet[2758]: W0430 01:21:23.861491 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:23.861619 kubelet[2758]: E0430 01:21:23.861537 2758 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:23.862077 kubelet[2758]: E0430 01:21:23.862063 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:23.862240 kubelet[2758]: W0430 01:21:23.862133 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:23.862240 kubelet[2758]: E0430 01:21:23.862149 2758 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:23.862502 kubelet[2758]: E0430 01:21:23.862472 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:23.862502 kubelet[2758]: W0430 01:21:23.862485 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:23.862867 kubelet[2758]: E0430 01:21:23.862799 2758 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:23.863068 kubelet[2758]: E0430 01:21:23.863003 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:23.863068 kubelet[2758]: W0430 01:21:23.863014 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:23.863068 kubelet[2758]: E0430 01:21:23.863051 2758 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:23.863420 kubelet[2758]: E0430 01:21:23.863314 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:23.863420 kubelet[2758]: W0430 01:21:23.863339 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:23.863512 kubelet[2758]: E0430 01:21:23.863421 2758 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:23.863821 kubelet[2758]: E0430 01:21:23.863792 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:23.864115 kubelet[2758]: W0430 01:21:23.863968 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:23.864115 kubelet[2758]: E0430 01:21:23.864015 2758 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:23.864525 kubelet[2758]: E0430 01:21:23.864392 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:23.864525 kubelet[2758]: W0430 01:21:23.864409 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:23.864525 kubelet[2758]: E0430 01:21:23.864465 2758 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:23.864799 kubelet[2758]: E0430 01:21:23.864763 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:23.864799 kubelet[2758]: W0430 01:21:23.864777 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:23.864989 kubelet[2758]: E0430 01:21:23.864965 2758 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:23.865198 kubelet[2758]: E0430 01:21:23.865186 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:23.865317 kubelet[2758]: W0430 01:21:23.865267 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:23.865317 kubelet[2758]: E0430 01:21:23.865300 2758 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:23.865860 kubelet[2758]: E0430 01:21:23.865738 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:23.865860 kubelet[2758]: W0430 01:21:23.865757 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:23.865860 kubelet[2758]: E0430 01:21:23.865790 2758 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:23.866103 kubelet[2758]: E0430 01:21:23.866038 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:23.866103 kubelet[2758]: W0430 01:21:23.866050 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:23.866103 kubelet[2758]: E0430 01:21:23.866083 2758 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:23.866859 kubelet[2758]: E0430 01:21:23.866763 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:23.866859 kubelet[2758]: W0430 01:21:23.866780 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:23.866859 kubelet[2758]: E0430 01:21:23.866808 2758 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 30 01:21:23.867148 kubelet[2758]: E0430 01:21:23.867057 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 01:21:23.867148 kubelet[2758]: W0430 01:21:23.867072 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 01:21:23.867148 kubelet[2758]: E0430 01:21:23.867102 2758 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the three kubelet entries above repeat verbatim, with fresh timestamps, through Apr 30 01:21:23.887156]
Apr 30 01:21:23.907646 containerd[1492]: time="2025-04-30T01:21:23.907611572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-699nh,Uid:2b486429-160a-4f33-b820-3696fb09edcc,Namespace:calico-system,Attempt:0,}"
Apr 30 01:21:23.910116 containerd[1492]: time="2025-04-30T01:21:23.910036526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6859495f98-h9wfg,Uid:1c889d4e-22da-4488-8c67-63b0d69dfb05,Namespace:calico-system,Attempt:0,} returns sandbox id \"89efbe2e49f01e06ff3e816907305d9f8053e0287920c8e16adebe174ce605e7\""
Apr 30 01:21:23.915030 containerd[1492]: time="2025-04-30T01:21:23.914916074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\""
Apr 30 01:21:23.947968 containerd[1492]: time="2025-04-30T01:21:23.947293804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 01:21:23.947968 containerd[1492]: time="2025-04-30T01:21:23.947356605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 01:21:23.947968 containerd[1492]: time="2025-04-30T01:21:23.947387885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 01:21:23.947968 containerd[1492]: time="2025-04-30T01:21:23.947480806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 01:21:23.966002 systemd[1]: Started cri-containerd-ba676cfeed53e30a1602ddd346ef1722e1bd763287f786bd968fb9d2c1bb97b9.scope - libcontainer container ba676cfeed53e30a1602ddd346ef1722e1bd763287f786bd968fb9d2c1bb97b9.
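The kubelet bursts above are its FlexVolume dynamic probe failing: on each pass it execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and JSON-decodes the process's stdout; the binary is not installed yet (Calico's flexvol-driver init container copies it in later in this log), so stdout is empty and decoding fails. As a minimal sketch of the documented FlexVolume call convention — illustrative only, not the real Calico uds driver — a conforming driver would print something like:

```go
// flexvol_sketch.go — a minimal sketch (not Calico's actual uds driver) of the
// JSON status object the kubelet expects a FlexVolume driver to print on stdout.
package main

import (
	"encoding/json"
	"os"
)

// driverStatus mirrors the documented FlexVolume response object.
type driverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"` // reported by "init"
}

func main() {
	out := json.NewEncoder(os.Stdout)
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// An empty stdout here is exactly what produces the kubelet's
		// "unexpected end of JSON input" errors in the log above.
		out.Encode(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
		return
	}
	out.Encode(driverStatus{Status: "Not supported"})
}
```

Once a binary at that path prints such a status object, driver-call.go has something to unmarshal and the probe errors stop — consistent with the tail of this log, where the bursts cease after the flexvol-driver container runs.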
Apr 30 01:21:24.002440 containerd[1492]: time="2025-04-30T01:21:24.002207768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-699nh,Uid:2b486429-160a-4f33-b820-3696fb09edcc,Namespace:calico-system,Attempt:0,} returns sandbox id \"ba676cfeed53e30a1602ddd346ef1722e1bd763287f786bd968fb9d2c1bb97b9\""
Apr 30 01:21:25.345647 kubelet[2758]: E0430 01:21:25.345222 2758 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7snxl" podUID="c03937bb-3188-4349-9e30-94ddeb810bb2"
Apr 30 01:21:26.191048 containerd[1492]: time="2025-04-30T01:21:26.190980428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:21:26.191948 containerd[1492]: time="2025-04-30T01:21:26.191902203Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571"
Apr 30 01:21:26.193737 containerd[1492]: time="2025-04-30T01:21:26.192995221Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:21:26.196506 containerd[1492]: time="2025-04-30T01:21:26.195656944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:21:26.196506 containerd[1492]: time="2025-04-30T01:21:26.196321794Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 2.28136904s"
Apr 30 01:21:26.196506 containerd[1492]: time="2025-04-30T01:21:26.196350115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\""
Apr 30 01:21:26.199863 containerd[1492]: time="2025-04-30T01:21:26.199831491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\""
Apr 30 01:21:26.218507 containerd[1492]: time="2025-04-30T01:21:26.218451551Z" level=info msg="CreateContainer within sandbox \"89efbe2e49f01e06ff3e816907305d9f8053e0287920c8e16adebe174ce605e7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Apr 30 01:21:26.240783 containerd[1492]: time="2025-04-30T01:21:26.240731070Z" level=info msg="CreateContainer within sandbox \"89efbe2e49f01e06ff3e816907305d9f8053e0287920c8e16adebe174ce605e7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a469501c593f361a4a1d2f653451c359bd49ba942822e158fde5be44f4a0dc4a\""
Apr 30 01:21:26.241644 containerd[1492]: time="2025-04-30T01:21:26.241580204Z" level=info msg="StartContainer for \"a469501c593f361a4a1d2f653451c359bd49ba942822e158fde5be44f4a0dc4a\""
Apr 30 01:21:26.275331 systemd[1]: Started cri-containerd-a469501c593f361a4a1d2f653451c359bd49ba942822e158fde5be44f4a0dc4a.scope - libcontainer container a469501c593f361a4a1d2f653451c359bd49ba942822e158fde5be44f4a0dc4a.
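The recurring "cni plugin not initialized" condition in the pod_workers entry above clears only once a network config exists under /etc/cni/net.d; in this log Calico's install-cni container writes its files there around 01:21:33. As a hedged illustration of the general shape containerd looks for, a Go sketch that emits a minimal CNI conflist — the cniVersion, network name, and plugin fields here are placeholder assumptions, not the exact file Calico generates:

```go
// conflist_sketch.go — prints an illustrative CNI network config list of the
// kind containerd scans for in /etc/cni/net.d. Field names follow the CNI
// spec; the concrete values are assumptions for illustration only.
package main

import (
	"encoding/json"
	"os"
)

func main() {
	conflist := map[string]any{
		"cniVersion": "0.3.1",           // assumed version
		"name":       "k8s-pod-network", // assumed network name
		"plugins": []map[string]any{
			{
				"type": "calico",
				"ipam": map[string]any{"type": "calico-ipam"},
				"kubernetes": map[string]any{
					// this path matches the fs change event seen later in the log
					"kubeconfig": "/etc/cni/net.d/calico-kubeconfig",
				},
			},
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	enc.Encode(conflist) // containerd reloads when files appear in /etc/cni/net.d
}
```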
Apr 30 01:21:26.327233 containerd[1492]: time="2025-04-30T01:21:26.327132222Z" level=info msg="StartContainer for \"a469501c593f361a4a1d2f653451c359bd49ba942822e158fde5be44f4a0dc4a\" returns successfully"
Apr 30 01:21:26.466655 kubelet[2758]: E0430 01:21:26.466526 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 01:21:26.466655 kubelet[2758]: W0430 01:21:26.466559 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 01:21:26.466655 kubelet[2758]: E0430 01:21:26.466602 2758 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the three kubelet entries above repeat verbatim, with fresh timestamps, through Apr 30 01:21:26.502854]
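The repeated error text is just encoding/json's standard complaint about empty input: driver-call.go execs the missing binary, receives output "", and tries to unmarshal it. A two-line reproduction of that exact message:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var v map[string]any
	err := json.Unmarshal([]byte(""), &v) // empty output from the failed driver exec
	fmt.Println(err)                      // prints: unexpected end of JSON input
}
```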
Apr 30 01:21:27.347207 kubelet[2758]: E0430 01:21:27.345790 2758 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7snxl" podUID="c03937bb-3188-4349-9e30-94ddeb810bb2"
Apr 30 01:21:27.455977 kubelet[2758]: I0430 01:21:27.455298 2758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 30 01:21:27.484224 kubelet[2758]: E0430 01:21:27.484176 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 01:21:27.484939 kubelet[2758]: W0430 01:21:27.484794 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 01:21:27.484939 kubelet[2758]: E0430 01:21:27.484838 2758 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the three kubelet entries above repeat verbatim, with fresh timestamps, through Apr 30 01:21:27.509429]
Apr 30 01:21:27.827152 containerd[1492]: time="2025-04-30T01:21:27.827075162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:21:27.828697 containerd[1492]: time="2025-04-30T01:21:27.828579507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903"
Apr 30 01:21:27.830334 containerd[1492]: time="2025-04-30T01:21:27.830266096Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:21:27.833086 containerd[1492]: time="2025-04-30T01:21:27.833036182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:21:27.834234 containerd[1492]: time="2025-04-30T01:21:27.833662713Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.633627658s"
Apr 30 01:21:27.834234 containerd[1492]: time="2025-04-30T01:21:27.833786235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\""
Apr 30 01:21:27.837226 containerd[1492]: time="2025-04-30T01:21:27.837173252Z" level=info msg="CreateContainer within sandbox \"ba676cfeed53e30a1602ddd346ef1722e1bd763287f786bd968fb9d2c1bb97b9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 30 01:21:27.861134 containerd[1492]: time="2025-04-30T01:21:27.861088413Z" level=info msg="CreateContainer within sandbox \"ba676cfeed53e30a1602ddd346ef1722e1bd763287f786bd968fb9d2c1bb97b9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ff2025fe006c40b3f337a1e817a529e8e5d2b2da2e831e0e0709f3247fd0c517\""
Apr 30 01:21:27.862967 containerd[1492]: time="2025-04-30T01:21:27.862844163Z" level=info msg="StartContainer for \"ff2025fe006c40b3f337a1e817a529e8e5d2b2da2e831e0e0709f3247fd0c517\""
Apr 30 01:21:27.901847 systemd[1]: run-containerd-runc-k8s.io-ff2025fe006c40b3f337a1e817a529e8e5d2b2da2e831e0e0709f3247fd0c517-runc.VPKNbl.mount: Deactivated successfully.
Apr 30 01:21:27.914064 systemd[1]: Started cri-containerd-ff2025fe006c40b3f337a1e817a529e8e5d2b2da2e831e0e0709f3247fd0c517.scope - libcontainer container ff2025fe006c40b3f337a1e817a529e8e5d2b2da2e831e0e0709f3247fd0c517.
Apr 30 01:21:27.947869 containerd[1492]: time="2025-04-30T01:21:27.947183780Z" level=info msg="StartContainer for \"ff2025fe006c40b3f337a1e817a529e8e5d2b2da2e831e0e0709f3247fd0c517\" returns successfully"
Apr 30 01:21:27.978106 systemd[1]: cri-containerd-ff2025fe006c40b3f337a1e817a529e8e5d2b2da2e831e0e0709f3247fd0c517.scope: Deactivated successfully.
Apr 30 01:21:28.011017 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff2025fe006c40b3f337a1e817a529e8e5d2b2da2e831e0e0709f3247fd0c517-rootfs.mount: Deactivated successfully.
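containerd's reported pull time for pod2daemon-flexvol ("in 1.633627658s") can be sanity-checked against the two log timestamps that bracket the pull: the PullImage request at 01:21:26.199831491Z earlier and the Pulled event at 01:21:27.833662713Z above. A small Go check; the roughly 0.2 ms surplus is log-emission latency on top of containerd's internal measurement:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the containerd entries above.
	start, _ := time.Parse(time.RFC3339Nano, "2025-04-30T01:21:26.199831491Z") // PullImage
	done, _ := time.Parse(time.RFC3339Nano, "2025-04-30T01:21:27.833662713Z")  // Pulled
	fmt.Println(done.Sub(start)) // 1.633831222s vs containerd's own 1.633627658s
}
```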
Apr 30 01:21:28.102694 containerd[1492]: time="2025-04-30T01:21:28.102542739Z" level=info msg="shim disconnected" id=ff2025fe006c40b3f337a1e817a529e8e5d2b2da2e831e0e0709f3247fd0c517 namespace=k8s.io
Apr 30 01:21:28.102694 containerd[1492]: time="2025-04-30T01:21:28.102608861Z" level=warning msg="cleaning up after shim disconnected" id=ff2025fe006c40b3f337a1e817a529e8e5d2b2da2e831e0e0709f3247fd0c517 namespace=k8s.io
Apr 30 01:21:28.102694 containerd[1492]: time="2025-04-30T01:21:28.102619821Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 01:21:28.462807 containerd[1492]: time="2025-04-30T01:21:28.462035782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\""
Apr 30 01:21:28.482186 kubelet[2758]: I0430 01:21:28.482076 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6859495f98-h9wfg" podStartSLOduration=3.197970089 podStartE2EDuration="5.482047052s" podCreationTimestamp="2025-04-30 01:21:23 +0000 UTC" firstStartedPulling="2025-04-30 01:21:23.914553149 +0000 UTC m=+21.719795109" lastFinishedPulling="2025-04-30 01:21:26.198630152 +0000 UTC m=+24.003872072" observedRunningTime="2025-04-30 01:21:26.472158199 +0000 UTC m=+24.277400159" watchObservedRunningTime="2025-04-30 01:21:28.482047052 +0000 UTC m=+26.287289012"
Apr 30 01:21:29.345516 kubelet[2758]: E0430 01:21:29.345414 2758 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7snxl" podUID="c03937bb-3188-4349-9e30-94ddeb810bb2"
Apr 30 01:21:30.931618 kubelet[2758]: I0430 01:21:30.930476 2758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 30 01:21:31.345771 kubelet[2758]: E0430 01:21:31.345606 2758 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7snxl" podUID="c03937bb-3188-4349-9e30-94ddeb810bb2"
Apr 30 01:21:33.036633 containerd[1492]: time="2025-04-30T01:21:33.036473095Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:21:33.037890 containerd[1492]: time="2025-04-30T01:21:33.037846603Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270"
Apr 30 01:21:33.038732 containerd[1492]: time="2025-04-30T01:21:33.038525377Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:21:33.041779 containerd[1492]: time="2025-04-30T01:21:33.041700682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:21:33.042418 containerd[1492]: time="2025-04-30T01:21:33.042381216Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 4.580305953s"
Apr 30 01:21:33.042418 containerd[1492]: time="2025-04-30T01:21:33.042417057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\""
Apr 30 01:21:33.047977 containerd[1492]: time="2025-04-30T01:21:33.047926570Z" level=info msg="CreateContainer within sandbox \"ba676cfeed53e30a1602ddd346ef1722e1bd763287f786bd968fb9d2c1bb97b9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 30 01:21:33.070133 containerd[1492]: time="2025-04-30T01:21:33.070043144Z" level=info msg="CreateContainer within sandbox \"ba676cfeed53e30a1602ddd346ef1722e1bd763287f786bd968fb9d2c1bb97b9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b8a737d8adc4d66999b52e305aa95b9e8a1ba03b1c604690fbe4d801d2c6e967\""
Apr 30 01:21:33.070884 containerd[1492]: time="2025-04-30T01:21:33.070804080Z" level=info msg="StartContainer for \"b8a737d8adc4d66999b52e305aa95b9e8a1ba03b1c604690fbe4d801d2c6e967\""
Apr 30 01:21:33.111018 systemd[1]: Started cri-containerd-b8a737d8adc4d66999b52e305aa95b9e8a1ba03b1c604690fbe4d801d2c6e967.scope - libcontainer container b8a737d8adc4d66999b52e305aa95b9e8a1ba03b1c604690fbe4d801d2c6e967.
Apr 30 01:21:33.147660 containerd[1492]: time="2025-04-30T01:21:33.147473494Z" level=info msg="StartContainer for \"b8a737d8adc4d66999b52e305aa95b9e8a1ba03b1c604690fbe4d801d2c6e967\" returns successfully"
Apr 30 01:21:33.345236 kubelet[2758]: E0430 01:21:33.345087 2758 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7snxl" podUID="c03937bb-3188-4349-9e30-94ddeb810bb2"
Apr 30 01:21:33.703162 containerd[1492]: time="2025-04-30T01:21:33.703114181Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 01:21:33.705987 systemd[1]: cri-containerd-b8a737d8adc4d66999b52e305aa95b9e8a1ba03b1c604690fbe4d801d2c6e967.scope: Deactivated successfully.
Apr 30 01:21:33.768453 kubelet[2758]: I0430 01:21:33.767027 2758 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 01:21:33.802357 containerd[1492]: time="2025-04-30T01:21:33.802071972Z" level=info msg="shim disconnected" id=b8a737d8adc4d66999b52e305aa95b9e8a1ba03b1c604690fbe4d801d2c6e967 namespace=k8s.io Apr 30 01:21:33.802357 containerd[1492]: time="2025-04-30T01:21:33.802127573Z" level=warning msg="cleaning up after shim disconnected" id=b8a737d8adc4d66999b52e305aa95b9e8a1ba03b1c604690fbe4d801d2c6e967 namespace=k8s.io Apr 30 01:21:33.802357 containerd[1492]: time="2025-04-30T01:21:33.802136173Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 01:21:33.825907 kubelet[2758]: I0430 01:21:33.824887 2758 topology_manager.go:215] "Topology Admit Handler" podUID="1d8395dc-833d-4a09-97bb-4b7ca67c4458" podNamespace="calico-apiserver" podName="calico-apiserver-7845cbd476-bf752" Apr 30 01:21:33.829410 kubelet[2758]: I0430 01:21:33.828674 2758 topology_manager.go:215] "Topology Admit Handler" podUID="989be597-1452-421c-833d-fc10aad2a4c3" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5hdsg" Apr 30 01:21:33.833473 kubelet[2758]: I0430 01:21:33.833199 2758 topology_manager.go:215] "Topology Admit Handler" podUID="94b1a086-40fc-41bb-8514-0b3e4bfe8cc0" podNamespace="calico-system" podName="calico-kube-controllers-65bb9bf8f8-mz854" Apr 30 01:21:33.835598 kubelet[2758]: I0430 01:21:33.835400 2758 topology_manager.go:215] "Topology Admit Handler" podUID="73a62f7b-09e4-4b19-a621-c45cfcdd6957" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kk96k" Apr 30 01:21:33.836042 kubelet[2758]: I0430 01:21:33.835987 2758 topology_manager.go:215] "Topology Admit Handler" podUID="c0c262a3-b472-411f-9366-fa54ff571684" podNamespace="calico-apiserver" podName="calico-apiserver-7845cbd476-mzk7k" Apr 30 01:21:33.844856 systemd[1]: Created slice kubepods-besteffort-pod1d8395dc_833d_4a09_97bb_4b7ca67c4458.slice - libcontainer container kubepods-besteffort-pod1d8395dc_833d_4a09_97bb_4b7ca67c4458.slice. Apr 30 01:21:33.856088 systemd[1]: Created slice kubepods-burstable-pod989be597_1452_421c_833d_fc10aad2a4c3.slice - libcontainer container kubepods-burstable-pod989be597_1452_421c_833d_fc10aad2a4c3.slice. Apr 30 01:21:33.873531 systemd[1]: Created slice kubepods-besteffort-pod94b1a086_40fc_41bb_8514_0b3e4bfe8cc0.slice - libcontainer container kubepods-besteffort-pod94b1a086_40fc_41bb_8514_0b3e4bfe8cc0.slice. Apr 30 01:21:33.883944 systemd[1]: Created slice kubepods-burstable-pod73a62f7b_09e4_4b19_a621_c45cfcdd6957.slice - libcontainer container kubepods-burstable-pod73a62f7b_09e4_4b19_a621_c45cfcdd6957.slice. Apr 30 01:21:33.895703 systemd[1]: Created slice kubepods-besteffort-podc0c262a3_b472_411f_9366_fa54ff571684.slice - libcontainer container kubepods-besteffort-podc0c262a3_b472_411f_9366_fa54ff571684.slice. 
Apr 30 01:21:33.943635 kubelet[2758]: I0430 01:21:33.943571 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1d8395dc-833d-4a09-97bb-4b7ca67c4458-calico-apiserver-certs\") pod \"calico-apiserver-7845cbd476-bf752\" (UID: \"1d8395dc-833d-4a09-97bb-4b7ca67c4458\") " pod="calico-apiserver/calico-apiserver-7845cbd476-bf752" Apr 30 01:21:33.943635 kubelet[2758]: I0430 01:21:33.943637 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73a62f7b-09e4-4b19-a621-c45cfcdd6957-config-volume\") pod \"coredns-7db6d8ff4d-kk96k\" (UID: \"73a62f7b-09e4-4b19-a621-c45cfcdd6957\") " pod="kube-system/coredns-7db6d8ff4d-kk96k" Apr 30 01:21:33.943844 kubelet[2758]: I0430 01:21:33.943668 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94b1a086-40fc-41bb-8514-0b3e4bfe8cc0-tigera-ca-bundle\") pod \"calico-kube-controllers-65bb9bf8f8-mz854\" (UID: \"94b1a086-40fc-41bb-8514-0b3e4bfe8cc0\") " pod="calico-system/calico-kube-controllers-65bb9bf8f8-mz854" Apr 30 01:21:33.943844 kubelet[2758]: I0430 01:21:33.943693 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-786qg\" (UniqueName: \"kubernetes.io/projected/1d8395dc-833d-4a09-97bb-4b7ca67c4458-kube-api-access-786qg\") pod \"calico-apiserver-7845cbd476-bf752\" (UID: \"1d8395dc-833d-4a09-97bb-4b7ca67c4458\") " pod="calico-apiserver/calico-apiserver-7845cbd476-bf752" Apr 30 01:21:33.943844 kubelet[2758]: I0430 01:21:33.943739 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/989be597-1452-421c-833d-fc10aad2a4c3-config-volume\") pod \"coredns-7db6d8ff4d-5hdsg\" (UID: \"989be597-1452-421c-833d-fc10aad2a4c3\") " pod="kube-system/coredns-7db6d8ff4d-5hdsg" Apr 30 01:21:33.943844 kubelet[2758]: I0430 01:21:33.943774 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsqss\" (UniqueName: \"kubernetes.io/projected/989be597-1452-421c-833d-fc10aad2a4c3-kube-api-access-xsqss\") pod \"coredns-7db6d8ff4d-5hdsg\" (UID: \"989be597-1452-421c-833d-fc10aad2a4c3\") " pod="kube-system/coredns-7db6d8ff4d-5hdsg" Apr 30 01:21:33.943844 kubelet[2758]: I0430 01:21:33.943801 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr5vq\" (UniqueName: \"kubernetes.io/projected/73a62f7b-09e4-4b19-a621-c45cfcdd6957-kube-api-access-qr5vq\") pod \"coredns-7db6d8ff4d-kk96k\" (UID: \"73a62f7b-09e4-4b19-a621-c45cfcdd6957\") " pod="kube-system/coredns-7db6d8ff4d-kk96k" Apr 30 01:21:33.943996 kubelet[2758]: I0430 01:21:33.943826 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c0c262a3-b472-411f-9366-fa54ff571684-calico-apiserver-certs\") pod \"calico-apiserver-7845cbd476-mzk7k\" (UID: \"c0c262a3-b472-411f-9366-fa54ff571684\") " pod="calico-apiserver/calico-apiserver-7845cbd476-mzk7k" Apr 30 01:21:33.943996 kubelet[2758]: I0430 01:21:33.943849 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8w9d\" 
(UniqueName: \"kubernetes.io/projected/c0c262a3-b472-411f-9366-fa54ff571684-kube-api-access-n8w9d\") pod \"calico-apiserver-7845cbd476-mzk7k\" (UID: \"c0c262a3-b472-411f-9366-fa54ff571684\") " pod="calico-apiserver/calico-apiserver-7845cbd476-mzk7k" Apr 30 01:21:33.943996 kubelet[2758]: I0430 01:21:33.943885 2758 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6pc4\" (UniqueName: \"kubernetes.io/projected/94b1a086-40fc-41bb-8514-0b3e4bfe8cc0-kube-api-access-r6pc4\") pod \"calico-kube-controllers-65bb9bf8f8-mz854\" (UID: \"94b1a086-40fc-41bb-8514-0b3e4bfe8cc0\") " pod="calico-system/calico-kube-controllers-65bb9bf8f8-mz854" Apr 30 01:21:34.068704 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8a737d8adc4d66999b52e305aa95b9e8a1ba03b1c604690fbe4d801d2c6e967-rootfs.mount: Deactivated successfully. Apr 30 01:21:34.151101 containerd[1492]: time="2025-04-30T01:21:34.151049740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7845cbd476-bf752,Uid:1d8395dc-833d-4a09-97bb-4b7ca67c4458,Namespace:calico-apiserver,Attempt:0,}" Apr 30 01:21:34.167775 containerd[1492]: time="2025-04-30T01:21:34.166465745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5hdsg,Uid:989be597-1452-421c-833d-fc10aad2a4c3,Namespace:kube-system,Attempt:0,}" Apr 30 01:21:34.190887 containerd[1492]: time="2025-04-30T01:21:34.190833499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65bb9bf8f8-mz854,Uid:94b1a086-40fc-41bb-8514-0b3e4bfe8cc0,Namespace:calico-system,Attempt:0,}" Apr 30 01:21:34.193893 containerd[1492]: time="2025-04-30T01:21:34.193651718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kk96k,Uid:73a62f7b-09e4-4b19-a621-c45cfcdd6957,Namespace:kube-system,Attempt:0,}" Apr 30 01:21:34.201406 containerd[1492]: time="2025-04-30T01:21:34.200846790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7845cbd476-mzk7k,Uid:c0c262a3-b472-411f-9366-fa54ff571684,Namespace:calico-apiserver,Attempt:0,}" Apr 30 01:21:34.299742 containerd[1492]: time="2025-04-30T01:21:34.299676114Z" level=error msg="Failed to destroy network for sandbox \"d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:34.300289 containerd[1492]: time="2025-04-30T01:21:34.300259606Z" level=error msg="encountered an error cleaning up failed sandbox \"d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:34.300444 containerd[1492]: time="2025-04-30T01:21:34.300418569Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7845cbd476-bf752,Uid:1d8395dc-833d-4a09-97bb-4b7ca67c4458,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:34.301989 
kubelet[2758]: E0430 01:21:34.301126 2758 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:34.301989 kubelet[2758]: E0430 01:21:34.301591 2758 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7845cbd476-bf752" Apr 30 01:21:34.301989 kubelet[2758]: E0430 01:21:34.301614 2758 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7845cbd476-bf752" Apr 30 01:21:34.302821 kubelet[2758]: E0430 01:21:34.301667 2758 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7845cbd476-bf752_calico-apiserver(1d8395dc-833d-4a09-97bb-4b7ca67c4458)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7845cbd476-bf752_calico-apiserver(1d8395dc-833d-4a09-97bb-4b7ca67c4458)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7845cbd476-bf752" podUID="1d8395dc-833d-4a09-97bb-4b7ca67c4458" Apr 30 01:21:34.363153 containerd[1492]: time="2025-04-30T01:21:34.362302634Z" level=error msg="Failed to destroy network for sandbox \"e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:34.363153 containerd[1492]: time="2025-04-30T01:21:34.362963488Z" level=error msg="encountered an error cleaning up failed sandbox \"e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:34.363153 containerd[1492]: time="2025-04-30T01:21:34.363030009Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5hdsg,Uid:989be597-1452-421c-833d-fc10aad2a4c3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:34.363880 kubelet[2758]: E0430 01:21:34.363577 2758 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:34.363880 kubelet[2758]: E0430 01:21:34.363640 2758 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5hdsg" Apr 30 01:21:34.363880 kubelet[2758]: E0430 01:21:34.363658 2758 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5hdsg" Apr 30 01:21:34.365032 kubelet[2758]: E0430 01:21:34.364754 2758 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5hdsg_kube-system(989be597-1452-421c-833d-fc10aad2a4c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5hdsg_kube-system(989be597-1452-421c-833d-fc10aad2a4c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5hdsg" podUID="989be597-1452-421c-833d-fc10aad2a4c3" Apr 30 01:21:34.382414 containerd[1492]: time="2025-04-30T01:21:34.382355097Z" level=error msg="Failed to destroy network for sandbox \"eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:34.383068 containerd[1492]: time="2025-04-30T01:21:34.383024591Z" level=error msg="encountered an error cleaning up failed sandbox \"eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:34.383224 containerd[1492]: time="2025-04-30T01:21:34.383198995Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kk96k,Uid:73a62f7b-09e4-4b19-a621-c45cfcdd6957,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:34.383686 kubelet[2758]: E0430 01:21:34.383650 2758 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:34.383833 kubelet[2758]: E0430 01:21:34.383814 2758 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-kk96k" Apr 30 01:21:34.384012 kubelet[2758]: E0430 01:21:34.383925 2758 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-kk96k" Apr 30 01:21:34.384339 kubelet[2758]: E0430 01:21:34.384139 2758 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-kk96k_kube-system(73a62f7b-09e4-4b19-a621-c45cfcdd6957)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-kk96k_kube-system(73a62f7b-09e4-4b19-a621-c45cfcdd6957)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-kk96k" podUID="73a62f7b-09e4-4b19-a621-c45cfcdd6957" Apr 30 01:21:34.399070 containerd[1492]: time="2025-04-30T01:21:34.398981567Z" level=error msg="Failed to destroy network for sandbox \"73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:34.399397 containerd[1492]: time="2025-04-30T01:21:34.399365015Z" level=error msg="encountered an error cleaning up failed sandbox \"73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:34.399443 containerd[1492]: time="2025-04-30T01:21:34.399421417Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7845cbd476-mzk7k,Uid:c0c262a3-b472-411f-9366-fa54ff571684,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:34.400261 kubelet[2758]: E0430 01:21:34.399831 2758 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:34.400261 kubelet[2758]: E0430 01:21:34.399899 2758 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7845cbd476-mzk7k" Apr 30 01:21:34.400261 kubelet[2758]: E0430 01:21:34.399928 2758 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7845cbd476-mzk7k" Apr 30 01:21:34.400477 kubelet[2758]: E0430 01:21:34.399973 2758 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7845cbd476-mzk7k_calico-apiserver(c0c262a3-b472-411f-9366-fa54ff571684)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7845cbd476-mzk7k_calico-apiserver(c0c262a3-b472-411f-9366-fa54ff571684)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7845cbd476-mzk7k" podUID="c0c262a3-b472-411f-9366-fa54ff571684" Apr 30 01:21:34.407920 containerd[1492]: time="2025-04-30T01:21:34.407775833Z" level=error msg="Failed to destroy network for sandbox \"dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:34.409010 containerd[1492]: time="2025-04-30T01:21:34.408273363Z" level=error msg="encountered an error cleaning up failed sandbox \"dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:34.409010 containerd[1492]: time="2025-04-30T01:21:34.408330164Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-65bb9bf8f8-mz854,Uid:94b1a086-40fc-41bb-8514-0b3e4bfe8cc0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:34.409250 kubelet[2758]: E0430 01:21:34.408602 2758 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:34.409250 kubelet[2758]: E0430 01:21:34.408656 2758 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65bb9bf8f8-mz854" Apr 30 01:21:34.409250 kubelet[2758]: E0430 01:21:34.408677 2758 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65bb9bf8f8-mz854" Apr 30 01:21:34.409350 kubelet[2758]: E0430 01:21:34.408733 2758 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65bb9bf8f8-mz854_calico-system(94b1a086-40fc-41bb-8514-0b3e4bfe8cc0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65bb9bf8f8-mz854_calico-system(94b1a086-40fc-41bb-8514-0b3e4bfe8cc0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65bb9bf8f8-mz854" podUID="94b1a086-40fc-41bb-8514-0b3e4bfe8cc0" Apr 30 01:21:34.481400 kubelet[2758]: I0430 01:21:34.480846 2758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" Apr 30 01:21:34.483277 containerd[1492]: time="2025-04-30T01:21:34.483239384Z" level=info msg="StopPodSandbox for \"dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b\"" Apr 30 01:21:34.484142 containerd[1492]: time="2025-04-30T01:21:34.483788875Z" level=info msg="Ensure that sandbox dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b in task-service has been cleanup successfully" Apr 30 01:21:34.484617 kubelet[2758]: I0430 01:21:34.484592 2758 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" Apr 30 01:21:34.485643 containerd[1492]: time="2025-04-30T01:21:34.485529992Z" level=info msg="StopPodSandbox for \"e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67\"" Apr 30 01:21:34.486220 containerd[1492]: time="2025-04-30T01:21:34.485987962Z" level=info msg="Ensure that sandbox e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67 in task-service has been cleanup successfully" Apr 30 01:21:34.489018 kubelet[2758]: I0430 01:21:34.488057 2758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" Apr 30 01:21:34.489139 containerd[1492]: time="2025-04-30T01:21:34.488631538Z" level=info msg="StopPodSandbox for \"d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5\"" Apr 30 01:21:34.489139 containerd[1492]: time="2025-04-30T01:21:34.488842902Z" level=info msg="Ensure that sandbox d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5 in task-service has been cleanup successfully" Apr 30 01:21:34.501457 containerd[1492]: time="2025-04-30T01:21:34.501300685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 01:21:34.512906 kubelet[2758]: I0430 01:21:34.512839 2758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" Apr 30 01:21:34.514742 containerd[1492]: time="2025-04-30T01:21:34.514148836Z" level=info msg="StopPodSandbox for \"73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06\"" Apr 30 01:21:34.516221 containerd[1492]: time="2025-04-30T01:21:34.516160118Z" level=info msg="Ensure that sandbox 73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06 in task-service has been cleanup successfully" Apr 30 01:21:34.523902 kubelet[2758]: I0430 01:21:34.522996 2758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" Apr 30 01:21:34.524441 containerd[1492]: time="2025-04-30T01:21:34.524409932Z" level=info msg="StopPodSandbox for \"eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7\"" Apr 30 01:21:34.526738 containerd[1492]: time="2025-04-30T01:21:34.525465794Z" level=info msg="Ensure that sandbox eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7 in task-service has been cleanup successfully" Apr 30 01:21:34.578401 containerd[1492]: time="2025-04-30T01:21:34.577888859Z" level=error msg="StopPodSandbox for \"dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b\" failed" error="failed to destroy network for sandbox \"dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:34.578531 kubelet[2758]: E0430 01:21:34.578170 2758 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" Apr 30 
01:21:34.578531 kubelet[2758]: E0430 01:21:34.578229 2758 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b"} Apr 30 01:21:34.578531 kubelet[2758]: E0430 01:21:34.578292 2758 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"94b1a086-40fc-41bb-8514-0b3e4bfe8cc0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 01:21:34.578531 kubelet[2758]: E0430 01:21:34.578315 2758 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"94b1a086-40fc-41bb-8514-0b3e4bfe8cc0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65bb9bf8f8-mz854" podUID="94b1a086-40fc-41bb-8514-0b3e4bfe8cc0" Apr 30 01:21:34.589462 containerd[1492]: time="2025-04-30T01:21:34.588808570Z" level=error msg="StopPodSandbox for \"73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06\" failed" error="failed to destroy network for sandbox \"73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:34.589682 kubelet[2758]: E0430 01:21:34.589193 2758 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" Apr 30 01:21:34.589682 kubelet[2758]: E0430 01:21:34.589281 2758 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06"} Apr 30 01:21:34.589682 kubelet[2758]: E0430 01:21:34.589346 2758 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c0c262a3-b472-411f-9366-fa54ff571684\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 01:21:34.589682 kubelet[2758]: E0430 01:21:34.589392 2758 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c0c262a3-b472-411f-9366-fa54ff571684\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7845cbd476-mzk7k" podUID="c0c262a3-b472-411f-9366-fa54ff571684" Apr 30 01:21:34.591370 containerd[1492]: time="2025-04-30T01:21:34.591296942Z" level=error msg="StopPodSandbox for \"d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5\" failed" error="failed to destroy network for sandbox \"d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:34.594098 kubelet[2758]: E0430 01:21:34.593907 2758 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" Apr 30 01:21:34.594098 kubelet[2758]: E0430 01:21:34.593962 2758 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5"} Apr 30 01:21:34.594098 kubelet[2758]: E0430 01:21:34.593995 2758 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1d8395dc-833d-4a09-97bb-4b7ca67c4458\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 01:21:34.594098 kubelet[2758]: E0430 01:21:34.594054 2758 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1d8395dc-833d-4a09-97bb-4b7ca67c4458\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7845cbd476-bf752" podUID="1d8395dc-833d-4a09-97bb-4b7ca67c4458" Apr 30 01:21:34.600110 containerd[1492]: time="2025-04-30T01:21:34.600053567Z" level=error msg="StopPodSandbox for \"e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67\" failed" error="failed to destroy network for sandbox \"e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:34.600623 kubelet[2758]: E0430 01:21:34.600323 2758 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" Apr 30 01:21:34.600623 kubelet[2758]: E0430 01:21:34.600370 2758 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67"} Apr 30 01:21:34.600623 kubelet[2758]: E0430 01:21:34.600401 2758 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"989be597-1452-421c-833d-fc10aad2a4c3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 01:21:34.600623 kubelet[2758]: E0430 01:21:34.600429 2758 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"989be597-1452-421c-833d-fc10aad2a4c3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5hdsg" podUID="989be597-1452-421c-833d-fc10aad2a4c3" Apr 30 01:21:34.608091 containerd[1492]: time="2025-04-30T01:21:34.608016135Z" level=error msg="StopPodSandbox for \"eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7\" failed" error="failed to destroy network for sandbox \"eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:34.608864 kubelet[2758]: E0430 01:21:34.608458 2758 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" Apr 30 01:21:34.608864 kubelet[2758]: E0430 01:21:34.608553 2758 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7"} Apr 30 01:21:34.608864 kubelet[2758]: E0430 01:21:34.608616 2758 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"73a62f7b-09e4-4b19-a621-c45cfcdd6957\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Apr 30 01:21:34.608864 kubelet[2758]: E0430 01:21:34.608654 2758 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"73a62f7b-09e4-4b19-a621-c45cfcdd6957\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-kk96k" podUID="73a62f7b-09e4-4b19-a621-c45cfcdd6957" Apr 30 01:21:35.064948 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67-shm.mount: Deactivated successfully. Apr 30 01:21:35.065044 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5-shm.mount: Deactivated successfully. Apr 30 01:21:35.352574 systemd[1]: Created slice kubepods-besteffort-podc03937bb_3188_4349_9e30_94ddeb810bb2.slice - libcontainer container kubepods-besteffort-podc03937bb_3188_4349_9e30_94ddeb810bb2.slice. Apr 30 01:21:35.356293 containerd[1492]: time="2025-04-30T01:21:35.355856373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7snxl,Uid:c03937bb-3188-4349-9e30-94ddeb810bb2,Namespace:calico-system,Attempt:0,}" Apr 30 01:21:35.428407 containerd[1492]: time="2025-04-30T01:21:35.428324740Z" level=error msg="Failed to destroy network for sandbox \"1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:35.428949 containerd[1492]: time="2025-04-30T01:21:35.428856071Z" level=error msg="encountered an error cleaning up failed sandbox \"1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:35.430827 containerd[1492]: time="2025-04-30T01:21:35.428929473Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7snxl,Uid:c03937bb-3188-4349-9e30-94ddeb810bb2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:35.431155 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78-shm.mount: Deactivated successfully. 
Apr 30 01:21:35.431257 kubelet[2758]: E0430 01:21:35.431125 2758 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:35.431257 kubelet[2758]: E0430 01:21:35.431193 2758 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7snxl" Apr 30 01:21:35.431257 kubelet[2758]: E0430 01:21:35.431215 2758 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7snxl" Apr 30 01:21:35.431826 kubelet[2758]: E0430 01:21:35.431305 2758 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7snxl_calico-system(c03937bb-3188-4349-9e30-94ddeb810bb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7snxl_calico-system(c03937bb-3188-4349-9e30-94ddeb810bb2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7snxl" podUID="c03937bb-3188-4349-9e30-94ddeb810bb2" Apr 30 01:21:35.527042 kubelet[2758]: I0430 01:21:35.526967 2758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" Apr 30 01:21:35.529550 containerd[1492]: time="2025-04-30T01:21:35.528477545Z" level=info msg="StopPodSandbox for \"1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78\"" Apr 30 01:21:35.529550 containerd[1492]: time="2025-04-30T01:21:35.528739471Z" level=info msg="Ensure that sandbox 1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78 in task-service has been cleanup successfully" Apr 30 01:21:35.562271 containerd[1492]: time="2025-04-30T01:21:35.562212915Z" level=error msg="StopPodSandbox for \"1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78\" failed" error="failed to destroy network for sandbox \"1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:35.562614 kubelet[2758]: E0430 01:21:35.562531 2758 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" Apr 30 01:21:35.562675 kubelet[2758]: E0430 01:21:35.562608 2758 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78"} Apr 30 01:21:35.562675 kubelet[2758]: E0430 01:21:35.562659 2758 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c03937bb-3188-4349-9e30-94ddeb810bb2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 01:21:35.562777 kubelet[2758]: E0430 01:21:35.562684 2758 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c03937bb-3188-4349-9e30-94ddeb810bb2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7snxl" podUID="c03937bb-3188-4349-9e30-94ddeb810bb2" Apr 30 01:21:40.940947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2918540179.mount: Deactivated successfully. 
Apr 30 01:21:40.975702 containerd[1492]: time="2025-04-30T01:21:40.974583924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:40.976333 containerd[1492]: time="2025-04-30T01:21:40.976290005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" Apr 30 01:21:40.977413 containerd[1492]: time="2025-04-30T01:21:40.977363391Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:40.980747 containerd[1492]: time="2025-04-30T01:21:40.980660950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:40.982316 containerd[1492]: time="2025-04-30T01:21:40.981686535Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 6.480225487s" Apr 30 01:21:40.982316 containerd[1492]: time="2025-04-30T01:21:40.981742976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" Apr 30 01:21:41.002018 containerd[1492]: time="2025-04-30T01:21:41.001976424Z" level=info msg="CreateContainer within sandbox \"ba676cfeed53e30a1602ddd346ef1722e1bd763287f786bd968fb9d2c1bb97b9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 01:21:41.022646 containerd[1492]: time="2025-04-30T01:21:41.022563728Z" level=info msg="CreateContainer within sandbox \"ba676cfeed53e30a1602ddd346ef1722e1bd763287f786bd968fb9d2c1bb97b9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7c705c3698d18d9083f9241adc8656b0ee0688f893fbc4392f228ccbb04b2b6c\"" Apr 30 01:21:41.024804 containerd[1492]: time="2025-04-30T01:21:41.023900641Z" level=info msg="StartContainer for \"7c705c3698d18d9083f9241adc8656b0ee0688f893fbc4392f228ccbb04b2b6c\"" Apr 30 01:21:41.060017 systemd[1]: Started cri-containerd-7c705c3698d18d9083f9241adc8656b0ee0688f893fbc4392f228ccbb04b2b6c.scope - libcontainer container 7c705c3698d18d9083f9241adc8656b0ee0688f893fbc4392f228ccbb04b2b6c. Apr 30 01:21:41.097403 containerd[1492]: time="2025-04-30T01:21:41.097290200Z" level=info msg="StartContainer for \"7c705c3698d18d9083f9241adc8656b0ee0688f893fbc4392f228ccbb04b2b6c\" returns successfully" Apr 30 01:21:41.213999 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 01:21:41.214126 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Apr 30 01:21:41.578629 kubelet[2758]: I0430 01:21:41.577494 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-699nh" podStartSLOduration=1.6009965149999998 podStartE2EDuration="18.57747553s" podCreationTimestamp="2025-04-30 01:21:23 +0000 UTC" firstStartedPulling="2025-04-30 01:21:24.00644835 +0000 UTC m=+21.811690310" lastFinishedPulling="2025-04-30 01:21:40.982927365 +0000 UTC m=+38.788169325" observedRunningTime="2025-04-30 01:21:41.577165962 +0000 UTC m=+39.382407922" watchObservedRunningTime="2025-04-30 01:21:41.57747553 +0000 UTC m=+39.382717490" Apr 30 01:21:42.993753 kernel: bpftool[4029]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 01:21:43.225345 systemd-networkd[1376]: vxlan.calico: Link UP Apr 30 01:21:43.225354 systemd-networkd[1376]: vxlan.calico: Gained carrier Apr 30 01:21:44.520008 systemd-networkd[1376]: vxlan.calico: Gained IPv6LL Apr 30 01:21:45.348828 containerd[1492]: time="2025-04-30T01:21:45.347070221Z" level=info msg="StopPodSandbox for \"73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06\"" Apr 30 01:21:45.533846 containerd[1492]: 2025-04-30 01:21:45.447 [INFO][4123] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" Apr 30 01:21:45.533846 containerd[1492]: 2025-04-30 01:21:45.447 [INFO][4123] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" iface="eth0" netns="/var/run/netns/cni-76bc4080-ece0-89e3-20fd-684e8c9d00c4" Apr 30 01:21:45.533846 containerd[1492]: 2025-04-30 01:21:45.447 [INFO][4123] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" iface="eth0" netns="/var/run/netns/cni-76bc4080-ece0-89e3-20fd-684e8c9d00c4" Apr 30 01:21:45.533846 containerd[1492]: 2025-04-30 01:21:45.448 [INFO][4123] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" iface="eth0" netns="/var/run/netns/cni-76bc4080-ece0-89e3-20fd-684e8c9d00c4" Apr 30 01:21:45.533846 containerd[1492]: 2025-04-30 01:21:45.448 [INFO][4123] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" Apr 30 01:21:45.533846 containerd[1492]: 2025-04-30 01:21:45.448 [INFO][4123] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" Apr 30 01:21:45.533846 containerd[1492]: 2025-04-30 01:21:45.505 [INFO][4130] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" HandleID="k8s-pod-network.73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--mzk7k-eth0" Apr 30 01:21:45.533846 containerd[1492]: 2025-04-30 01:21:45.505 [INFO][4130] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:21:45.533846 containerd[1492]: 2025-04-30 01:21:45.505 [INFO][4130] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:21:45.533846 containerd[1492]: 2025-04-30 01:21:45.522 [WARNING][4130] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" HandleID="k8s-pod-network.73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--mzk7k-eth0" Apr 30 01:21:45.533846 containerd[1492]: 2025-04-30 01:21:45.522 [INFO][4130] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" HandleID="k8s-pod-network.73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--mzk7k-eth0" Apr 30 01:21:45.533846 containerd[1492]: 2025-04-30 01:21:45.527 [INFO][4130] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:21:45.533846 containerd[1492]: 2025-04-30 01:21:45.531 [INFO][4123] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" Apr 30 01:21:45.536150 containerd[1492]: time="2025-04-30T01:21:45.534482962Z" level=info msg="TearDown network for sandbox \"73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06\" successfully" Apr 30 01:21:45.536150 containerd[1492]: time="2025-04-30T01:21:45.534795530Z" level=info msg="StopPodSandbox for \"73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06\" returns successfully" Apr 30 01:21:45.537861 containerd[1492]: time="2025-04-30T01:21:45.537328436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7845cbd476-mzk7k,Uid:c0c262a3-b472-411f-9366-fa54ff571684,Namespace:calico-apiserver,Attempt:1,}" Apr 30 01:21:45.537483 systemd[1]: run-netns-cni\x2d76bc4080\x2dece0\x2d89e3\x2d20fd\x2d684e8c9d00c4.mount: Deactivated successfully. 
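The pod_startup_latency_tracker record a few lines up is internally consistent: podStartSLOduration is end-to-end startup minus time spent pulling images, i.e. 18.577s E2E minus the pull window (m=+38.788169 − m=+21.811690 = 16.976479s) leaves the reported 1.6009965s. The tracker deliberately excludes pull time so that registry and image-size latency does not count against the pod-startup SLO. The WireGuard and bpftool lines appear as calico-node probes kernel features during startup, alongside the vxlan.calico overlay device that then gains carrier and an IPv6 link-local address.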
Apr 30 01:21:45.711087 systemd-networkd[1376]: cali91d2100b650: Link UP Apr 30 01:21:45.711544 systemd-networkd[1376]: cali91d2100b650: Gained carrier Apr 30 01:21:45.732742 containerd[1492]: 2025-04-30 01:21:45.612 [INFO][4138] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--mzk7k-eth0 calico-apiserver-7845cbd476- calico-apiserver c0c262a3-b472-411f-9366-fa54ff571684 772 0 2025-04-30 01:21:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7845cbd476 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-a-62378e86a2 calico-apiserver-7845cbd476-mzk7k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali91d2100b650 [] []}} ContainerID="93f655677b8a64dba656ee85f2c125130a33dda4aa3c16bd28ecfdbb51728250" Namespace="calico-apiserver" Pod="calico-apiserver-7845cbd476-mzk7k" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--mzk7k-" Apr 30 01:21:45.732742 containerd[1492]: 2025-04-30 01:21:45.612 [INFO][4138] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="93f655677b8a64dba656ee85f2c125130a33dda4aa3c16bd28ecfdbb51728250" Namespace="calico-apiserver" Pod="calico-apiserver-7845cbd476-mzk7k" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--mzk7k-eth0" Apr 30 01:21:45.732742 containerd[1492]: 2025-04-30 01:21:45.643 [INFO][4150] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="93f655677b8a64dba656ee85f2c125130a33dda4aa3c16bd28ecfdbb51728250" HandleID="k8s-pod-network.93f655677b8a64dba656ee85f2c125130a33dda4aa3c16bd28ecfdbb51728250" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--mzk7k-eth0" Apr 30 01:21:45.732742 containerd[1492]: 2025-04-30 01:21:45.662 [INFO][4150] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="93f655677b8a64dba656ee85f2c125130a33dda4aa3c16bd28ecfdbb51728250" HandleID="k8s-pod-network.93f655677b8a64dba656ee85f2c125130a33dda4aa3c16bd28ecfdbb51728250" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--mzk7k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030b530), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-a-62378e86a2", "pod":"calico-apiserver-7845cbd476-mzk7k", "timestamp":"2025-04-30 01:21:45.643847222 +0000 UTC"}, Hostname:"ci-4081-3-3-a-62378e86a2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 01:21:45.732742 containerd[1492]: 2025-04-30 01:21:45.662 [INFO][4150] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:21:45.732742 containerd[1492]: 2025-04-30 01:21:45.662 [INFO][4150] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
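The cali91d2100b650 interface that just came up is the host side of the pod's veth pair, and its name is deterministic: Calico hashes the workload endpoint identity and keeps enough hex digits to fill the kernel's 15-character interface-name limit behind the "cali" prefix. A sketch of the scheme, with the caveat that the exact hash input varies across Calico versions, so this is illustrative rather than a reproduction of this particular name:

    package main

    import (
        "crypto/sha1"
        "encoding/hex"
        "fmt"
    )

    // vethNameForWorkload derives a stable host-side interface name:
    // "cali" + 11 hex digits = 15 chars, the kernel's IFNAMSIZ-1 limit.
    // The hash input shown (namespace.pod) is an assumption for this
    // sketch; Calico versions differ in exactly what they feed the hash.
    func vethNameForWorkload(namespace, pod string) string {
        sum := sha1.Sum([]byte(namespace + "." + pod))
        return "cali" + hex.EncodeToString(sum[:])[:11]
    }

    func main() {
        fmt.Println(vethNameForWorkload("calico-apiserver",
            "calico-apiserver-7845cbd476-mzk7k"))
    }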
Apr 30 01:21:45.732742 containerd[1492]: 2025-04-30 01:21:45.662 [INFO][4150] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-a-62378e86a2' Apr 30 01:21:45.732742 containerd[1492]: 2025-04-30 01:21:45.665 [INFO][4150] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.93f655677b8a64dba656ee85f2c125130a33dda4aa3c16bd28ecfdbb51728250" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:45.732742 containerd[1492]: 2025-04-30 01:21:45.672 [INFO][4150] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:45.732742 containerd[1492]: 2025-04-30 01:21:45.679 [INFO][4150] ipam/ipam.go 489: Trying affinity for 192.168.115.192/26 host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:45.732742 containerd[1492]: 2025-04-30 01:21:45.682 [INFO][4150] ipam/ipam.go 155: Attempting to load block cidr=192.168.115.192/26 host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:45.732742 containerd[1492]: 2025-04-30 01:21:45.685 [INFO][4150] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.192/26 host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:45.732742 containerd[1492]: 2025-04-30 01:21:45.685 [INFO][4150] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.192/26 handle="k8s-pod-network.93f655677b8a64dba656ee85f2c125130a33dda4aa3c16bd28ecfdbb51728250" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:45.732742 containerd[1492]: 2025-04-30 01:21:45.688 [INFO][4150] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.93f655677b8a64dba656ee85f2c125130a33dda4aa3c16bd28ecfdbb51728250 Apr 30 01:21:45.732742 containerd[1492]: 2025-04-30 01:21:45.693 [INFO][4150] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.115.192/26 handle="k8s-pod-network.93f655677b8a64dba656ee85f2c125130a33dda4aa3c16bd28ecfdbb51728250" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:45.732742 containerd[1492]: 2025-04-30 01:21:45.699 [INFO][4150] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.115.193/26] block=192.168.115.192/26 handle="k8s-pod-network.93f655677b8a64dba656ee85f2c125130a33dda4aa3c16bd28ecfdbb51728250" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:45.732742 containerd[1492]: 2025-04-30 01:21:45.700 [INFO][4150] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.193/26] handle="k8s-pod-network.93f655677b8a64dba656ee85f2c125130a33dda4aa3c16bd28ecfdbb51728250" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:45.732742 containerd[1492]: 2025-04-30 01:21:45.700 [INFO][4150] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
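The [4150] records above walk Calico's block-affinity IPAM in order: take the host-wide lock, look up the blocks affine to this host, load 192.168.115.192/26, assign the next free address (.193), create a handle naming the owner, and write the block back before releasing the lock. A simplified in-memory sketch of that assignment loop; real Calico persists blocks and handles in the datastore rather than a map:

    package main

    import (
        "fmt"
        "net/netip"
        "sync"
    )

    // block models one /26 affine to a host: a CIDR plus a record of
    // which addresses are claimed and by which handle.
    type block struct {
        cidr netip.Prefix
        used map[netip.Addr]string // addr -> handle ID
    }

    var hostWideLock sync.Mutex // stand-in for the host-wide IPAM lock

    // assign claims the next free address in the block for a handle,
    // mirroring "Attempting to assign 1 addresses from block".
    func assign(b *block, handle string) (netip.Addr, error) {
        hostWideLock.Lock()
        defer hostWideLock.Unlock()

        // Start past the block's network address; in this log the first
        // pod claim is .193 (.192 itself is not handed to pods here).
        for a := b.cidr.Addr().Next(); b.cidr.Contains(a); a = a.Next() {
            if _, taken := b.used[a]; !taken {
                b.used[a] = handle // "Writing block in order to claim IPs"
                return a, nil
            }
        }
        return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
    }

    func main() {
        b := &block{
            cidr: netip.MustParsePrefix("192.168.115.192/26"),
            used: map[netip.Addr]string{},
        }
        // Sequential claims reproduce the .193, .194, .195, .196 order
        // seen for the four pods later in this log.
        for _, h := range []string{"mzk7k", "bf752", "kk96k", "5hdsg"} {
            a, _ := assign(b, "k8s-pod-network."+h)
            fmt.Println(a, "->", h)
        }
    }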
Apr 30 01:21:45.732742 containerd[1492]: 2025-04-30 01:21:45.700 [INFO][4150] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.115.193/26] IPv6=[] ContainerID="93f655677b8a64dba656ee85f2c125130a33dda4aa3c16bd28ecfdbb51728250" HandleID="k8s-pod-network.93f655677b8a64dba656ee85f2c125130a33dda4aa3c16bd28ecfdbb51728250" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--mzk7k-eth0" Apr 30 01:21:45.734491 containerd[1492]: 2025-04-30 01:21:45.703 [INFO][4138] cni-plugin/k8s.go 386: Populated endpoint ContainerID="93f655677b8a64dba656ee85f2c125130a33dda4aa3c16bd28ecfdbb51728250" Namespace="calico-apiserver" Pod="calico-apiserver-7845cbd476-mzk7k" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--mzk7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--mzk7k-eth0", GenerateName:"calico-apiserver-7845cbd476-", Namespace:"calico-apiserver", SelfLink:"", UID:"c0c262a3-b472-411f-9366-fa54ff571684", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7845cbd476", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-a-62378e86a2", ContainerID:"", Pod:"calico-apiserver-7845cbd476-mzk7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali91d2100b650", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:21:45.734491 containerd[1492]: 2025-04-30 01:21:45.703 [INFO][4138] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.115.193/32] ContainerID="93f655677b8a64dba656ee85f2c125130a33dda4aa3c16bd28ecfdbb51728250" Namespace="calico-apiserver" Pod="calico-apiserver-7845cbd476-mzk7k" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--mzk7k-eth0" Apr 30 01:21:45.734491 containerd[1492]: 2025-04-30 01:21:45.703 [INFO][4138] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali91d2100b650 ContainerID="93f655677b8a64dba656ee85f2c125130a33dda4aa3c16bd28ecfdbb51728250" Namespace="calico-apiserver" Pod="calico-apiserver-7845cbd476-mzk7k" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--mzk7k-eth0" Apr 30 01:21:45.734491 containerd[1492]: 2025-04-30 01:21:45.711 [INFO][4138] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="93f655677b8a64dba656ee85f2c125130a33dda4aa3c16bd28ecfdbb51728250" Namespace="calico-apiserver" Pod="calico-apiserver-7845cbd476-mzk7k" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--mzk7k-eth0" Apr 30 01:21:45.734491 containerd[1492]: 2025-04-30 01:21:45.714 [INFO][4138] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="93f655677b8a64dba656ee85f2c125130a33dda4aa3c16bd28ecfdbb51728250" Namespace="calico-apiserver" Pod="calico-apiserver-7845cbd476-mzk7k" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--mzk7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--mzk7k-eth0", GenerateName:"calico-apiserver-7845cbd476-", Namespace:"calico-apiserver", SelfLink:"", UID:"c0c262a3-b472-411f-9366-fa54ff571684", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7845cbd476", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-a-62378e86a2", ContainerID:"93f655677b8a64dba656ee85f2c125130a33dda4aa3c16bd28ecfdbb51728250", Pod:"calico-apiserver-7845cbd476-mzk7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali91d2100b650", MAC:"a6:38:29:0d:52:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:21:45.734491 containerd[1492]: 2025-04-30 01:21:45.729 [INFO][4138] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="93f655677b8a64dba656ee85f2c125130a33dda4aa3c16bd28ecfdbb51728250" Namespace="calico-apiserver" Pod="calico-apiserver-7845cbd476-mzk7k" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--mzk7k-eth0" Apr 30 01:21:45.758466 containerd[1492]: time="2025-04-30T01:21:45.758369377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:21:45.758831 containerd[1492]: time="2025-04-30T01:21:45.758691265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:21:45.758831 containerd[1492]: time="2025-04-30T01:21:45.758772468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:45.759572 containerd[1492]: time="2025-04-30T01:21:45.759505087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:45.784973 systemd[1]: Started cri-containerd-93f655677b8a64dba656ee85f2c125130a33dda4aa3c16bd28ecfdbb51728250.scope - libcontainer container 93f655677b8a64dba656ee85f2c125130a33dda4aa3c16bd28ecfdbb51728250. 
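The burst of "loading plugin" lines that follows is not the main containerd daemon but the per-sandbox runc shim (runtime io.containerd.runc.v2) bootstrapping its event publisher and ttrpc task service, and the systemd "Started cri-containerd-<id>.scope" line wraps that shim and the sandbox's processes in a transient scope unit, giving the pod its own cgroup under systemd's supervision.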
Apr 30 01:21:45.822426 containerd[1492]: time="2025-04-30T01:21:45.822322770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7845cbd476-mzk7k,Uid:c0c262a3-b472-411f-9366-fa54ff571684,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"93f655677b8a64dba656ee85f2c125130a33dda4aa3c16bd28ecfdbb51728250\"" Apr 30 01:21:45.825818 containerd[1492]: time="2025-04-30T01:21:45.825784500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 01:21:47.347071 containerd[1492]: time="2025-04-30T01:21:47.347013001Z" level=info msg="StopPodSandbox for \"e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67\"" Apr 30 01:21:47.349179 containerd[1492]: time="2025-04-30T01:21:47.347014201Z" level=info msg="StopPodSandbox for \"d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5\"" Apr 30 01:21:47.351340 containerd[1492]: time="2025-04-30T01:21:47.347116724Z" level=info msg="StopPodSandbox for \"eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7\"" Apr 30 01:21:47.552734 containerd[1492]: 2025-04-30 01:21:47.439 [INFO][4250] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" Apr 30 01:21:47.552734 containerd[1492]: 2025-04-30 01:21:47.440 [INFO][4250] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" iface="eth0" netns="/var/run/netns/cni-d7d828bd-e1df-b294-ecb0-fb2d9c87dc7f" Apr 30 01:21:47.552734 containerd[1492]: 2025-04-30 01:21:47.440 [INFO][4250] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" iface="eth0" netns="/var/run/netns/cni-d7d828bd-e1df-b294-ecb0-fb2d9c87dc7f" Apr 30 01:21:47.552734 containerd[1492]: 2025-04-30 01:21:47.441 [INFO][4250] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" iface="eth0" netns="/var/run/netns/cni-d7d828bd-e1df-b294-ecb0-fb2d9c87dc7f" Apr 30 01:21:47.552734 containerd[1492]: 2025-04-30 01:21:47.441 [INFO][4250] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" Apr 30 01:21:47.552734 containerd[1492]: 2025-04-30 01:21:47.441 [INFO][4250] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" Apr 30 01:21:47.552734 containerd[1492]: 2025-04-30 01:21:47.498 [INFO][4268] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" HandleID="k8s-pod-network.d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--bf752-eth0" Apr 30 01:21:47.552734 containerd[1492]: 2025-04-30 01:21:47.499 [INFO][4268] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:21:47.552734 containerd[1492]: 2025-04-30 01:21:47.500 [INFO][4268] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:21:47.552734 containerd[1492]: 2025-04-30 01:21:47.528 [WARNING][4268] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" HandleID="k8s-pod-network.d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--bf752-eth0" Apr 30 01:21:47.552734 containerd[1492]: 2025-04-30 01:21:47.528 [INFO][4268] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" HandleID="k8s-pod-network.d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--bf752-eth0" Apr 30 01:21:47.552734 containerd[1492]: 2025-04-30 01:21:47.539 [INFO][4268] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:21:47.552734 containerd[1492]: 2025-04-30 01:21:47.546 [INFO][4250] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" Apr 30 01:21:47.552734 containerd[1492]: time="2025-04-30T01:21:47.550867085Z" level=info msg="TearDown network for sandbox \"d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5\" successfully" Apr 30 01:21:47.552734 containerd[1492]: time="2025-04-30T01:21:47.550908006Z" level=info msg="StopPodSandbox for \"d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5\" returns successfully" Apr 30 01:21:47.552734 containerd[1492]: time="2025-04-30T01:21:47.551982715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7845cbd476-bf752,Uid:1d8395dc-833d-4a09-97bb-4b7ca67c4458,Namespace:calico-apiserver,Attempt:1,}" Apr 30 01:21:47.555819 systemd[1]: run-netns-cni\x2dd7d828bd\x2de1df\x2db294\x2decb0\x2dfb2d9c87dc7f.mount: Deactivated successfully. Apr 30 01:21:47.632073 containerd[1492]: 2025-04-30 01:21:47.498 [INFO][4243] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" Apr 30 01:21:47.632073 containerd[1492]: 2025-04-30 01:21:47.499 [INFO][4243] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" iface="eth0" netns="/var/run/netns/cni-cd099d93-6b22-edc8-5a54-088436d04b1b" Apr 30 01:21:47.632073 containerd[1492]: 2025-04-30 01:21:47.504 [INFO][4243] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" iface="eth0" netns="/var/run/netns/cni-cd099d93-6b22-edc8-5a54-088436d04b1b" Apr 30 01:21:47.632073 containerd[1492]: 2025-04-30 01:21:47.505 [INFO][4243] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" iface="eth0" netns="/var/run/netns/cni-cd099d93-6b22-edc8-5a54-088436d04b1b" Apr 30 01:21:47.632073 containerd[1492]: 2025-04-30 01:21:47.508 [INFO][4243] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" Apr 30 01:21:47.632073 containerd[1492]: 2025-04-30 01:21:47.508 [INFO][4243] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" Apr 30 01:21:47.632073 containerd[1492]: 2025-04-30 01:21:47.580 [INFO][4277] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" HandleID="k8s-pod-network.eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" Workload="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--kk96k-eth0" Apr 30 01:21:47.632073 containerd[1492]: 2025-04-30 01:21:47.583 [INFO][4277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:21:47.632073 containerd[1492]: 2025-04-30 01:21:47.583 [INFO][4277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:21:47.632073 containerd[1492]: 2025-04-30 01:21:47.619 [WARNING][4277] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" HandleID="k8s-pod-network.eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" Workload="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--kk96k-eth0" Apr 30 01:21:47.632073 containerd[1492]: 2025-04-30 01:21:47.619 [INFO][4277] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" HandleID="k8s-pod-network.eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" Workload="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--kk96k-eth0" Apr 30 01:21:47.632073 containerd[1492]: 2025-04-30 01:21:47.622 [INFO][4277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:21:47.632073 containerd[1492]: 2025-04-30 01:21:47.630 [INFO][4243] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" Apr 30 01:21:47.635632 systemd[1]: run-netns-cni\x2dcd099d93\x2d6b22\x2dedc8\x2d5a54\x2d088436d04b1b.mount: Deactivated successfully. Apr 30 01:21:47.636943 containerd[1492]: time="2025-04-30T01:21:47.636617191Z" level=info msg="TearDown network for sandbox \"eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7\" successfully" Apr 30 01:21:47.636943 containerd[1492]: time="2025-04-30T01:21:47.636651232Z" level=info msg="StopPodSandbox for \"eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7\" returns successfully" Apr 30 01:21:47.638978 containerd[1492]: time="2025-04-30T01:21:47.638311237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kk96k,Uid:73a62f7b-09e4-4b19-a621-c45cfcdd6957,Namespace:kube-system,Attempt:1,}" Apr 30 01:21:47.711127 containerd[1492]: 2025-04-30 01:21:47.517 [INFO][4258] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" Apr 30 01:21:47.711127 containerd[1492]: 2025-04-30 01:21:47.518 [INFO][4258] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" iface="eth0" netns="/var/run/netns/cni-a00e6275-b892-e6f7-92b2-faf67dccbbb3" Apr 30 01:21:47.711127 containerd[1492]: 2025-04-30 01:21:47.518 [INFO][4258] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" iface="eth0" netns="/var/run/netns/cni-a00e6275-b892-e6f7-92b2-faf67dccbbb3" Apr 30 01:21:47.711127 containerd[1492]: 2025-04-30 01:21:47.518 [INFO][4258] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" iface="eth0" netns="/var/run/netns/cni-a00e6275-b892-e6f7-92b2-faf67dccbbb3" Apr 30 01:21:47.711127 containerd[1492]: 2025-04-30 01:21:47.519 [INFO][4258] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" Apr 30 01:21:47.711127 containerd[1492]: 2025-04-30 01:21:47.519 [INFO][4258] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" Apr 30 01:21:47.711127 containerd[1492]: 2025-04-30 01:21:47.653 [INFO][4282] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" HandleID="k8s-pod-network.e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" Workload="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--5hdsg-eth0" Apr 30 01:21:47.711127 containerd[1492]: 2025-04-30 01:21:47.657 [INFO][4282] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:21:47.711127 containerd[1492]: 2025-04-30 01:21:47.657 [INFO][4282] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:21:47.711127 containerd[1492]: 2025-04-30 01:21:47.696 [WARNING][4282] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" HandleID="k8s-pod-network.e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" Workload="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--5hdsg-eth0" Apr 30 01:21:47.711127 containerd[1492]: 2025-04-30 01:21:47.696 [INFO][4282] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" HandleID="k8s-pod-network.e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" Workload="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--5hdsg-eth0" Apr 30 01:21:47.711127 containerd[1492]: 2025-04-30 01:21:47.699 [INFO][4282] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:21:47.711127 containerd[1492]: 2025-04-30 01:21:47.707 [INFO][4258] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" Apr 30 01:21:47.711683 containerd[1492]: time="2025-04-30T01:21:47.711327081Z" level=info msg="TearDown network for sandbox \"e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67\" successfully" Apr 30 01:21:47.711683 containerd[1492]: time="2025-04-30T01:21:47.711357321Z" level=info msg="StopPodSandbox for \"e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67\" returns successfully" Apr 30 01:21:47.712168 containerd[1492]: time="2025-04-30T01:21:47.712138582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5hdsg,Uid:989be597-1452-421c-833d-fc10aad2a4c3,Namespace:kube-system,Attempt:1,}" Apr 30 01:21:47.782952 systemd-networkd[1376]: cali91d2100b650: Gained IPv6LL Apr 30 01:21:47.859537 systemd-networkd[1376]: calib2fff9e1599: Link UP Apr 30 01:21:47.864171 systemd-networkd[1376]: calib2fff9e1599: Gained carrier Apr 30 01:21:47.889879 containerd[1492]: 2025-04-30 01:21:47.710 [INFO][4289] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--bf752-eth0 calico-apiserver-7845cbd476- calico-apiserver 1d8395dc-833d-4a09-97bb-4b7ca67c4458 782 0 2025-04-30 01:21:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7845cbd476 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-a-62378e86a2 calico-apiserver-7845cbd476-bf752 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib2fff9e1599 [] []}} ContainerID="c2a9bf7e2b54c08cc00a3f5cab4837e5ee07cb563e7f52337672a964cd9f951d" Namespace="calico-apiserver" Pod="calico-apiserver-7845cbd476-bf752" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--bf752-" Apr 30 01:21:47.889879 containerd[1492]: 2025-04-30 01:21:47.710 [INFO][4289] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c2a9bf7e2b54c08cc00a3f5cab4837e5ee07cb563e7f52337672a964cd9f951d" Namespace="calico-apiserver" Pod="calico-apiserver-7845cbd476-bf752" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--bf752-eth0" Apr 30 01:21:47.889879 containerd[1492]: 2025-04-30 01:21:47.777 [INFO][4318] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c2a9bf7e2b54c08cc00a3f5cab4837e5ee07cb563e7f52337672a964cd9f951d" HandleID="k8s-pod-network.c2a9bf7e2b54c08cc00a3f5cab4837e5ee07cb563e7f52337672a964cd9f951d" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--bf752-eth0" Apr 30 01:21:47.889879 containerd[1492]: 2025-04-30 01:21:47.800 [INFO][4318] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c2a9bf7e2b54c08cc00a3f5cab4837e5ee07cb563e7f52337672a964cd9f951d" HandleID="k8s-pod-network.c2a9bf7e2b54c08cc00a3f5cab4837e5ee07cb563e7f52337672a964cd9f951d" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--bf752-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000318790), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-a-62378e86a2", "pod":"calico-apiserver-7845cbd476-bf752", "timestamp":"2025-04-30 01:21:47.777183212 +0000 UTC"}, Hostname:"ci-4081-3-3-a-62378e86a2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 01:21:47.889879 containerd[1492]: 2025-04-30 01:21:47.800 [INFO][4318] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:21:47.889879 containerd[1492]: 2025-04-30 01:21:47.801 [INFO][4318] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:21:47.889879 containerd[1492]: 2025-04-30 01:21:47.801 [INFO][4318] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-a-62378e86a2' Apr 30 01:21:47.889879 containerd[1492]: 2025-04-30 01:21:47.805 [INFO][4318] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c2a9bf7e2b54c08cc00a3f5cab4837e5ee07cb563e7f52337672a964cd9f951d" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:47.889879 containerd[1492]: 2025-04-30 01:21:47.812 [INFO][4318] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:47.889879 containerd[1492]: 2025-04-30 01:21:47.822 [INFO][4318] ipam/ipam.go 489: Trying affinity for 192.168.115.192/26 host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:47.889879 containerd[1492]: 2025-04-30 01:21:47.825 [INFO][4318] ipam/ipam.go 155: Attempting to load block cidr=192.168.115.192/26 host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:47.889879 containerd[1492]: 2025-04-30 01:21:47.829 [INFO][4318] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.192/26 host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:47.889879 containerd[1492]: 2025-04-30 01:21:47.829 [INFO][4318] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.192/26 handle="k8s-pod-network.c2a9bf7e2b54c08cc00a3f5cab4837e5ee07cb563e7f52337672a964cd9f951d" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:47.889879 containerd[1492]: 2025-04-30 01:21:47.831 [INFO][4318] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c2a9bf7e2b54c08cc00a3f5cab4837e5ee07cb563e7f52337672a964cd9f951d Apr 30 01:21:47.889879 containerd[1492]: 2025-04-30 01:21:47.842 [INFO][4318] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.115.192/26 handle="k8s-pod-network.c2a9bf7e2b54c08cc00a3f5cab4837e5ee07cb563e7f52337672a964cd9f951d" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:47.889879 containerd[1492]: 2025-04-30 01:21:47.851 [INFO][4318] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.115.194/26] block=192.168.115.192/26 handle="k8s-pod-network.c2a9bf7e2b54c08cc00a3f5cab4837e5ee07cb563e7f52337672a964cd9f951d" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:47.889879 containerd[1492]: 2025-04-30 01:21:47.851 [INFO][4318] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.194/26] handle="k8s-pod-network.c2a9bf7e2b54c08cc00a3f5cab4837e5ee07cb563e7f52337672a964cd9f951d" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:47.889879 containerd[1492]: 2025-04-30 01:21:47.851 [INFO][4318] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
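The second apiserver pod draws 192.168.115.194 from the same affine block: a /26 spans 2^(32−26) = 64 addresses (.192 through .255), so one block covers this node's pods comfortably, with each claim recorded against a per-pod handle (k8s-pod-network.<container id>). Only when a block is exhausted would Calico claim an additional block for the host, subject to MaxBlocksPerHost (0, i.e. unlimited, in the request above).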
Apr 30 01:21:47.889879 containerd[1492]: 2025-04-30 01:21:47.851 [INFO][4318] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.115.194/26] IPv6=[] ContainerID="c2a9bf7e2b54c08cc00a3f5cab4837e5ee07cb563e7f52337672a964cd9f951d" HandleID="k8s-pod-network.c2a9bf7e2b54c08cc00a3f5cab4837e5ee07cb563e7f52337672a964cd9f951d" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--bf752-eth0" Apr 30 01:21:47.891624 containerd[1492]: 2025-04-30 01:21:47.853 [INFO][4289] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c2a9bf7e2b54c08cc00a3f5cab4837e5ee07cb563e7f52337672a964cd9f951d" Namespace="calico-apiserver" Pod="calico-apiserver-7845cbd476-bf752" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--bf752-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--bf752-eth0", GenerateName:"calico-apiserver-7845cbd476-", Namespace:"calico-apiserver", SelfLink:"", UID:"1d8395dc-833d-4a09-97bb-4b7ca67c4458", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7845cbd476", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-a-62378e86a2", ContainerID:"", Pod:"calico-apiserver-7845cbd476-bf752", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib2fff9e1599", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:21:47.891624 containerd[1492]: 2025-04-30 01:21:47.853 [INFO][4289] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.115.194/32] ContainerID="c2a9bf7e2b54c08cc00a3f5cab4837e5ee07cb563e7f52337672a964cd9f951d" Namespace="calico-apiserver" Pod="calico-apiserver-7845cbd476-bf752" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--bf752-eth0" Apr 30 01:21:47.891624 containerd[1492]: 2025-04-30 01:21:47.853 [INFO][4289] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib2fff9e1599 ContainerID="c2a9bf7e2b54c08cc00a3f5cab4837e5ee07cb563e7f52337672a964cd9f951d" Namespace="calico-apiserver" Pod="calico-apiserver-7845cbd476-bf752" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--bf752-eth0" Apr 30 01:21:47.891624 containerd[1492]: 2025-04-30 01:21:47.865 [INFO][4289] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c2a9bf7e2b54c08cc00a3f5cab4837e5ee07cb563e7f52337672a964cd9f951d" Namespace="calico-apiserver" Pod="calico-apiserver-7845cbd476-bf752" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--bf752-eth0" Apr 30 01:21:47.891624 containerd[1492]: 2025-04-30 01:21:47.869 [INFO][4289] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="c2a9bf7e2b54c08cc00a3f5cab4837e5ee07cb563e7f52337672a964cd9f951d" Namespace="calico-apiserver" Pod="calico-apiserver-7845cbd476-bf752" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--bf752-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--bf752-eth0", GenerateName:"calico-apiserver-7845cbd476-", Namespace:"calico-apiserver", SelfLink:"", UID:"1d8395dc-833d-4a09-97bb-4b7ca67c4458", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7845cbd476", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-a-62378e86a2", ContainerID:"c2a9bf7e2b54c08cc00a3f5cab4837e5ee07cb563e7f52337672a964cd9f951d", Pod:"calico-apiserver-7845cbd476-bf752", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib2fff9e1599", MAC:"a6:cf:34:98:e8:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:21:47.891624 containerd[1492]: 2025-04-30 01:21:47.882 [INFO][4289] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c2a9bf7e2b54c08cc00a3f5cab4837e5ee07cb563e7f52337672a964cd9f951d" Namespace="calico-apiserver" Pod="calico-apiserver-7845cbd476-bf752" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--bf752-eth0" Apr 30 01:21:47.930874 containerd[1492]: time="2025-04-30T01:21:47.927749382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:21:47.931065 containerd[1492]: time="2025-04-30T01:21:47.930835945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:21:47.931065 containerd[1492]: time="2025-04-30T01:21:47.931050511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:47.931430 containerd[1492]: time="2025-04-30T01:21:47.931280997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:47.943465 systemd-networkd[1376]: cali00fa1f1cc3d: Link UP Apr 30 01:21:47.945917 systemd-networkd[1376]: cali00fa1f1cc3d: Gained carrier Apr 30 01:21:47.977305 containerd[1492]: 2025-04-30 01:21:47.746 [INFO][4303] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--kk96k-eth0 coredns-7db6d8ff4d- kube-system 73a62f7b-09e4-4b19-a621-c45cfcdd6957 784 0 2025-04-30 01:21:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-a-62378e86a2 coredns-7db6d8ff4d-kk96k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali00fa1f1cc3d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5e380decbbac7cb04b9dd17c104a3b8895447c8c3aa5f81f1561d11eea90a572" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kk96k" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--kk96k-" Apr 30 01:21:47.977305 containerd[1492]: 2025-04-30 01:21:47.747 [INFO][4303] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5e380decbbac7cb04b9dd17c104a3b8895447c8c3aa5f81f1561d11eea90a572" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kk96k" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--kk96k-eth0" Apr 30 01:21:47.977305 containerd[1492]: 2025-04-30 01:21:47.815 [INFO][4333] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e380decbbac7cb04b9dd17c104a3b8895447c8c3aa5f81f1561d11eea90a572" HandleID="k8s-pod-network.5e380decbbac7cb04b9dd17c104a3b8895447c8c3aa5f81f1561d11eea90a572" Workload="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--kk96k-eth0" Apr 30 01:21:47.977305 containerd[1492]: 2025-04-30 01:21:47.842 [INFO][4333] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5e380decbbac7cb04b9dd17c104a3b8895447c8c3aa5f81f1561d11eea90a572" HandleID="k8s-pod-network.5e380decbbac7cb04b9dd17c104a3b8895447c8c3aa5f81f1561d11eea90a572" Workload="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--kk96k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ba960), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-a-62378e86a2", "pod":"coredns-7db6d8ff4d-kk96k", "timestamp":"2025-04-30 01:21:47.814380253 +0000 UTC"}, Hostname:"ci-4081-3-3-a-62378e86a2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 01:21:47.977305 containerd[1492]: 2025-04-30 01:21:47.842 [INFO][4333] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:21:47.977305 containerd[1492]: 2025-04-30 01:21:47.851 [INFO][4333] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
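Unlike the apiserver endpoints, the coredns endpoint announced above carries named ports: dns UDP 53, dns-tcp TCP 53, and metrics TCP 9153, which the later endpoint dumps render in hex as Port:0x35 (= 53) and Port:0x23c1 (= 9153). These mirror the named containerPorts in the CoreDNS pod spec and let Calico network policy select ports by name rather than number.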
Apr 30 01:21:47.977305 containerd[1492]: 2025-04-30 01:21:47.852 [INFO][4333] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-a-62378e86a2' Apr 30 01:21:47.977305 containerd[1492]: 2025-04-30 01:21:47.856 [INFO][4333] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5e380decbbac7cb04b9dd17c104a3b8895447c8c3aa5f81f1561d11eea90a572" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:47.977305 containerd[1492]: 2025-04-30 01:21:47.872 [INFO][4333] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:47.977305 containerd[1492]: 2025-04-30 01:21:47.884 [INFO][4333] ipam/ipam.go 489: Trying affinity for 192.168.115.192/26 host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:47.977305 containerd[1492]: 2025-04-30 01:21:47.892 [INFO][4333] ipam/ipam.go 155: Attempting to load block cidr=192.168.115.192/26 host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:47.977305 containerd[1492]: 2025-04-30 01:21:47.899 [INFO][4333] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.192/26 host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:47.977305 containerd[1492]: 2025-04-30 01:21:47.899 [INFO][4333] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.192/26 handle="k8s-pod-network.5e380decbbac7cb04b9dd17c104a3b8895447c8c3aa5f81f1561d11eea90a572" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:47.977305 containerd[1492]: 2025-04-30 01:21:47.902 [INFO][4333] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5e380decbbac7cb04b9dd17c104a3b8895447c8c3aa5f81f1561d11eea90a572 Apr 30 01:21:47.977305 containerd[1492]: 2025-04-30 01:21:47.909 [INFO][4333] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.115.192/26 handle="k8s-pod-network.5e380decbbac7cb04b9dd17c104a3b8895447c8c3aa5f81f1561d11eea90a572" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:47.977305 containerd[1492]: 2025-04-30 01:21:47.922 [INFO][4333] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.115.195/26] block=192.168.115.192/26 handle="k8s-pod-network.5e380decbbac7cb04b9dd17c104a3b8895447c8c3aa5f81f1561d11eea90a572" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:47.977305 containerd[1492]: 2025-04-30 01:21:47.922 [INFO][4333] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.195/26] handle="k8s-pod-network.5e380decbbac7cb04b9dd17c104a3b8895447c8c3aa5f81f1561d11eea90a572" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:47.977305 containerd[1492]: 2025-04-30 01:21:47.922 [INFO][4333] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 01:21:47.977305 containerd[1492]: 2025-04-30 01:21:47.923 [INFO][4333] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.115.195/26] IPv6=[] ContainerID="5e380decbbac7cb04b9dd17c104a3b8895447c8c3aa5f81f1561d11eea90a572" HandleID="k8s-pod-network.5e380decbbac7cb04b9dd17c104a3b8895447c8c3aa5f81f1561d11eea90a572" Workload="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--kk96k-eth0" Apr 30 01:21:47.978940 containerd[1492]: 2025-04-30 01:21:47.932 [INFO][4303] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5e380decbbac7cb04b9dd17c104a3b8895447c8c3aa5f81f1561d11eea90a572" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kk96k" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--kk96k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--kk96k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"73a62f7b-09e4-4b19-a621-c45cfcdd6957", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-a-62378e86a2", ContainerID:"", Pod:"coredns-7db6d8ff4d-kk96k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali00fa1f1cc3d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:21:47.978940 containerd[1492]: 2025-04-30 01:21:47.933 [INFO][4303] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.115.195/32] ContainerID="5e380decbbac7cb04b9dd17c104a3b8895447c8c3aa5f81f1561d11eea90a572" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kk96k" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--kk96k-eth0" Apr 30 01:21:47.978940 containerd[1492]: 2025-04-30 01:21:47.933 [INFO][4303] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali00fa1f1cc3d ContainerID="5e380decbbac7cb04b9dd17c104a3b8895447c8c3aa5f81f1561d11eea90a572" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kk96k" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--kk96k-eth0" Apr 30 01:21:47.978940 containerd[1492]: 2025-04-30 01:21:47.944 [INFO][4303] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e380decbbac7cb04b9dd17c104a3b8895447c8c3aa5f81f1561d11eea90a572" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kk96k" 
WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--kk96k-eth0" Apr 30 01:21:47.978940 containerd[1492]: 2025-04-30 01:21:47.948 [INFO][4303] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5e380decbbac7cb04b9dd17c104a3b8895447c8c3aa5f81f1561d11eea90a572" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kk96k" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--kk96k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--kk96k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"73a62f7b-09e4-4b19-a621-c45cfcdd6957", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-a-62378e86a2", ContainerID:"5e380decbbac7cb04b9dd17c104a3b8895447c8c3aa5f81f1561d11eea90a572", Pod:"coredns-7db6d8ff4d-kk96k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali00fa1f1cc3d", MAC:"16:b7:04:fe:20:4f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:21:47.978940 containerd[1492]: 2025-04-30 01:21:47.965 [INFO][4303] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5e380decbbac7cb04b9dd17c104a3b8895447c8c3aa5f81f1561d11eea90a572" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kk96k" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--kk96k-eth0" Apr 30 01:21:47.981543 systemd[1]: Started cri-containerd-c2a9bf7e2b54c08cc00a3f5cab4837e5ee07cb563e7f52337672a964cd9f951d.scope - libcontainer container c2a9bf7e2b54c08cc00a3f5cab4837e5ee07cb563e7f52337672a964cd9f951d. Apr 30 01:21:48.020074 systemd-networkd[1376]: cali698fafdfc95: Link UP Apr 30 01:21:48.022021 systemd-networkd[1376]: cali698fafdfc95: Gained carrier Apr 30 01:21:48.035134 containerd[1492]: time="2025-04-30T01:21:48.034980238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:21:48.038743 containerd[1492]: time="2025-04-30T01:21:48.035082961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:21:48.038743 containerd[1492]: time="2025-04-30T01:21:48.035123562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:48.038743 containerd[1492]: time="2025-04-30T01:21:48.035246125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:48.053871 containerd[1492]: 2025-04-30 01:21:47.815 [INFO][4326] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--5hdsg-eth0 coredns-7db6d8ff4d- kube-system 989be597-1452-421c-833d-fc10aad2a4c3 785 0 2025-04-30 01:21:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-a-62378e86a2 coredns-7db6d8ff4d-5hdsg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali698fafdfc95 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="159acd050b7c3c4e727290812103ff0cb05d51d92fc9a52df2c996332f81de35" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5hdsg" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--5hdsg-" Apr 30 01:21:48.053871 containerd[1492]: 2025-04-30 01:21:47.816 [INFO][4326] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="159acd050b7c3c4e727290812103ff0cb05d51d92fc9a52df2c996332f81de35" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5hdsg" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--5hdsg-eth0" Apr 30 01:21:48.053871 containerd[1492]: 2025-04-30 01:21:47.882 [INFO][4345] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="159acd050b7c3c4e727290812103ff0cb05d51d92fc9a52df2c996332f81de35" HandleID="k8s-pod-network.159acd050b7c3c4e727290812103ff0cb05d51d92fc9a52df2c996332f81de35" Workload="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--5hdsg-eth0" Apr 30 01:21:48.053871 containerd[1492]: 2025-04-30 01:21:47.907 [INFO][4345] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="159acd050b7c3c4e727290812103ff0cb05d51d92fc9a52df2c996332f81de35" HandleID="k8s-pod-network.159acd050b7c3c4e727290812103ff0cb05d51d92fc9a52df2c996332f81de35" Workload="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--5hdsg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316ae0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-a-62378e86a2", "pod":"coredns-7db6d8ff4d-5hdsg", "timestamp":"2025-04-30 01:21:47.882578647 +0000 UTC"}, Hostname:"ci-4081-3-3-a-62378e86a2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 01:21:48.053871 containerd[1492]: 2025-04-30 01:21:47.907 [INFO][4345] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:21:48.053871 containerd[1492]: 2025-04-30 01:21:47.923 [INFO][4345] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
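The lock contention between the two coredns sandboxes is visible in the timestamps: [4345] asked for the host-wide IPAM lock at 01:21:47.907 while [4333] still held it, and acquired it at 47.923, immediately after [4333] released at 47.922. The single host-wide lock keeps concurrent block writes race-free at the price of serializing simultaneous pod starts on the node.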
Apr 30 01:21:48.053871 containerd[1492]: 2025-04-30 01:21:47.923 [INFO][4345] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-a-62378e86a2' Apr 30 01:21:48.053871 containerd[1492]: 2025-04-30 01:21:47.931 [INFO][4345] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.159acd050b7c3c4e727290812103ff0cb05d51d92fc9a52df2c996332f81de35" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:48.053871 containerd[1492]: 2025-04-30 01:21:47.951 [INFO][4345] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:48.053871 containerd[1492]: 2025-04-30 01:21:47.968 [INFO][4345] ipam/ipam.go 489: Trying affinity for 192.168.115.192/26 host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:48.053871 containerd[1492]: 2025-04-30 01:21:47.974 [INFO][4345] ipam/ipam.go 155: Attempting to load block cidr=192.168.115.192/26 host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:48.053871 containerd[1492]: 2025-04-30 01:21:47.982 [INFO][4345] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.192/26 host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:48.053871 containerd[1492]: 2025-04-30 01:21:47.983 [INFO][4345] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.192/26 handle="k8s-pod-network.159acd050b7c3c4e727290812103ff0cb05d51d92fc9a52df2c996332f81de35" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:48.053871 containerd[1492]: 2025-04-30 01:21:47.987 [INFO][4345] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.159acd050b7c3c4e727290812103ff0cb05d51d92fc9a52df2c996332f81de35 Apr 30 01:21:48.053871 containerd[1492]: 2025-04-30 01:21:47.993 [INFO][4345] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.115.192/26 handle="k8s-pod-network.159acd050b7c3c4e727290812103ff0cb05d51d92fc9a52df2c996332f81de35" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:48.053871 containerd[1492]: 2025-04-30 01:21:48.007 [INFO][4345] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.115.196/26] block=192.168.115.192/26 handle="k8s-pod-network.159acd050b7c3c4e727290812103ff0cb05d51d92fc9a52df2c996332f81de35" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:48.053871 containerd[1492]: 2025-04-30 01:21:48.007 [INFO][4345] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.196/26] handle="k8s-pod-network.159acd050b7c3c4e727290812103ff0cb05d51d92fc9a52df2c996332f81de35" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:48.053871 containerd[1492]: 2025-04-30 01:21:48.007 [INFO][4345] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 01:21:48.053871 containerd[1492]: 2025-04-30 01:21:48.008 [INFO][4345] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.115.196/26] IPv6=[] ContainerID="159acd050b7c3c4e727290812103ff0cb05d51d92fc9a52df2c996332f81de35" HandleID="k8s-pod-network.159acd050b7c3c4e727290812103ff0cb05d51d92fc9a52df2c996332f81de35" Workload="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--5hdsg-eth0" Apr 30 01:21:48.054763 containerd[1492]: 2025-04-30 01:21:48.013 [INFO][4326] cni-plugin/k8s.go 386: Populated endpoint ContainerID="159acd050b7c3c4e727290812103ff0cb05d51d92fc9a52df2c996332f81de35" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5hdsg" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--5hdsg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--5hdsg-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"989be597-1452-421c-833d-fc10aad2a4c3", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-a-62378e86a2", ContainerID:"", Pod:"coredns-7db6d8ff4d-5hdsg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali698fafdfc95", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:21:48.054763 containerd[1492]: 2025-04-30 01:21:48.014 [INFO][4326] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.115.196/32] ContainerID="159acd050b7c3c4e727290812103ff0cb05d51d92fc9a52df2c996332f81de35" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5hdsg" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--5hdsg-eth0" Apr 30 01:21:48.054763 containerd[1492]: 2025-04-30 01:21:48.014 [INFO][4326] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali698fafdfc95 ContainerID="159acd050b7c3c4e727290812103ff0cb05d51d92fc9a52df2c996332f81de35" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5hdsg" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--5hdsg-eth0" Apr 30 01:21:48.054763 containerd[1492]: 2025-04-30 01:21:48.026 [INFO][4326] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="159acd050b7c3c4e727290812103ff0cb05d51d92fc9a52df2c996332f81de35" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5hdsg" 
WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--5hdsg-eth0" Apr 30 01:21:48.054763 containerd[1492]: 2025-04-30 01:21:48.030 [INFO][4326] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="159acd050b7c3c4e727290812103ff0cb05d51d92fc9a52df2c996332f81de35" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5hdsg" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--5hdsg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--5hdsg-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"989be597-1452-421c-833d-fc10aad2a4c3", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-a-62378e86a2", ContainerID:"159acd050b7c3c4e727290812103ff0cb05d51d92fc9a52df2c996332f81de35", Pod:"coredns-7db6d8ff4d-5hdsg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali698fafdfc95", MAC:"ca:27:8b:44:8b:d0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:21:48.054763 containerd[1492]: 2025-04-30 01:21:48.048 [INFO][4326] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="159acd050b7c3c4e727290812103ff0cb05d51d92fc9a52df2c996332f81de35" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5hdsg" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--5hdsg-eth0" Apr 30 01:21:48.082938 systemd[1]: Started cri-containerd-5e380decbbac7cb04b9dd17c104a3b8895447c8c3aa5f81f1561d11eea90a572.scope - libcontainer container 5e380decbbac7cb04b9dd17c104a3b8895447c8c3aa5f81f1561d11eea90a572. Apr 30 01:21:48.088858 containerd[1492]: time="2025-04-30T01:21:48.088063045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:21:48.088858 containerd[1492]: time="2025-04-30T01:21:48.088124167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:21:48.088858 containerd[1492]: time="2025-04-30T01:21:48.088143527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:48.088858 containerd[1492]: time="2025-04-30T01:21:48.088235970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:48.114991 containerd[1492]: time="2025-04-30T01:21:48.114947818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7845cbd476-bf752,Uid:1d8395dc-833d-4a09-97bb-4b7ca67c4458,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c2a9bf7e2b54c08cc00a3f5cab4837e5ee07cb563e7f52337672a964cd9f951d\"" Apr 30 01:21:48.124251 systemd[1]: Started cri-containerd-159acd050b7c3c4e727290812103ff0cb05d51d92fc9a52df2c996332f81de35.scope - libcontainer container 159acd050b7c3c4e727290812103ff0cb05d51d92fc9a52df2c996332f81de35. Apr 30 01:21:48.159578 containerd[1492]: time="2025-04-30T01:21:48.159484671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kk96k,Uid:73a62f7b-09e4-4b19-a621-c45cfcdd6957,Namespace:kube-system,Attempt:1,} returns sandbox id \"5e380decbbac7cb04b9dd17c104a3b8895447c8c3aa5f81f1561d11eea90a572\"" Apr 30 01:21:48.163999 containerd[1492]: time="2025-04-30T01:21:48.163296775Z" level=info msg="CreateContainer within sandbox \"5e380decbbac7cb04b9dd17c104a3b8895447c8c3aa5f81f1561d11eea90a572\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 01:21:48.179977 containerd[1492]: time="2025-04-30T01:21:48.179935149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5hdsg,Uid:989be597-1452-421c-833d-fc10aad2a4c3,Namespace:kube-system,Attempt:1,} returns sandbox id \"159acd050b7c3c4e727290812103ff0cb05d51d92fc9a52df2c996332f81de35\"" Apr 30 01:21:48.185342 containerd[1492]: time="2025-04-30T01:21:48.185301575Z" level=info msg="CreateContainer within sandbox \"159acd050b7c3c4e727290812103ff0cb05d51d92fc9a52df2c996332f81de35\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 01:21:48.189186 containerd[1492]: time="2025-04-30T01:21:48.189125159Z" level=info msg="CreateContainer within sandbox \"5e380decbbac7cb04b9dd17c104a3b8895447c8c3aa5f81f1561d11eea90a572\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3aa3cab724c734a4aa7eef34ebe9c8a459902e790570a47c63d9a6c8db63aadd\"" Apr 30 01:21:48.190753 containerd[1492]: time="2025-04-30T01:21:48.190490156Z" level=info msg="StartContainer for \"3aa3cab724c734a4aa7eef34ebe9c8a459902e790570a47c63d9a6c8db63aadd\"" Apr 30 01:21:48.208178 containerd[1492]: time="2025-04-30T01:21:48.208017034Z" level=info msg="CreateContainer within sandbox \"159acd050b7c3c4e727290812103ff0cb05d51d92fc9a52df2c996332f81de35\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2c85da61270aecef0bd06ac9fbbe4570fba688697a3f98f0e98a19472e7bb69f\"" Apr 30 01:21:48.209738 containerd[1492]: time="2025-04-30T01:21:48.209453873Z" level=info msg="StartContainer for \"2c85da61270aecef0bd06ac9fbbe4570fba688697a3f98f0e98a19472e7bb69f\"" Apr 30 01:21:48.225014 systemd[1]: Started cri-containerd-3aa3cab724c734a4aa7eef34ebe9c8a459902e790570a47c63d9a6c8db63aadd.scope - libcontainer container 3aa3cab724c734a4aa7eef34ebe9c8a459902e790570a47c63d9a6c8db63aadd. Apr 30 01:21:48.250962 systemd[1]: Started cri-containerd-2c85da61270aecef0bd06ac9fbbe4570fba688697a3f98f0e98a19472e7bb69f.scope - libcontainer container 2c85da61270aecef0bd06ac9fbbe4570fba688697a3f98f0e98a19472e7bb69f. 
Apr 30 01:21:48.283127 containerd[1492]: time="2025-04-30T01:21:48.282993038Z" level=info msg="StartContainer for \"3aa3cab724c734a4aa7eef34ebe9c8a459902e790570a47c63d9a6c8db63aadd\" returns successfully" Apr 30 01:21:48.299865 containerd[1492]: time="2025-04-30T01:21:48.299673932Z" level=info msg="StartContainer for \"2c85da61270aecef0bd06ac9fbbe4570fba688697a3f98f0e98a19472e7bb69f\" returns successfully" Apr 30 01:21:48.562418 systemd[1]: run-netns-cni\x2da00e6275\x2db892\x2de6f7\x2d92b2\x2dfaf67dccbbb3.mount: Deactivated successfully. Apr 30 01:21:48.646500 kubelet[2758]: I0430 01:21:48.645670 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5hdsg" podStartSLOduration=33.645651201 podStartE2EDuration="33.645651201s" podCreationTimestamp="2025-04-30 01:21:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 01:21:48.619257602 +0000 UTC m=+46.424499562" watchObservedRunningTime="2025-04-30 01:21:48.645651201 +0000 UTC m=+46.450893161" Apr 30 01:21:48.676396 kubelet[2758]: I0430 01:21:48.676248 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-kk96k" podStartSLOduration=33.676228395 podStartE2EDuration="33.676228395s" podCreationTimestamp="2025-04-30 01:21:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 01:21:48.676180114 +0000 UTC m=+46.481422074" watchObservedRunningTime="2025-04-30 01:21:48.676228395 +0000 UTC m=+46.481470395" Apr 30 01:21:48.999433 systemd-networkd[1376]: calib2fff9e1599: Gained IPv6LL Apr 30 01:21:49.126847 systemd-networkd[1376]: cali698fafdfc95: Gained IPv6LL Apr 30 01:21:49.209780 containerd[1492]: time="2025-04-30T01:21:49.209531521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:49.211609 containerd[1492]: time="2025-04-30T01:21:49.211556017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" Apr 30 01:21:49.214906 containerd[1492]: time="2025-04-30T01:21:49.214603341Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 3.388738919s" Apr 30 01:21:49.214906 containerd[1492]: time="2025-04-30T01:21:49.214654942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" Apr 30 01:21:49.216399 containerd[1492]: time="2025-04-30T01:21:49.216207145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 01:21:49.222165 containerd[1492]: time="2025-04-30T01:21:49.221996665Z" level=info msg="CreateContainer within sandbox \"93f655677b8a64dba656ee85f2c125130a33dda4aa3c16bd28ecfdbb51728250\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 01:21:49.225539 containerd[1492]: time="2025-04-30T01:21:49.225337477Z" level=info msg="ImageCreate event 
name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:49.226250 containerd[1492]: time="2025-04-30T01:21:49.226158940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:49.251218 containerd[1492]: time="2025-04-30T01:21:49.251061787Z" level=info msg="CreateContainer within sandbox \"93f655677b8a64dba656ee85f2c125130a33dda4aa3c16bd28ecfdbb51728250\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"de3693c0c606047f41bd0d5fc8e213836f02e28a87b355430ec8e6974d663999\"" Apr 30 01:21:49.252270 containerd[1492]: time="2025-04-30T01:21:49.252243460Z" level=info msg="StartContainer for \"de3693c0c606047f41bd0d5fc8e213836f02e28a87b355430ec8e6974d663999\"" Apr 30 01:21:49.296956 systemd[1]: Started cri-containerd-de3693c0c606047f41bd0d5fc8e213836f02e28a87b355430ec8e6974d663999.scope - libcontainer container de3693c0c606047f41bd0d5fc8e213836f02e28a87b355430ec8e6974d663999. Apr 30 01:21:49.342223 containerd[1492]: time="2025-04-30T01:21:49.342137341Z" level=info msg="StartContainer for \"de3693c0c606047f41bd0d5fc8e213836f02e28a87b355430ec8e6974d663999\" returns successfully" Apr 30 01:21:49.346429 containerd[1492]: time="2025-04-30T01:21:49.346183212Z" level=info msg="StopPodSandbox for \"dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b\"" Apr 30 01:21:49.468131 containerd[1492]: 2025-04-30 01:21:49.410 [INFO][4652] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" Apr 30 01:21:49.468131 containerd[1492]: 2025-04-30 01:21:49.411 [INFO][4652] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" iface="eth0" netns="/var/run/netns/cni-c5dd6e1a-091b-c917-36ec-1639000069ea" Apr 30 01:21:49.468131 containerd[1492]: 2025-04-30 01:21:49.412 [INFO][4652] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" iface="eth0" netns="/var/run/netns/cni-c5dd6e1a-091b-c917-36ec-1639000069ea" Apr 30 01:21:49.468131 containerd[1492]: 2025-04-30 01:21:49.413 [INFO][4652] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" iface="eth0" netns="/var/run/netns/cni-c5dd6e1a-091b-c917-36ec-1639000069ea" Apr 30 01:21:49.468131 containerd[1492]: 2025-04-30 01:21:49.413 [INFO][4652] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" Apr 30 01:21:49.468131 containerd[1492]: 2025-04-30 01:21:49.413 [INFO][4652] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" Apr 30 01:21:49.468131 containerd[1492]: 2025-04-30 01:21:49.450 [INFO][4661] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" HandleID="k8s-pod-network.dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--kube--controllers--65bb9bf8f8--mz854-eth0" Apr 30 01:21:49.468131 containerd[1492]: 2025-04-30 01:21:49.450 [INFO][4661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:21:49.468131 containerd[1492]: 2025-04-30 01:21:49.450 [INFO][4661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:21:49.468131 containerd[1492]: 2025-04-30 01:21:49.461 [WARNING][4661] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" HandleID="k8s-pod-network.dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--kube--controllers--65bb9bf8f8--mz854-eth0" Apr 30 01:21:49.468131 containerd[1492]: 2025-04-30 01:21:49.461 [INFO][4661] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" HandleID="k8s-pod-network.dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--kube--controllers--65bb9bf8f8--mz854-eth0" Apr 30 01:21:49.468131 containerd[1492]: 2025-04-30 01:21:49.463 [INFO][4661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:21:49.468131 containerd[1492]: 2025-04-30 01:21:49.464 [INFO][4652] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" Apr 30 01:21:49.468131 containerd[1492]: time="2025-04-30T01:21:49.466507253Z" level=info msg="TearDown network for sandbox \"dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b\" successfully" Apr 30 01:21:49.468131 containerd[1492]: time="2025-04-30T01:21:49.466546894Z" level=info msg="StopPodSandbox for \"dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b\" returns successfully" Apr 30 01:21:49.468131 containerd[1492]: time="2025-04-30T01:21:49.467274514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65bb9bf8f8-mz854,Uid:94b1a086-40fc-41bb-8514-0b3e4bfe8cc0,Namespace:calico-system,Attempt:1,}" Apr 30 01:21:49.554963 systemd[1]: run-netns-cni\x2dc5dd6e1a\x2d091b\x2dc917\x2d36ec\x2d1639000069ea.mount: Deactivated successfully. 
Apr 30 01:21:49.575036 systemd-networkd[1376]: cali00fa1f1cc3d: Gained IPv6LL Apr 30 01:21:49.590434 containerd[1492]: time="2025-04-30T01:21:49.590320230Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:49.592856 containerd[1492]: time="2025-04-30T01:21:49.592391127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" Apr 30 01:21:49.596022 containerd[1492]: time="2025-04-30T01:21:49.595149324Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 378.888896ms" Apr 30 01:21:49.596022 containerd[1492]: time="2025-04-30T01:21:49.595195565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" Apr 30 01:21:49.642863 containerd[1492]: time="2025-04-30T01:21:49.642740197Z" level=info msg="CreateContainer within sandbox \"c2a9bf7e2b54c08cc00a3f5cab4837e5ee07cb563e7f52337672a964cd9f951d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 01:21:49.654372 systemd-networkd[1376]: cali148ea6d1dc9: Link UP Apr 30 01:21:49.656583 systemd-networkd[1376]: cali148ea6d1dc9: Gained carrier Apr 30 01:21:49.678409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2244129545.mount: Deactivated successfully. Apr 30 01:21:49.688334 containerd[1492]: time="2025-04-30T01:21:49.685884708Z" level=info msg="CreateContainer within sandbox \"c2a9bf7e2b54c08cc00a3f5cab4837e5ee07cb563e7f52337672a964cd9f951d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8d8e4d7bfcfba16b295381f8a082e8a3989efabea588025d6fc2871b85d3a1a3\"" Apr 30 01:21:49.689060 containerd[1492]: time="2025-04-30T01:21:49.688780388Z" level=info msg="StartContainer for \"8d8e4d7bfcfba16b295381f8a082e8a3989efabea588025d6fc2871b85d3a1a3\"" Apr 30 01:21:49.691723 kubelet[2758]: I0430 01:21:49.691632 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7845cbd476-mzk7k" podStartSLOduration=24.301411628 podStartE2EDuration="27.691610226s" podCreationTimestamp="2025-04-30 01:21:22 +0000 UTC" firstStartedPulling="2025-04-30 01:21:45.825416611 +0000 UTC m=+43.630658571" lastFinishedPulling="2025-04-30 01:21:49.215615249 +0000 UTC m=+47.020857169" observedRunningTime="2025-04-30 01:21:49.672344214 +0000 UTC m=+47.477586174" watchObservedRunningTime="2025-04-30 01:21:49.691610226 +0000 UTC m=+47.496852186" Apr 30 01:21:49.705069 containerd[1492]: 2025-04-30 01:21:49.520 [INFO][4671] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--a--62378e86a2-k8s-calico--kube--controllers--65bb9bf8f8--mz854-eth0 calico-kube-controllers-65bb9bf8f8- calico-system 94b1a086-40fc-41bb-8514-0b3e4bfe8cc0 821 0 2025-04-30 01:21:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:65bb9bf8f8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 
ci-4081-3-3-a-62378e86a2 calico-kube-controllers-65bb9bf8f8-mz854 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali148ea6d1dc9 [] []}} ContainerID="cae257d58628b4d055e3467c4d415288ae22cbf67f9e9654ab5502c513c1456b" Namespace="calico-system" Pod="calico-kube-controllers-65bb9bf8f8-mz854" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-calico--kube--controllers--65bb9bf8f8--mz854-" Apr 30 01:21:49.705069 containerd[1492]: 2025-04-30 01:21:49.520 [INFO][4671] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cae257d58628b4d055e3467c4d415288ae22cbf67f9e9654ab5502c513c1456b" Namespace="calico-system" Pod="calico-kube-controllers-65bb9bf8f8-mz854" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-calico--kube--controllers--65bb9bf8f8--mz854-eth0" Apr 30 01:21:49.705069 containerd[1492]: 2025-04-30 01:21:49.560 [INFO][4682] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cae257d58628b4d055e3467c4d415288ae22cbf67f9e9654ab5502c513c1456b" HandleID="k8s-pod-network.cae257d58628b4d055e3467c4d415288ae22cbf67f9e9654ab5502c513c1456b" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--kube--controllers--65bb9bf8f8--mz854-eth0" Apr 30 01:21:49.705069 containerd[1492]: 2025-04-30 01:21:49.581 [INFO][4682] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cae257d58628b4d055e3467c4d415288ae22cbf67f9e9654ab5502c513c1456b" HandleID="k8s-pod-network.cae257d58628b4d055e3467c4d415288ae22cbf67f9e9654ab5502c513c1456b" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--kube--controllers--65bb9bf8f8--mz854-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028c7d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-a-62378e86a2", "pod":"calico-kube-controllers-65bb9bf8f8-mz854", "timestamp":"2025-04-30 01:21:49.560680572 +0000 UTC"}, Hostname:"ci-4081-3-3-a-62378e86a2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 01:21:49.705069 containerd[1492]: 2025-04-30 01:21:49.582 [INFO][4682] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:21:49.705069 containerd[1492]: 2025-04-30 01:21:49.582 [INFO][4682] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 01:21:49.705069 containerd[1492]: 2025-04-30 01:21:49.582 [INFO][4682] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-a-62378e86a2' Apr 30 01:21:49.705069 containerd[1492]: 2025-04-30 01:21:49.584 [INFO][4682] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cae257d58628b4d055e3467c4d415288ae22cbf67f9e9654ab5502c513c1456b" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:49.705069 containerd[1492]: 2025-04-30 01:21:49.593 [INFO][4682] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:49.705069 containerd[1492]: 2025-04-30 01:21:49.605 [INFO][4682] ipam/ipam.go 489: Trying affinity for 192.168.115.192/26 host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:49.705069 containerd[1492]: 2025-04-30 01:21:49.610 [INFO][4682] ipam/ipam.go 155: Attempting to load block cidr=192.168.115.192/26 host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:49.705069 containerd[1492]: 2025-04-30 01:21:49.615 [INFO][4682] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.192/26 host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:49.705069 containerd[1492]: 2025-04-30 01:21:49.615 [INFO][4682] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.192/26 handle="k8s-pod-network.cae257d58628b4d055e3467c4d415288ae22cbf67f9e9654ab5502c513c1456b" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:49.705069 containerd[1492]: 2025-04-30 01:21:49.619 [INFO][4682] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cae257d58628b4d055e3467c4d415288ae22cbf67f9e9654ab5502c513c1456b Apr 30 01:21:49.705069 containerd[1492]: 2025-04-30 01:21:49.626 [INFO][4682] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.115.192/26 handle="k8s-pod-network.cae257d58628b4d055e3467c4d415288ae22cbf67f9e9654ab5502c513c1456b" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:49.705069 containerd[1492]: 2025-04-30 01:21:49.647 [INFO][4682] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.115.197/26] block=192.168.115.192/26 handle="k8s-pod-network.cae257d58628b4d055e3467c4d415288ae22cbf67f9e9654ab5502c513c1456b" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:49.705069 containerd[1492]: 2025-04-30 01:21:49.647 [INFO][4682] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.197/26] handle="k8s-pod-network.cae257d58628b4d055e3467c4d415288ae22cbf67f9e9654ab5502c513c1456b" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:49.705069 containerd[1492]: 2025-04-30 01:21:49.647 [INFO][4682] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 01:21:49.705069 containerd[1492]: 2025-04-30 01:21:49.647 [INFO][4682] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.115.197/26] IPv6=[] ContainerID="cae257d58628b4d055e3467c4d415288ae22cbf67f9e9654ab5502c513c1456b" HandleID="k8s-pod-network.cae257d58628b4d055e3467c4d415288ae22cbf67f9e9654ab5502c513c1456b" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--kube--controllers--65bb9bf8f8--mz854-eth0" Apr 30 01:21:49.706620 containerd[1492]: 2025-04-30 01:21:49.651 [INFO][4671] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cae257d58628b4d055e3467c4d415288ae22cbf67f9e9654ab5502c513c1456b" Namespace="calico-system" Pod="calico-kube-controllers-65bb9bf8f8-mz854" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-calico--kube--controllers--65bb9bf8f8--mz854-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--a--62378e86a2-k8s-calico--kube--controllers--65bb9bf8f8--mz854-eth0", GenerateName:"calico-kube-controllers-65bb9bf8f8-", Namespace:"calico-system", SelfLink:"", UID:"94b1a086-40fc-41bb-8514-0b3e4bfe8cc0", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65bb9bf8f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-a-62378e86a2", ContainerID:"", Pod:"calico-kube-controllers-65bb9bf8f8-mz854", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali148ea6d1dc9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:21:49.706620 containerd[1492]: 2025-04-30 01:21:49.651 [INFO][4671] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.115.197/32] ContainerID="cae257d58628b4d055e3467c4d415288ae22cbf67f9e9654ab5502c513c1456b" Namespace="calico-system" Pod="calico-kube-controllers-65bb9bf8f8-mz854" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-calico--kube--controllers--65bb9bf8f8--mz854-eth0" Apr 30 01:21:49.706620 containerd[1492]: 2025-04-30 01:21:49.651 [INFO][4671] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali148ea6d1dc9 ContainerID="cae257d58628b4d055e3467c4d415288ae22cbf67f9e9654ab5502c513c1456b" Namespace="calico-system" Pod="calico-kube-controllers-65bb9bf8f8-mz854" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-calico--kube--controllers--65bb9bf8f8--mz854-eth0" Apr 30 01:21:49.706620 containerd[1492]: 2025-04-30 01:21:49.658 [INFO][4671] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cae257d58628b4d055e3467c4d415288ae22cbf67f9e9654ab5502c513c1456b" Namespace="calico-system" Pod="calico-kube-controllers-65bb9bf8f8-mz854" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-calico--kube--controllers--65bb9bf8f8--mz854-eth0" Apr 30 
01:21:49.706620 containerd[1492]: 2025-04-30 01:21:49.658 [INFO][4671] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cae257d58628b4d055e3467c4d415288ae22cbf67f9e9654ab5502c513c1456b" Namespace="calico-system" Pod="calico-kube-controllers-65bb9bf8f8-mz854" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-calico--kube--controllers--65bb9bf8f8--mz854-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--a--62378e86a2-k8s-calico--kube--controllers--65bb9bf8f8--mz854-eth0", GenerateName:"calico-kube-controllers-65bb9bf8f8-", Namespace:"calico-system", SelfLink:"", UID:"94b1a086-40fc-41bb-8514-0b3e4bfe8cc0", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65bb9bf8f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-a-62378e86a2", ContainerID:"cae257d58628b4d055e3467c4d415288ae22cbf67f9e9654ab5502c513c1456b", Pod:"calico-kube-controllers-65bb9bf8f8-mz854", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali148ea6d1dc9", MAC:"3a:44:cb:29:fc:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:21:49.706620 containerd[1492]: 2025-04-30 01:21:49.702 [INFO][4671] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cae257d58628b4d055e3467c4d415288ae22cbf67f9e9654ab5502c513c1456b" Namespace="calico-system" Pod="calico-kube-controllers-65bb9bf8f8-mz854" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-calico--kube--controllers--65bb9bf8f8--mz854-eth0" Apr 30 01:21:49.781027 systemd[1]: Started cri-containerd-8d8e4d7bfcfba16b295381f8a082e8a3989efabea588025d6fc2871b85d3a1a3.scope - libcontainer container 8d8e4d7bfcfba16b295381f8a082e8a3989efabea588025d6fc2871b85d3a1a3. Apr 30 01:21:49.786247 containerd[1492]: time="2025-04-30T01:21:49.786128194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:21:49.786388 containerd[1492]: time="2025-04-30T01:21:49.786277118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:21:49.786388 containerd[1492]: time="2025-04-30T01:21:49.786312999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:49.788928 containerd[1492]: time="2025-04-30T01:21:49.786979258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:49.819565 systemd[1]: Started cri-containerd-cae257d58628b4d055e3467c4d415288ae22cbf67f9e9654ab5502c513c1456b.scope - libcontainer container cae257d58628b4d055e3467c4d415288ae22cbf67f9e9654ab5502c513c1456b. Apr 30 01:21:49.841984 containerd[1492]: time="2025-04-30T01:21:49.841936815Z" level=info msg="StartContainer for \"8d8e4d7bfcfba16b295381f8a082e8a3989efabea588025d6fc2871b85d3a1a3\" returns successfully" Apr 30 01:21:49.886236 containerd[1492]: time="2025-04-30T01:21:49.886191756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65bb9bf8f8-mz854,Uid:94b1a086-40fc-41bb-8514-0b3e4bfe8cc0,Namespace:calico-system,Attempt:1,} returns sandbox id \"cae257d58628b4d055e3467c4d415288ae22cbf67f9e9654ab5502c513c1456b\"" Apr 30 01:21:49.891963 containerd[1492]: time="2025-04-30T01:21:49.891918554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" Apr 30 01:21:50.556524 systemd[1]: run-containerd-runc-k8s.io-8d8e4d7bfcfba16b295381f8a082e8a3989efabea588025d6fc2871b85d3a1a3-runc.8tR81s.mount: Deactivated successfully. Apr 30 01:21:50.642235 kubelet[2758]: I0430 01:21:50.641537 2758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 01:21:51.345885 containerd[1492]: time="2025-04-30T01:21:51.345819439Z" level=info msg="StopPodSandbox for \"1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78\"" Apr 30 01:21:51.412001 kubelet[2758]: I0430 01:21:51.410390 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7845cbd476-bf752" podStartSLOduration=27.930363122 podStartE2EDuration="29.410362783s" podCreationTimestamp="2025-04-30 01:21:22 +0000 UTC" firstStartedPulling="2025-04-30 01:21:48.117441126 +0000 UTC m=+45.922683086" lastFinishedPulling="2025-04-30 01:21:49.597440827 +0000 UTC m=+47.402682747" observedRunningTime="2025-04-30 01:21:50.662421159 +0000 UTC m=+48.467663159" watchObservedRunningTime="2025-04-30 01:21:51.410362783 +0000 UTC m=+49.215604863" Apr 30 01:21:51.466695 containerd[1492]: 2025-04-30 01:21:51.414 [INFO][4800] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" Apr 30 01:21:51.466695 containerd[1492]: 2025-04-30 01:21:51.414 [INFO][4800] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" iface="eth0" netns="/var/run/netns/cni-6bff17a9-c968-a451-5fa4-cfbff7ba3657" Apr 30 01:21:51.466695 containerd[1492]: 2025-04-30 01:21:51.414 [INFO][4800] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" iface="eth0" netns="/var/run/netns/cni-6bff17a9-c968-a451-5fa4-cfbff7ba3657" Apr 30 01:21:51.466695 containerd[1492]: 2025-04-30 01:21:51.415 [INFO][4800] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" iface="eth0" netns="/var/run/netns/cni-6bff17a9-c968-a451-5fa4-cfbff7ba3657" Apr 30 01:21:51.466695 containerd[1492]: 2025-04-30 01:21:51.415 [INFO][4800] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" Apr 30 01:21:51.466695 containerd[1492]: 2025-04-30 01:21:51.415 [INFO][4800] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" Apr 30 01:21:51.466695 containerd[1492]: 2025-04-30 01:21:51.447 [INFO][4808] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" HandleID="k8s-pod-network.1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" Workload="ci--4081--3--3--a--62378e86a2-k8s-csi--node--driver--7snxl-eth0" Apr 30 01:21:51.466695 containerd[1492]: 2025-04-30 01:21:51.447 [INFO][4808] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:21:51.466695 containerd[1492]: 2025-04-30 01:21:51.447 [INFO][4808] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:21:51.466695 containerd[1492]: 2025-04-30 01:21:51.458 [WARNING][4808] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" HandleID="k8s-pod-network.1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" Workload="ci--4081--3--3--a--62378e86a2-k8s-csi--node--driver--7snxl-eth0" Apr 30 01:21:51.466695 containerd[1492]: 2025-04-30 01:21:51.459 [INFO][4808] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" HandleID="k8s-pod-network.1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" Workload="ci--4081--3--3--a--62378e86a2-k8s-csi--node--driver--7snxl-eth0" Apr 30 01:21:51.466695 containerd[1492]: 2025-04-30 01:21:51.461 [INFO][4808] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:21:51.466695 containerd[1492]: 2025-04-30 01:21:51.464 [INFO][4800] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" Apr 30 01:21:51.467224 containerd[1492]: time="2025-04-30T01:21:51.467179868Z" level=info msg="TearDown network for sandbox \"1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78\" successfully" Apr 30 01:21:51.469360 containerd[1492]: time="2025-04-30T01:21:51.467223870Z" level=info msg="StopPodSandbox for \"1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78\" returns successfully" Apr 30 01:21:51.469844 systemd[1]: run-netns-cni\x2d6bff17a9\x2dc968\x2da451\x2d5fa4\x2dcfbff7ba3657.mount: Deactivated successfully. 
Apr 30 01:21:51.471234 containerd[1492]: time="2025-04-30T01:21:51.470372399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7snxl,Uid:c03937bb-3188-4349-9e30-94ddeb810bb2,Namespace:calico-system,Attempt:1,}" Apr 30 01:21:51.623536 systemd-networkd[1376]: cali148ea6d1dc9: Gained IPv6LL Apr 30 01:21:51.643754 kubelet[2758]: I0430 01:21:51.642686 2758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 01:21:51.666105 systemd-networkd[1376]: calif7176af9993: Link UP Apr 30 01:21:51.666591 systemd-networkd[1376]: calif7176af9993: Gained carrier Apr 30 01:21:51.685280 containerd[1492]: 2025-04-30 01:21:51.553 [INFO][4814] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--a--62378e86a2-k8s-csi--node--driver--7snxl-eth0 csi-node-driver- calico-system c03937bb-3188-4349-9e30-94ddeb810bb2 841 0 2025-04-30 01:21:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-3-a-62378e86a2 csi-node-driver-7snxl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif7176af9993 [] []}} ContainerID="bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7" Namespace="calico-system" Pod="csi-node-driver-7snxl" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-csi--node--driver--7snxl-" Apr 30 01:21:51.685280 containerd[1492]: 2025-04-30 01:21:51.553 [INFO][4814] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7" Namespace="calico-system" Pod="csi-node-driver-7snxl" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-csi--node--driver--7snxl-eth0" Apr 30 01:21:51.685280 containerd[1492]: 2025-04-30 01:21:51.594 [INFO][4827] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7" HandleID="k8s-pod-network.bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7" Workload="ci--4081--3--3--a--62378e86a2-k8s-csi--node--driver--7snxl-eth0" Apr 30 01:21:51.685280 containerd[1492]: 2025-04-30 01:21:51.611 [INFO][4827] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7" HandleID="k8s-pod-network.bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7" Workload="ci--4081--3--3--a--62378e86a2-k8s-csi--node--driver--7snxl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000333070), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-a-62378e86a2", "pod":"csi-node-driver-7snxl", "timestamp":"2025-04-30 01:21:51.594851516 +0000 UTC"}, Hostname:"ci-4081-3-3-a-62378e86a2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 01:21:51.685280 containerd[1492]: 2025-04-30 01:21:51.611 [INFO][4827] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:21:51.685280 containerd[1492]: 2025-04-30 01:21:51.611 [INFO][4827] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 01:21:51.685280 containerd[1492]: 2025-04-30 01:21:51.611 [INFO][4827] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-a-62378e86a2' Apr 30 01:21:51.685280 containerd[1492]: 2025-04-30 01:21:51.614 [INFO][4827] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:51.685280 containerd[1492]: 2025-04-30 01:21:51.620 [INFO][4827] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:51.685280 containerd[1492]: 2025-04-30 01:21:51.632 [INFO][4827] ipam/ipam.go 489: Trying affinity for 192.168.115.192/26 host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:51.685280 containerd[1492]: 2025-04-30 01:21:51.636 [INFO][4827] ipam/ipam.go 155: Attempting to load block cidr=192.168.115.192/26 host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:51.685280 containerd[1492]: 2025-04-30 01:21:51.640 [INFO][4827] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.192/26 host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:51.685280 containerd[1492]: 2025-04-30 01:21:51.640 [INFO][4827] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.192/26 handle="k8s-pod-network.bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:51.685280 containerd[1492]: 2025-04-30 01:21:51.644 [INFO][4827] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7 Apr 30 01:21:51.685280 containerd[1492]: 2025-04-30 01:21:51.650 [INFO][4827] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.115.192/26 handle="k8s-pod-network.bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:51.685280 containerd[1492]: 2025-04-30 01:21:51.660 [INFO][4827] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.115.198/26] block=192.168.115.192/26 handle="k8s-pod-network.bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:51.685280 containerd[1492]: 2025-04-30 01:21:51.660 [INFO][4827] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.198/26] handle="k8s-pod-network.bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7" host="ci-4081-3-3-a-62378e86a2" Apr 30 01:21:51.685280 containerd[1492]: 2025-04-30 01:21:51.660 [INFO][4827] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 01:21:51.685280 containerd[1492]: 2025-04-30 01:21:51.660 [INFO][4827] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.115.198/26] IPv6=[] ContainerID="bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7" HandleID="k8s-pod-network.bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7" Workload="ci--4081--3--3--a--62378e86a2-k8s-csi--node--driver--7snxl-eth0" Apr 30 01:21:51.687407 containerd[1492]: 2025-04-30 01:21:51.663 [INFO][4814] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7" Namespace="calico-system" Pod="csi-node-driver-7snxl" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-csi--node--driver--7snxl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--a--62378e86a2-k8s-csi--node--driver--7snxl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c03937bb-3188-4349-9e30-94ddeb810bb2", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-a-62378e86a2", ContainerID:"", Pod:"csi-node-driver-7snxl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif7176af9993", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:21:51.687407 containerd[1492]: 2025-04-30 01:21:51.663 [INFO][4814] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.115.198/32] ContainerID="bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7" Namespace="calico-system" Pod="csi-node-driver-7snxl" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-csi--node--driver--7snxl-eth0" Apr 30 01:21:51.687407 containerd[1492]: 2025-04-30 01:21:51.663 [INFO][4814] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7176af9993 ContainerID="bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7" Namespace="calico-system" Pod="csi-node-driver-7snxl" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-csi--node--driver--7snxl-eth0" Apr 30 01:21:51.687407 containerd[1492]: 2025-04-30 01:21:51.665 [INFO][4814] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7" Namespace="calico-system" Pod="csi-node-driver-7snxl" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-csi--node--driver--7snxl-eth0" Apr 30 01:21:51.687407 containerd[1492]: 2025-04-30 01:21:51.666 [INFO][4814] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7" Namespace="calico-system" Pod="csi-node-driver-7snxl" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-csi--node--driver--7snxl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--a--62378e86a2-k8s-csi--node--driver--7snxl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c03937bb-3188-4349-9e30-94ddeb810bb2", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-a-62378e86a2", ContainerID:"bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7", Pod:"csi-node-driver-7snxl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif7176af9993", MAC:"3a:8a:68:0f:07:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:21:51.687407 containerd[1492]: 2025-04-30 01:21:51.681 [INFO][4814] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7" Namespace="calico-system" Pod="csi-node-driver-7snxl" WorkloadEndpoint="ci--4081--3--3--a--62378e86a2-k8s-csi--node--driver--7snxl-eth0" Apr 30 01:21:51.713155 containerd[1492]: time="2025-04-30T01:21:51.712772008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:21:51.713155 containerd[1492]: time="2025-04-30T01:21:51.712963973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:21:51.713155 containerd[1492]: time="2025-04-30T01:21:51.713021015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:51.713370 containerd[1492]: time="2025-04-30T01:21:51.713197620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:51.740022 systemd[1]: Started cri-containerd-bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7.scope - libcontainer container bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7. 
Apr 30 01:21:51.769607 containerd[1492]: time="2025-04-30T01:21:51.769542372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7snxl,Uid:c03937bb-3188-4349-9e30-94ddeb810bb2,Namespace:calico-system,Attempt:1,} returns sandbox id \"bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7\"" Apr 30 01:21:52.380759 containerd[1492]: time="2025-04-30T01:21:52.380628117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:52.382563 containerd[1492]: time="2025-04-30T01:21:52.382117120Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" Apr 30 01:21:52.383534 containerd[1492]: time="2025-04-30T01:21:52.383479479Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:52.386849 containerd[1492]: time="2025-04-30T01:21:52.386703171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:52.387620 containerd[1492]: time="2025-04-30T01:21:52.387573075Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 2.49560512s" Apr 30 01:21:52.387620 containerd[1492]: time="2025-04-30T01:21:52.387617557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" Apr 30 01:21:52.390346 containerd[1492]: time="2025-04-30T01:21:52.390181630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 01:21:52.407183 containerd[1492]: time="2025-04-30T01:21:52.407008311Z" level=info msg="CreateContainer within sandbox \"cae257d58628b4d055e3467c4d415288ae22cbf67f9e9654ab5502c513c1456b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 01:21:52.443998 containerd[1492]: time="2025-04-30T01:21:52.443923525Z" level=info msg="CreateContainer within sandbox \"cae257d58628b4d055e3467c4d415288ae22cbf67f9e9654ab5502c513c1456b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f6c617023ff25f6f86ac7d0e79ac0a08c385cc903b42b5f33063093ce8bf7d9a\"" Apr 30 01:21:52.446129 containerd[1492]: time="2025-04-30T01:21:52.444761029Z" level=info msg="StartContainer for \"f6c617023ff25f6f86ac7d0e79ac0a08c385cc903b42b5f33063093ce8bf7d9a\"" Apr 30 01:21:52.480968 systemd[1]: Started cri-containerd-f6c617023ff25f6f86ac7d0e79ac0a08c385cc903b42b5f33063093ce8bf7d9a.scope - libcontainer container f6c617023ff25f6f86ac7d0e79ac0a08c385cc903b42b5f33063093ce8bf7d9a. 
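Each PullImage line reports elapsed wall time: 2.49560512s for kube-controllers above, versus only ~379ms for the second apiserver pull earlier, which read just 77 bytes, likely because the content was already local. A minimal reproduction of that measurement with the containerd client, using the same socket and image ref as the log:

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	start := time.Now()
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/kube-controllers:v3.29.3", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	// A second run should complete far faster, as with the apiserver pull above.
	log.Printf("pulled %s (%s) in %s", image.Name(), image.Target().Digest, time.Since(start))
}
```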
Apr 30 01:21:52.522129 containerd[1492]: time="2025-04-30T01:21:52.522033797Z" level=info msg="StartContainer for \"f6c617023ff25f6f86ac7d0e79ac0a08c385cc903b42b5f33063093ce8bf7d9a\" returns successfully" Apr 30 01:21:52.678018 kubelet[2758]: I0430 01:21:52.676073 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-65bb9bf8f8-mz854" podStartSLOduration=27.176483046 podStartE2EDuration="29.676055517s" podCreationTimestamp="2025-04-30 01:21:23 +0000 UTC" firstStartedPulling="2025-04-30 01:21:49.889811136 +0000 UTC m=+47.695053096" lastFinishedPulling="2025-04-30 01:21:52.389383647 +0000 UTC m=+50.194625567" observedRunningTime="2025-04-30 01:21:52.672702821 +0000 UTC m=+50.477944781" watchObservedRunningTime="2025-04-30 01:21:52.676055517 +0000 UTC m=+50.481297437" Apr 30 01:21:52.719442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1523934206.mount: Deactivated successfully. Apr 30 01:21:53.351315 systemd-networkd[1376]: calif7176af9993: Gained IPv6LL Apr 30 01:21:53.755693 containerd[1492]: time="2025-04-30T01:21:53.754904167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:53.760658 containerd[1492]: time="2025-04-30T01:21:53.760609732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" Apr 30 01:21:53.763088 containerd[1492]: time="2025-04-30T01:21:53.763018561Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:53.767502 containerd[1492]: time="2025-04-30T01:21:53.767445169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:53.770102 containerd[1492]: time="2025-04-30T01:21:53.769241181Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.37901995s" Apr 30 01:21:53.770102 containerd[1492]: time="2025-04-30T01:21:53.769298023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" Apr 30 01:21:53.774949 containerd[1492]: time="2025-04-30T01:21:53.774776821Z" level=info msg="CreateContainer within sandbox \"bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 01:21:53.822747 containerd[1492]: time="2025-04-30T01:21:53.821494650Z" level=info msg="CreateContainer within sandbox \"bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c474fc6b6d61d8520981050c201db4f29210f411b23ce6c2099a2e2dc781eb21\"" Apr 30 01:21:53.822747 containerd[1492]: time="2025-04-30T01:21:53.822602322Z" level=info msg="StartContainer for \"c474fc6b6d61d8520981050c201db4f29210f411b23ce6c2099a2e2dc781eb21\"" Apr 30 01:21:53.869022 systemd[1]: Started 
cri-containerd-c474fc6b6d61d8520981050c201db4f29210f411b23ce6c2099a2e2dc781eb21.scope - libcontainer container c474fc6b6d61d8520981050c201db4f29210f411b23ce6c2099a2e2dc781eb21. Apr 30 01:21:53.909140 containerd[1492]: time="2025-04-30T01:21:53.909094739Z" level=info msg="StartContainer for \"c474fc6b6d61d8520981050c201db4f29210f411b23ce6c2099a2e2dc781eb21\" returns successfully" Apr 30 01:21:53.912840 containerd[1492]: time="2025-04-30T01:21:53.912216469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 01:21:56.136058 containerd[1492]: time="2025-04-30T01:21:56.135992942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:56.137546 containerd[1492]: time="2025-04-30T01:21:56.137267740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" Apr 30 01:21:56.139107 containerd[1492]: time="2025-04-30T01:21:56.138602819Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:56.141959 containerd[1492]: time="2025-04-30T01:21:56.141890077Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:56.143031 containerd[1492]: time="2025-04-30T01:21:56.142963749Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 2.230699879s" Apr 30 01:21:56.143140 containerd[1492]: time="2025-04-30T01:21:56.143123634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" Apr 30 01:21:56.148201 containerd[1492]: time="2025-04-30T01:21:56.148136503Z" level=info msg="CreateContainer within sandbox \"bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 01:21:56.168734 containerd[1492]: time="2025-04-30T01:21:56.168657153Z" level=info msg="CreateContainer within sandbox \"bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"65c23c214c28ab6f004ef6af295b4b069d8c5f6d8d0fe4df1c05c2e96891d3eb\"" Apr 30 01:21:56.172180 containerd[1492]: time="2025-04-30T01:21:56.172135496Z" level=info msg="StartContainer for \"65c23c214c28ab6f004ef6af295b4b069d8c5f6d8d0fe4df1c05c2e96891d3eb\"" Apr 30 01:21:56.216938 systemd[1]: Started cri-containerd-65c23c214c28ab6f004ef6af295b4b069d8c5f6d8d0fe4df1c05c2e96891d3eb.scope - libcontainer container 65c23c214c28ab6f004ef6af295b4b069d8c5f6d8d0fe4df1c05c2e96891d3eb. 
Apr 30 01:21:56.247791 containerd[1492]: time="2025-04-30T01:21:56.247326811Z" level=info msg="StartContainer for \"65c23c214c28ab6f004ef6af295b4b069d8c5f6d8d0fe4df1c05c2e96891d3eb\" returns successfully" Apr 30 01:21:56.451564 kubelet[2758]: I0430 01:21:56.451107 2758 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 01:21:56.451564 kubelet[2758]: I0430 01:21:56.451155 2758 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 01:21:56.691739 kubelet[2758]: I0430 01:21:56.691315 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7snxl" podStartSLOduration=29.319312911 podStartE2EDuration="33.691294169s" podCreationTimestamp="2025-04-30 01:21:23 +0000 UTC" firstStartedPulling="2025-04-30 01:21:51.77194808 +0000 UTC m=+49.577190040" lastFinishedPulling="2025-04-30 01:21:56.143929338 +0000 UTC m=+53.949171298" observedRunningTime="2025-04-30 01:21:56.690964279 +0000 UTC m=+54.496206279" watchObservedRunningTime="2025-04-30 01:21:56.691294169 +0000 UTC m=+54.496536129" Apr 30 01:21:58.099306 kubelet[2758]: I0430 01:21:58.098804 2758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 01:22:00.261285 kubelet[2758]: I0430 01:22:00.260968 2758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 01:22:02.330475 containerd[1492]: time="2025-04-30T01:22:02.330264490Z" level=info msg="StopPodSandbox for \"d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5\"" Apr 30 01:22:02.436879 containerd[1492]: 2025-04-30 01:22:02.393 [WARNING][5055] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--bf752-eth0", GenerateName:"calico-apiserver-7845cbd476-", Namespace:"calico-apiserver", SelfLink:"", UID:"1d8395dc-833d-4a09-97bb-4b7ca67c4458", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7845cbd476", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-a-62378e86a2", ContainerID:"c2a9bf7e2b54c08cc00a3f5cab4837e5ee07cb563e7f52337672a964cd9f951d", Pod:"calico-apiserver-7845cbd476-bf752", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib2fff9e1599", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:22:02.436879 containerd[1492]: 2025-04-30 01:22:02.393 [INFO][5055] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" Apr 30 01:22:02.436879 containerd[1492]: 2025-04-30 01:22:02.393 [INFO][5055] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" iface="eth0" netns="" Apr 30 01:22:02.436879 containerd[1492]: 2025-04-30 01:22:02.394 [INFO][5055] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" Apr 30 01:22:02.436879 containerd[1492]: 2025-04-30 01:22:02.394 [INFO][5055] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" Apr 30 01:22:02.436879 containerd[1492]: 2025-04-30 01:22:02.418 [INFO][5064] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" HandleID="k8s-pod-network.d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--bf752-eth0" Apr 30 01:22:02.436879 containerd[1492]: 2025-04-30 01:22:02.418 [INFO][5064] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:22:02.436879 containerd[1492]: 2025-04-30 01:22:02.418 [INFO][5064] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:22:02.436879 containerd[1492]: 2025-04-30 01:22:02.428 [WARNING][5064] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" HandleID="k8s-pod-network.d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--bf752-eth0" Apr 30 01:22:02.436879 containerd[1492]: 2025-04-30 01:22:02.429 [INFO][5064] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" HandleID="k8s-pod-network.d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--bf752-eth0" Apr 30 01:22:02.436879 containerd[1492]: 2025-04-30 01:22:02.431 [INFO][5064] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:22:02.436879 containerd[1492]: 2025-04-30 01:22:02.432 [INFO][5055] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" Apr 30 01:22:02.436879 containerd[1492]: time="2025-04-30T01:22:02.435983150Z" level=info msg="TearDown network for sandbox \"d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5\" successfully" Apr 30 01:22:02.436879 containerd[1492]: time="2025-04-30T01:22:02.436007070Z" level=info msg="StopPodSandbox for \"d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5\" returns successfully" Apr 30 01:22:02.436879 containerd[1492]: time="2025-04-30T01:22:02.436563088Z" level=info msg="RemovePodSandbox for \"d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5\"" Apr 30 01:22:02.454916 containerd[1492]: time="2025-04-30T01:22:02.454522688Z" level=info msg="Forcibly stopping sandbox \"d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5\"" Apr 30 01:22:02.549932 containerd[1492]: 2025-04-30 01:22:02.505 [WARNING][5082] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--bf752-eth0", GenerateName:"calico-apiserver-7845cbd476-", Namespace:"calico-apiserver", SelfLink:"", UID:"1d8395dc-833d-4a09-97bb-4b7ca67c4458", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7845cbd476", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-a-62378e86a2", ContainerID:"c2a9bf7e2b54c08cc00a3f5cab4837e5ee07cb563e7f52337672a964cd9f951d", Pod:"calico-apiserver-7845cbd476-bf752", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib2fff9e1599", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:22:02.549932 containerd[1492]: 2025-04-30 01:22:02.506 [INFO][5082] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" Apr 30 01:22:02.549932 containerd[1492]: 2025-04-30 01:22:02.506 [INFO][5082] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" iface="eth0" netns="" Apr 30 01:22:02.549932 containerd[1492]: 2025-04-30 01:22:02.506 [INFO][5082] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" Apr 30 01:22:02.549932 containerd[1492]: 2025-04-30 01:22:02.506 [INFO][5082] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" Apr 30 01:22:02.549932 containerd[1492]: 2025-04-30 01:22:02.528 [INFO][5090] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" HandleID="k8s-pod-network.d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--bf752-eth0" Apr 30 01:22:02.549932 containerd[1492]: 2025-04-30 01:22:02.528 [INFO][5090] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:22:02.549932 containerd[1492]: 2025-04-30 01:22:02.529 [INFO][5090] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:22:02.549932 containerd[1492]: 2025-04-30 01:22:02.541 [WARNING][5090] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" HandleID="k8s-pod-network.d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--bf752-eth0" Apr 30 01:22:02.549932 containerd[1492]: 2025-04-30 01:22:02.541 [INFO][5090] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" HandleID="k8s-pod-network.d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--bf752-eth0" Apr 30 01:22:02.549932 containerd[1492]: 2025-04-30 01:22:02.545 [INFO][5090] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:22:02.549932 containerd[1492]: 2025-04-30 01:22:02.547 [INFO][5082] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5" Apr 30 01:22:02.549932 containerd[1492]: time="2025-04-30T01:22:02.549558054Z" level=info msg="TearDown network for sandbox \"d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5\" successfully" Apr 30 01:22:02.556666 containerd[1492]: time="2025-04-30T01:22:02.555685925Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 01:22:02.556666 containerd[1492]: time="2025-04-30T01:22:02.555808289Z" level=info msg="RemovePodSandbox \"d7119782fbcc1f2ec607f1dc0cc0adc01c670eeba15b4b28d092cc17748a0fa5\" returns successfully" Apr 30 01:22:02.558388 containerd[1492]: time="2025-04-30T01:22:02.557831312Z" level=info msg="StopPodSandbox for \"eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7\"" Apr 30 01:22:02.656636 containerd[1492]: 2025-04-30 01:22:02.607 [WARNING][5108] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--kk96k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"73a62f7b-09e4-4b19-a621-c45cfcdd6957", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-a-62378e86a2", ContainerID:"5e380decbbac7cb04b9dd17c104a3b8895447c8c3aa5f81f1561d11eea90a572", Pod:"coredns-7db6d8ff4d-kk96k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali00fa1f1cc3d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:22:02.656636 containerd[1492]: 2025-04-30 01:22:02.608 [INFO][5108] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" Apr 30 01:22:02.656636 containerd[1492]: 2025-04-30 01:22:02.608 [INFO][5108] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" iface="eth0" netns="" Apr 30 01:22:02.656636 containerd[1492]: 2025-04-30 01:22:02.608 [INFO][5108] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" Apr 30 01:22:02.656636 containerd[1492]: 2025-04-30 01:22:02.608 [INFO][5108] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" Apr 30 01:22:02.656636 containerd[1492]: 2025-04-30 01:22:02.638 [INFO][5115] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" HandleID="k8s-pod-network.eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" Workload="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--kk96k-eth0" Apr 30 01:22:02.656636 containerd[1492]: 2025-04-30 01:22:02.639 [INFO][5115] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:22:02.656636 containerd[1492]: 2025-04-30 01:22:02.639 [INFO][5115] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 01:22:02.656636 containerd[1492]: 2025-04-30 01:22:02.651 [WARNING][5115] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" HandleID="k8s-pod-network.eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" Workload="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--kk96k-eth0" Apr 30 01:22:02.656636 containerd[1492]: 2025-04-30 01:22:02.651 [INFO][5115] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" HandleID="k8s-pod-network.eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" Workload="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--kk96k-eth0" Apr 30 01:22:02.656636 containerd[1492]: 2025-04-30 01:22:02.653 [INFO][5115] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:22:02.656636 containerd[1492]: 2025-04-30 01:22:02.655 [INFO][5108] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" Apr 30 01:22:02.656636 containerd[1492]: time="2025-04-30T01:22:02.656622636Z" level=info msg="TearDown network for sandbox \"eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7\" successfully" Apr 30 01:22:02.658313 containerd[1492]: time="2025-04-30T01:22:02.656650837Z" level=info msg="StopPodSandbox for \"eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7\" returns successfully" Apr 30 01:22:02.659109 containerd[1492]: time="2025-04-30T01:22:02.658663459Z" level=info msg="RemovePodSandbox for \"eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7\"" Apr 30 01:22:02.659109 containerd[1492]: time="2025-04-30T01:22:02.658801024Z" level=info msg="Forcibly stopping sandbox \"eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7\"" Apr 30 01:22:02.760479 containerd[1492]: 2025-04-30 01:22:02.712 [WARNING][5133] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--kk96k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"73a62f7b-09e4-4b19-a621-c45cfcdd6957", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-a-62378e86a2", ContainerID:"5e380decbbac7cb04b9dd17c104a3b8895447c8c3aa5f81f1561d11eea90a572", Pod:"coredns-7db6d8ff4d-kk96k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali00fa1f1cc3d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:22:02.760479 containerd[1492]: 2025-04-30 01:22:02.712 [INFO][5133] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" Apr 30 01:22:02.760479 containerd[1492]: 2025-04-30 01:22:02.712 [INFO][5133] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" iface="eth0" netns="" Apr 30 01:22:02.760479 containerd[1492]: 2025-04-30 01:22:02.712 [INFO][5133] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" Apr 30 01:22:02.760479 containerd[1492]: 2025-04-30 01:22:02.712 [INFO][5133] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" Apr 30 01:22:02.760479 containerd[1492]: 2025-04-30 01:22:02.743 [INFO][5140] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" HandleID="k8s-pod-network.eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" Workload="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--kk96k-eth0" Apr 30 01:22:02.760479 containerd[1492]: 2025-04-30 01:22:02.743 [INFO][5140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:22:02.760479 containerd[1492]: 2025-04-30 01:22:02.743 [INFO][5140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 01:22:02.760479 containerd[1492]: 2025-04-30 01:22:02.754 [WARNING][5140] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" HandleID="k8s-pod-network.eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" Workload="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--kk96k-eth0" Apr 30 01:22:02.760479 containerd[1492]: 2025-04-30 01:22:02.754 [INFO][5140] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" HandleID="k8s-pod-network.eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" Workload="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--kk96k-eth0" Apr 30 01:22:02.760479 containerd[1492]: 2025-04-30 01:22:02.757 [INFO][5140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:22:02.760479 containerd[1492]: 2025-04-30 01:22:02.759 [INFO][5133] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7" Apr 30 01:22:02.767008 containerd[1492]: time="2025-04-30T01:22:02.760542599Z" level=info msg="TearDown network for sandbox \"eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7\" successfully" Apr 30 01:22:02.772739 containerd[1492]: time="2025-04-30T01:22:02.772631656Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 01:22:02.772905 containerd[1492]: time="2025-04-30T01:22:02.772777301Z" level=info msg="RemovePodSandbox \"eb1625c1090f95df54329eb545fb1465afff4ced3048622df412fb59fff27da7\" returns successfully" Apr 30 01:22:02.773365 containerd[1492]: time="2025-04-30T01:22:02.773336318Z" level=info msg="StopPodSandbox for \"1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78\"" Apr 30 01:22:02.875442 containerd[1492]: 2025-04-30 01:22:02.822 [WARNING][5158] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--a--62378e86a2-k8s-csi--node--driver--7snxl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c03937bb-3188-4349-9e30-94ddeb810bb2", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-a-62378e86a2", ContainerID:"bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7", Pod:"csi-node-driver-7snxl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif7176af9993", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:22:02.875442 containerd[1492]: 2025-04-30 01:22:02.823 [INFO][5158] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" Apr 30 01:22:02.875442 containerd[1492]: 2025-04-30 01:22:02.823 [INFO][5158] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" iface="eth0" netns="" Apr 30 01:22:02.875442 containerd[1492]: 2025-04-30 01:22:02.823 [INFO][5158] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" Apr 30 01:22:02.875442 containerd[1492]: 2025-04-30 01:22:02.823 [INFO][5158] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" Apr 30 01:22:02.875442 containerd[1492]: 2025-04-30 01:22:02.856 [INFO][5165] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" HandleID="k8s-pod-network.1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" Workload="ci--4081--3--3--a--62378e86a2-k8s-csi--node--driver--7snxl-eth0" Apr 30 01:22:02.875442 containerd[1492]: 2025-04-30 01:22:02.857 [INFO][5165] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:22:02.875442 containerd[1492]: 2025-04-30 01:22:02.857 [INFO][5165] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:22:02.875442 containerd[1492]: 2025-04-30 01:22:02.869 [WARNING][5165] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" HandleID="k8s-pod-network.1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" Workload="ci--4081--3--3--a--62378e86a2-k8s-csi--node--driver--7snxl-eth0" Apr 30 01:22:02.875442 containerd[1492]: 2025-04-30 01:22:02.869 [INFO][5165] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" HandleID="k8s-pod-network.1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" Workload="ci--4081--3--3--a--62378e86a2-k8s-csi--node--driver--7snxl-eth0" Apr 30 01:22:02.875442 containerd[1492]: 2025-04-30 01:22:02.872 [INFO][5165] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:22:02.875442 containerd[1492]: 2025-04-30 01:22:02.873 [INFO][5158] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" Apr 30 01:22:02.875442 containerd[1492]: time="2025-04-30T01:22:02.875189297Z" level=info msg="TearDown network for sandbox \"1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78\" successfully" Apr 30 01:22:02.875442 containerd[1492]: time="2025-04-30T01:22:02.875223098Z" level=info msg="StopPodSandbox for \"1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78\" returns successfully" Apr 30 01:22:02.876840 containerd[1492]: time="2025-04-30T01:22:02.876600501Z" level=info msg="RemovePodSandbox for \"1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78\"" Apr 30 01:22:02.876840 containerd[1492]: time="2025-04-30T01:22:02.876703824Z" level=info msg="Forcibly stopping sandbox \"1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78\"" Apr 30 01:22:02.972537 containerd[1492]: 2025-04-30 01:22:02.930 [WARNING][5183] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--a--62378e86a2-k8s-csi--node--driver--7snxl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c03937bb-3188-4349-9e30-94ddeb810bb2", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-a-62378e86a2", ContainerID:"bda968a4389ec69167687e9e71318eb1aa76e9dd6c5b60c13e79597ced2175d7", Pod:"csi-node-driver-7snxl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif7176af9993", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:22:02.972537 containerd[1492]: 2025-04-30 01:22:02.931 [INFO][5183] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" Apr 30 01:22:02.972537 containerd[1492]: 2025-04-30 01:22:02.931 [INFO][5183] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" iface="eth0" netns="" Apr 30 01:22:02.972537 containerd[1492]: 2025-04-30 01:22:02.931 [INFO][5183] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" Apr 30 01:22:02.972537 containerd[1492]: 2025-04-30 01:22:02.931 [INFO][5183] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" Apr 30 01:22:02.972537 containerd[1492]: 2025-04-30 01:22:02.955 [INFO][5190] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" HandleID="k8s-pod-network.1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" Workload="ci--4081--3--3--a--62378e86a2-k8s-csi--node--driver--7snxl-eth0" Apr 30 01:22:02.972537 containerd[1492]: 2025-04-30 01:22:02.955 [INFO][5190] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:22:02.972537 containerd[1492]: 2025-04-30 01:22:02.955 [INFO][5190] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:22:02.972537 containerd[1492]: 2025-04-30 01:22:02.966 [WARNING][5190] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" HandleID="k8s-pod-network.1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" Workload="ci--4081--3--3--a--62378e86a2-k8s-csi--node--driver--7snxl-eth0" Apr 30 01:22:02.972537 containerd[1492]: 2025-04-30 01:22:02.966 [INFO][5190] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" HandleID="k8s-pod-network.1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" Workload="ci--4081--3--3--a--62378e86a2-k8s-csi--node--driver--7snxl-eth0" Apr 30 01:22:02.972537 containerd[1492]: 2025-04-30 01:22:02.969 [INFO][5190] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:22:02.972537 containerd[1492]: 2025-04-30 01:22:02.970 [INFO][5183] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78" Apr 30 01:22:02.973372 containerd[1492]: time="2025-04-30T01:22:02.972563176Z" level=info msg="TearDown network for sandbox \"1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78\" successfully" Apr 30 01:22:02.979721 containerd[1492]: time="2025-04-30T01:22:02.979546074Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 01:22:02.979864 containerd[1492]: time="2025-04-30T01:22:02.979819083Z" level=info msg="RemovePodSandbox \"1be74ea02fa46c2e90ac36e480c82a6819048f0924079753e232cdeed4069f78\" returns successfully" Apr 30 01:22:02.981242 containerd[1492]: time="2025-04-30T01:22:02.980670029Z" level=info msg="StopPodSandbox for \"dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b\"" Apr 30 01:22:03.083661 containerd[1492]: 2025-04-30 01:22:03.027 [WARNING][5208] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--a--62378e86a2-k8s-calico--kube--controllers--65bb9bf8f8--mz854-eth0", GenerateName:"calico-kube-controllers-65bb9bf8f8-", Namespace:"calico-system", SelfLink:"", UID:"94b1a086-40fc-41bb-8514-0b3e4bfe8cc0", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65bb9bf8f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-a-62378e86a2", ContainerID:"cae257d58628b4d055e3467c4d415288ae22cbf67f9e9654ab5502c513c1456b", Pod:"calico-kube-controllers-65bb9bf8f8-mz854", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali148ea6d1dc9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:22:03.083661 containerd[1492]: 2025-04-30 01:22:03.027 [INFO][5208] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" Apr 30 01:22:03.083661 containerd[1492]: 2025-04-30 01:22:03.027 [INFO][5208] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" iface="eth0" netns="" Apr 30 01:22:03.083661 containerd[1492]: 2025-04-30 01:22:03.027 [INFO][5208] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" Apr 30 01:22:03.083661 containerd[1492]: 2025-04-30 01:22:03.027 [INFO][5208] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" Apr 30 01:22:03.083661 containerd[1492]: 2025-04-30 01:22:03.062 [INFO][5215] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" HandleID="k8s-pod-network.dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--kube--controllers--65bb9bf8f8--mz854-eth0" Apr 30 01:22:03.083661 containerd[1492]: 2025-04-30 01:22:03.062 [INFO][5215] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:22:03.083661 containerd[1492]: 2025-04-30 01:22:03.062 [INFO][5215] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:22:03.083661 containerd[1492]: 2025-04-30 01:22:03.078 [WARNING][5215] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" HandleID="k8s-pod-network.dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--kube--controllers--65bb9bf8f8--mz854-eth0" Apr 30 01:22:03.083661 containerd[1492]: 2025-04-30 01:22:03.078 [INFO][5215] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" HandleID="k8s-pod-network.dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--kube--controllers--65bb9bf8f8--mz854-eth0" Apr 30 01:22:03.083661 containerd[1492]: 2025-04-30 01:22:03.080 [INFO][5215] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:22:03.083661 containerd[1492]: 2025-04-30 01:22:03.082 [INFO][5208] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" Apr 30 01:22:03.084349 containerd[1492]: time="2025-04-30T01:22:03.083719704Z" level=info msg="TearDown network for sandbox \"dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b\" successfully" Apr 30 01:22:03.084349 containerd[1492]: time="2025-04-30T01:22:03.083747745Z" level=info msg="StopPodSandbox for \"dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b\" returns successfully" Apr 30 01:22:03.085048 containerd[1492]: time="2025-04-30T01:22:03.084629972Z" level=info msg="RemovePodSandbox for \"dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b\"" Apr 30 01:22:03.085048 containerd[1492]: time="2025-04-30T01:22:03.084668813Z" level=info msg="Forcibly stopping sandbox \"dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b\"" Apr 30 01:22:03.165352 containerd[1492]: 2025-04-30 01:22:03.124 [WARNING][5233] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--a--62378e86a2-k8s-calico--kube--controllers--65bb9bf8f8--mz854-eth0", GenerateName:"calico-kube-controllers-65bb9bf8f8-", Namespace:"calico-system", SelfLink:"", UID:"94b1a086-40fc-41bb-8514-0b3e4bfe8cc0", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65bb9bf8f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-a-62378e86a2", ContainerID:"cae257d58628b4d055e3467c4d415288ae22cbf67f9e9654ab5502c513c1456b", Pod:"calico-kube-controllers-65bb9bf8f8-mz854", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali148ea6d1dc9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:22:03.165352 containerd[1492]: 2025-04-30 01:22:03.124 [INFO][5233] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" Apr 30 01:22:03.165352 containerd[1492]: 2025-04-30 01:22:03.124 [INFO][5233] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" iface="eth0" netns="" Apr 30 01:22:03.165352 containerd[1492]: 2025-04-30 01:22:03.124 [INFO][5233] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" Apr 30 01:22:03.165352 containerd[1492]: 2025-04-30 01:22:03.124 [INFO][5233] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" Apr 30 01:22:03.165352 containerd[1492]: 2025-04-30 01:22:03.146 [INFO][5240] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" HandleID="k8s-pod-network.dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--kube--controllers--65bb9bf8f8--mz854-eth0" Apr 30 01:22:03.165352 containerd[1492]: 2025-04-30 01:22:03.147 [INFO][5240] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:22:03.165352 containerd[1492]: 2025-04-30 01:22:03.147 [INFO][5240] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:22:03.165352 containerd[1492]: 2025-04-30 01:22:03.159 [WARNING][5240] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" HandleID="k8s-pod-network.dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--kube--controllers--65bb9bf8f8--mz854-eth0" Apr 30 01:22:03.165352 containerd[1492]: 2025-04-30 01:22:03.159 [INFO][5240] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" HandleID="k8s-pod-network.dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--kube--controllers--65bb9bf8f8--mz854-eth0" Apr 30 01:22:03.165352 containerd[1492]: 2025-04-30 01:22:03.161 [INFO][5240] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:22:03.165352 containerd[1492]: 2025-04-30 01:22:03.163 [INFO][5233] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b" Apr 30 01:22:03.165908 containerd[1492]: time="2025-04-30T01:22:03.165404871Z" level=info msg="TearDown network for sandbox \"dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b\" successfully" Apr 30 01:22:03.169718 containerd[1492]: time="2025-04-30T01:22:03.169614483Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 01:22:03.169831 containerd[1492]: time="2025-04-30T01:22:03.169802529Z" level=info msg="RemovePodSandbox \"dc2ef66e91f23f6e48da4379c4ed645c704e459feb8a2b2adf3f2f5c6b1d315b\" returns successfully" Apr 30 01:22:03.170764 containerd[1492]: time="2025-04-30T01:22:03.170675797Z" level=info msg="StopPodSandbox for \"e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67\"" Apr 30 01:22:03.263399 containerd[1492]: 2025-04-30 01:22:03.218 [WARNING][5258] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--5hdsg-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"989be597-1452-421c-833d-fc10aad2a4c3", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-a-62378e86a2", ContainerID:"159acd050b7c3c4e727290812103ff0cb05d51d92fc9a52df2c996332f81de35", Pod:"coredns-7db6d8ff4d-5hdsg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali698fafdfc95", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:22:03.263399 containerd[1492]: 2025-04-30 01:22:03.219 [INFO][5258] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" Apr 30 01:22:03.263399 containerd[1492]: 2025-04-30 01:22:03.219 [INFO][5258] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" iface="eth0" netns="" Apr 30 01:22:03.263399 containerd[1492]: 2025-04-30 01:22:03.219 [INFO][5258] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" Apr 30 01:22:03.263399 containerd[1492]: 2025-04-30 01:22:03.219 [INFO][5258] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" Apr 30 01:22:03.263399 containerd[1492]: 2025-04-30 01:22:03.244 [INFO][5265] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" HandleID="k8s-pod-network.e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" Workload="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--5hdsg-eth0" Apr 30 01:22:03.263399 containerd[1492]: 2025-04-30 01:22:03.244 [INFO][5265] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:22:03.263399 containerd[1492]: 2025-04-30 01:22:03.244 [INFO][5265] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 01:22:03.263399 containerd[1492]: 2025-04-30 01:22:03.258 [WARNING][5265] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" HandleID="k8s-pod-network.e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" Workload="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--5hdsg-eth0" Apr 30 01:22:03.263399 containerd[1492]: 2025-04-30 01:22:03.258 [INFO][5265] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" HandleID="k8s-pod-network.e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" Workload="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--5hdsg-eth0" Apr 30 01:22:03.263399 containerd[1492]: 2025-04-30 01:22:03.260 [INFO][5265] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:22:03.263399 containerd[1492]: 2025-04-30 01:22:03.261 [INFO][5258] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" Apr 30 01:22:03.263399 containerd[1492]: time="2025-04-30T01:22:03.263367030Z" level=info msg="TearDown network for sandbox \"e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67\" successfully" Apr 30 01:22:03.263399 containerd[1492]: time="2025-04-30T01:22:03.263402951Z" level=info msg="StopPodSandbox for \"e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67\" returns successfully" Apr 30 01:22:03.265449 containerd[1492]: time="2025-04-30T01:22:03.265389454Z" level=info msg="RemovePodSandbox for \"e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67\"" Apr 30 01:22:03.266406 containerd[1492]: time="2025-04-30T01:22:03.266369684Z" level=info msg="Forcibly stopping sandbox \"e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67\"" Apr 30 01:22:03.354959 containerd[1492]: 2025-04-30 01:22:03.311 [WARNING][5283] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--5hdsg-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"989be597-1452-421c-833d-fc10aad2a4c3", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-a-62378e86a2", ContainerID:"159acd050b7c3c4e727290812103ff0cb05d51d92fc9a52df2c996332f81de35", Pod:"coredns-7db6d8ff4d-5hdsg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali698fafdfc95", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:22:03.354959 containerd[1492]: 2025-04-30 01:22:03.312 [INFO][5283] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" Apr 30 01:22:03.354959 containerd[1492]: 2025-04-30 01:22:03.312 [INFO][5283] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" iface="eth0" netns="" Apr 30 01:22:03.354959 containerd[1492]: 2025-04-30 01:22:03.312 [INFO][5283] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" Apr 30 01:22:03.354959 containerd[1492]: 2025-04-30 01:22:03.312 [INFO][5283] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" Apr 30 01:22:03.354959 containerd[1492]: 2025-04-30 01:22:03.335 [INFO][5291] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" HandleID="k8s-pod-network.e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" Workload="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--5hdsg-eth0" Apr 30 01:22:03.354959 containerd[1492]: 2025-04-30 01:22:03.336 [INFO][5291] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:22:03.354959 containerd[1492]: 2025-04-30 01:22:03.336 [INFO][5291] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 01:22:03.354959 containerd[1492]: 2025-04-30 01:22:03.350 [WARNING][5291] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" HandleID="k8s-pod-network.e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" Workload="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--5hdsg-eth0"
Apr 30 01:22:03.354959 containerd[1492]: 2025-04-30 01:22:03.350 [INFO][5291] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" HandleID="k8s-pod-network.e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67" Workload="ci--4081--3--3--a--62378e86a2-k8s-coredns--7db6d8ff4d--5hdsg-eth0"
Apr 30 01:22:03.354959 containerd[1492]: 2025-04-30 01:22:03.352 [INFO][5291] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Apr 30 01:22:03.354959 containerd[1492]: 2025-04-30 01:22:03.353 [INFO][5283] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67"
Apr 30 01:22:03.355638 containerd[1492]: time="2025-04-30T01:22:03.354994670Z" level=info msg="TearDown network for sandbox \"e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67\" successfully"
Apr 30 01:22:03.367831 containerd[1492]: time="2025-04-30T01:22:03.367696309Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 01:22:03.367831 containerd[1492]: time="2025-04-30T01:22:03.367836954Z" level=info msg="RemovePodSandbox \"e24843893f3a2b8c0fd2b17a7bd68ad35afb56a90cdcd9bde6776ff7fd1dfe67\" returns successfully"
Apr 30 01:22:03.368592 containerd[1492]: time="2025-04-30T01:22:03.368539336Z" level=info msg="StopPodSandbox for \"73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06\""
Apr 30 01:22:03.462124 containerd[1492]: 2025-04-30 01:22:03.414 [WARNING][5310] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--mzk7k-eth0", GenerateName:"calico-apiserver-7845cbd476-", Namespace:"calico-apiserver", SelfLink:"", UID:"c0c262a3-b472-411f-9366-fa54ff571684", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7845cbd476", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-a-62378e86a2", ContainerID:"93f655677b8a64dba656ee85f2c125130a33dda4aa3c16bd28ecfdbb51728250", Pod:"calico-apiserver-7845cbd476-mzk7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali91d2100b650", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:22:03.462124 containerd[1492]: 2025-04-30 01:22:03.414 [INFO][5310] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" Apr 30 01:22:03.462124 containerd[1492]: 2025-04-30 01:22:03.414 [INFO][5310] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" iface="eth0" netns="" Apr 30 01:22:03.462124 containerd[1492]: 2025-04-30 01:22:03.414 [INFO][5310] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" Apr 30 01:22:03.462124 containerd[1492]: 2025-04-30 01:22:03.414 [INFO][5310] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" Apr 30 01:22:03.462124 containerd[1492]: 2025-04-30 01:22:03.439 [INFO][5317] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" HandleID="k8s-pod-network.73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--mzk7k-eth0" Apr 30 01:22:03.462124 containerd[1492]: 2025-04-30 01:22:03.439 [INFO][5317] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:22:03.462124 containerd[1492]: 2025-04-30 01:22:03.439 [INFO][5317] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:22:03.462124 containerd[1492]: 2025-04-30 01:22:03.453 [WARNING][5317] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" HandleID="k8s-pod-network.73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--mzk7k-eth0" Apr 30 01:22:03.462124 containerd[1492]: 2025-04-30 01:22:03.453 [INFO][5317] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" HandleID="k8s-pod-network.73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--mzk7k-eth0" Apr 30 01:22:03.462124 containerd[1492]: 2025-04-30 01:22:03.457 [INFO][5317] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:22:03.462124 containerd[1492]: 2025-04-30 01:22:03.459 [INFO][5310] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" Apr 30 01:22:03.463376 containerd[1492]: time="2025-04-30T01:22:03.462168399Z" level=info msg="TearDown network for sandbox \"73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06\" successfully" Apr 30 01:22:03.463376 containerd[1492]: time="2025-04-30T01:22:03.462197319Z" level=info msg="StopPodSandbox for \"73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06\" returns successfully" Apr 30 01:22:03.463376 containerd[1492]: time="2025-04-30T01:22:03.462838300Z" level=info msg="RemovePodSandbox for \"73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06\"" Apr 30 01:22:03.463376 containerd[1492]: time="2025-04-30T01:22:03.462885981Z" level=info msg="Forcibly stopping sandbox \"73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06\"" Apr 30 01:22:03.583305 containerd[1492]: 2025-04-30 01:22:03.529 [WARNING][5335] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--mzk7k-eth0", GenerateName:"calico-apiserver-7845cbd476-", Namespace:"calico-apiserver", SelfLink:"", UID:"c0c262a3-b472-411f-9366-fa54ff571684", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7845cbd476", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-a-62378e86a2", ContainerID:"93f655677b8a64dba656ee85f2c125130a33dda4aa3c16bd28ecfdbb51728250", Pod:"calico-apiserver-7845cbd476-mzk7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali91d2100b650", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:22:03.583305 containerd[1492]: 2025-04-30 01:22:03.529 [INFO][5335] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" Apr 30 01:22:03.583305 containerd[1492]: 2025-04-30 01:22:03.529 [INFO][5335] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" iface="eth0" netns="" Apr 30 01:22:03.583305 containerd[1492]: 2025-04-30 01:22:03.529 [INFO][5335] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" Apr 30 01:22:03.583305 containerd[1492]: 2025-04-30 01:22:03.529 [INFO][5335] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" Apr 30 01:22:03.583305 containerd[1492]: 2025-04-30 01:22:03.563 [INFO][5348] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" HandleID="k8s-pod-network.73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--mzk7k-eth0" Apr 30 01:22:03.583305 containerd[1492]: 2025-04-30 01:22:03.563 [INFO][5348] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:22:03.583305 containerd[1492]: 2025-04-30 01:22:03.563 [INFO][5348] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:22:03.583305 containerd[1492]: 2025-04-30 01:22:03.576 [WARNING][5348] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" HandleID="k8s-pod-network.73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--mzk7k-eth0" Apr 30 01:22:03.583305 containerd[1492]: 2025-04-30 01:22:03.576 [INFO][5348] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" HandleID="k8s-pod-network.73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" Workload="ci--4081--3--3--a--62378e86a2-k8s-calico--apiserver--7845cbd476--mzk7k-eth0" Apr 30 01:22:03.583305 containerd[1492]: 2025-04-30 01:22:03.578 [INFO][5348] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:22:03.583305 containerd[1492]: 2025-04-30 01:22:03.580 [INFO][5335] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06" Apr 30 01:22:03.584050 containerd[1492]: time="2025-04-30T01:22:03.583819462Z" level=info msg="TearDown network for sandbox \"73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06\" successfully" Apr 30 01:22:03.589964 containerd[1492]: time="2025-04-30T01:22:03.589890933Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 01:22:03.590240 containerd[1492]: time="2025-04-30T01:22:03.590018537Z" level=info msg="RemovePodSandbox \"73bb357396dd544dd146f944f9cfd4f2b4577bbbd24acea13c493f7eaef7dc06\" returns successfully" Apr 30 01:22:18.734127 systemd[1]: run-containerd-runc-k8s.io-f6c617023ff25f6f86ac7d0e79ac0a08c385cc903b42b5f33063093ce8bf7d9a-runc.RGKGvq.mount: Deactivated successfully. Apr 30 01:24:04.200839 systemd[1]: run-containerd-runc-k8s.io-f6c617023ff25f6f86ac7d0e79ac0a08c385cc903b42b5f33063093ce8bf7d9a-runc.iAqIiY.mount: Deactivated successfully. Apr 30 01:24:18.734122 systemd[1]: run-containerd-runc-k8s.io-f6c617023ff25f6f86ac7d0e79ac0a08c385cc903b42b5f33063093ce8bf7d9a-runc.WYaJKl.mount: Deactivated successfully. Apr 30 01:24:35.814495 systemd[1]: run-containerd-runc-k8s.io-7c705c3698d18d9083f9241adc8656b0ee0688f893fbc4392f228ccbb04b2b6c-runc.N22grr.mount: Deactivated successfully. Apr 30 01:25:04.202660 systemd[1]: run-containerd-runc-k8s.io-f6c617023ff25f6f86ac7d0e79ac0a08c385cc903b42b5f33063093ce8bf7d9a-runc.olCVYh.mount: Deactivated successfully. Apr 30 01:25:35.813545 systemd[1]: run-containerd-runc-k8s.io-7c705c3698d18d9083f9241adc8656b0ee0688f893fbc4392f228ccbb04b2b6c-runc.mlsvWd.mount: Deactivated successfully. Apr 30 01:25:45.813217 systemd[1]: Started sshd@7-168.119.50.83:22-139.178.68.195:35828.service - OpenSSH per-connection server daemon (139.178.68.195:35828). Apr 30 01:25:46.806310 sshd[5852]: Accepted publickey for core from 139.178.68.195 port 35828 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs Apr 30 01:25:46.810730 sshd[5852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:25:46.817206 systemd-logind[1460]: New session 8 of user core. Apr 30 01:25:46.824682 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 30 01:25:47.591854 sshd[5852]: pam_unix(sshd:session): session closed for user core
Apr 30 01:25:47.598355 systemd[1]: sshd@7-168.119.50.83:22-139.178.68.195:35828.service: Deactivated successfully.
Apr 30 01:25:47.601919 systemd[1]: session-8.scope: Deactivated successfully.
Apr 30 01:25:47.602977 systemd-logind[1460]: Session 8 logged out. Waiting for processes to exit.
Apr 30 01:25:47.604595 systemd-logind[1460]: Removed session 8.
Apr 30 01:25:52.769219 systemd[1]: Started sshd@8-168.119.50.83:22-139.178.68.195:35832.service - OpenSSH per-connection server daemon (139.178.68.195:35832).
Apr 30 01:25:53.747808 sshd[5869]: Accepted publickey for core from 139.178.68.195 port 35832 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:25:53.750139 sshd[5869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:25:53.756086 systemd-logind[1460]: New session 9 of user core.
Apr 30 01:25:53.762985 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 30 01:25:54.509993 sshd[5869]: pam_unix(sshd:session): session closed for user core
Apr 30 01:25:54.514973 systemd[1]: sshd@8-168.119.50.83:22-139.178.68.195:35832.service: Deactivated successfully.
Apr 30 01:25:54.519692 systemd[1]: session-9.scope: Deactivated successfully.
Apr 30 01:25:54.521210 systemd-logind[1460]: Session 9 logged out. Waiting for processes to exit.
Apr 30 01:25:54.524666 systemd-logind[1460]: Removed session 9.
Apr 30 01:25:59.692081 systemd[1]: Started sshd@9-168.119.50.83:22-139.178.68.195:49682.service - OpenSSH per-connection server daemon (139.178.68.195:49682).
Apr 30 01:26:00.683554 sshd[5883]: Accepted publickey for core from 139.178.68.195 port 49682 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:26:00.686491 sshd[5883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:26:00.693414 systemd-logind[1460]: New session 10 of user core.
Apr 30 01:26:00.696906 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 30 01:26:01.445221 sshd[5883]: pam_unix(sshd:session): session closed for user core
Apr 30 01:26:01.451687 systemd-logind[1460]: Session 10 logged out. Waiting for processes to exit.
Apr 30 01:26:01.452077 systemd[1]: sshd@9-168.119.50.83:22-139.178.68.195:49682.service: Deactivated successfully.
Apr 30 01:26:01.454538 systemd[1]: session-10.scope: Deactivated successfully.
Apr 30 01:26:01.455984 systemd-logind[1460]: Removed session 10.
Apr 30 01:26:01.623070 systemd[1]: Started sshd@10-168.119.50.83:22-139.178.68.195:49686.service - OpenSSH per-connection server daemon (139.178.68.195:49686).
Apr 30 01:26:02.620403 sshd[5897]: Accepted publickey for core from 139.178.68.195 port 49686 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:26:02.623183 sshd[5897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:26:02.628070 systemd-logind[1460]: New session 11 of user core.
Apr 30 01:26:02.636250 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 30 01:26:03.427381 sshd[5897]: pam_unix(sshd:session): session closed for user core
Apr 30 01:26:03.432861 systemd-logind[1460]: Session 11 logged out. Waiting for processes to exit.
Apr 30 01:26:03.433159 systemd[1]: sshd@10-168.119.50.83:22-139.178.68.195:49686.service: Deactivated successfully.
Apr 30 01:26:03.436520 systemd[1]: session-11.scope: Deactivated successfully.
Apr 30 01:26:03.439389 systemd-logind[1460]: Removed session 11.
Apr 30 01:26:03.603978 systemd[1]: Started sshd@11-168.119.50.83:22-139.178.68.195:49696.service - OpenSSH per-connection server daemon (139.178.68.195:49696).
Apr 30 01:26:04.590807 sshd[5910]: Accepted publickey for core from 139.178.68.195 port 49696 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:26:04.593397 sshd[5910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:26:04.598976 systemd-logind[1460]: New session 12 of user core.
Apr 30 01:26:04.608063 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 30 01:26:05.362267 sshd[5910]: pam_unix(sshd:session): session closed for user core
Apr 30 01:26:05.367934 systemd[1]: sshd@11-168.119.50.83:22-139.178.68.195:49696.service: Deactivated successfully.
Apr 30 01:26:05.371211 systemd[1]: session-12.scope: Deactivated successfully.
Apr 30 01:26:05.372532 systemd-logind[1460]: Session 12 logged out. Waiting for processes to exit.
Apr 30 01:26:05.374521 systemd-logind[1460]: Removed session 12.
Apr 30 01:26:10.540065 systemd[1]: Started sshd@12-168.119.50.83:22-139.178.68.195:32972.service - OpenSSH per-connection server daemon (139.178.68.195:32972).
Apr 30 01:26:11.538621 sshd[5966]: Accepted publickey for core from 139.178.68.195 port 32972 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:26:11.540026 sshd[5966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:26:11.551519 systemd-logind[1460]: New session 13 of user core.
Apr 30 01:26:11.555929 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 30 01:26:12.302166 sshd[5966]: pam_unix(sshd:session): session closed for user core
Apr 30 01:26:12.309144 systemd[1]: sshd@12-168.119.50.83:22-139.178.68.195:32972.service: Deactivated successfully.
Apr 30 01:26:12.313682 systemd[1]: session-13.scope: Deactivated successfully.
Apr 30 01:26:12.315047 systemd-logind[1460]: Session 13 logged out. Waiting for processes to exit.
Apr 30 01:26:12.316633 systemd-logind[1460]: Removed session 13.
Apr 30 01:26:12.477101 systemd[1]: Started sshd@13-168.119.50.83:22-139.178.68.195:32986.service - OpenSSH per-connection server daemon (139.178.68.195:32986).
Apr 30 01:26:13.478242 sshd[5978]: Accepted publickey for core from 139.178.68.195 port 32986 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:26:13.481737 sshd[5978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:26:13.489773 systemd-logind[1460]: New session 14 of user core.
Apr 30 01:26:13.494187 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 30 01:26:14.366123 sshd[5978]: pam_unix(sshd:session): session closed for user core
Apr 30 01:26:14.371500 systemd[1]: sshd@13-168.119.50.83:22-139.178.68.195:32986.service: Deactivated successfully.
Apr 30 01:26:14.374837 systemd[1]: session-14.scope: Deactivated successfully.
Apr 30 01:26:14.376911 systemd-logind[1460]: Session 14 logged out. Waiting for processes to exit.
Apr 30 01:26:14.378998 systemd-logind[1460]: Removed session 14.
Apr 30 01:26:14.541841 systemd[1]: Started sshd@14-168.119.50.83:22-139.178.68.195:32990.service - OpenSSH per-connection server daemon (139.178.68.195:32990).
Apr 30 01:26:15.526732 sshd[5990]: Accepted publickey for core from 139.178.68.195 port 32990 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:26:15.528414 sshd[5990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:26:15.537209 systemd-logind[1460]: New session 15 of user core.
Apr 30 01:26:15.540736 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 30 01:26:18.459074 sshd[5990]: pam_unix(sshd:session): session closed for user core
Apr 30 01:26:18.467336 systemd[1]: sshd@14-168.119.50.83:22-139.178.68.195:32990.service: Deactivated successfully.
Apr 30 01:26:18.472373 systemd[1]: session-15.scope: Deactivated successfully.
Apr 30 01:26:18.473599 systemd-logind[1460]: Session 15 logged out. Waiting for processes to exit.
Apr 30 01:26:18.477261 systemd-logind[1460]: Removed session 15.
Apr 30 01:26:18.636158 systemd[1]: Started sshd@15-168.119.50.83:22-139.178.68.195:46108.service - OpenSSH per-connection server daemon (139.178.68.195:46108).
Apr 30 01:26:19.621657 sshd[6013]: Accepted publickey for core from 139.178.68.195 port 46108 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:26:19.624298 sshd[6013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:26:19.630430 systemd-logind[1460]: New session 16 of user core.
Apr 30 01:26:19.636016 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 30 01:26:20.517329 sshd[6013]: pam_unix(sshd:session): session closed for user core
Apr 30 01:26:20.523510 systemd[1]: sshd@15-168.119.50.83:22-139.178.68.195:46108.service: Deactivated successfully.
Apr 30 01:26:20.526632 systemd[1]: session-16.scope: Deactivated successfully.
Apr 30 01:26:20.529055 systemd-logind[1460]: Session 16 logged out. Waiting for processes to exit.
Apr 30 01:26:20.530676 systemd-logind[1460]: Removed session 16.
Apr 30 01:26:20.700830 systemd[1]: Started sshd@16-168.119.50.83:22-139.178.68.195:46122.service - OpenSSH per-connection server daemon (139.178.68.195:46122).
Apr 30 01:26:21.674249 sshd[6045]: Accepted publickey for core from 139.178.68.195 port 46122 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:26:21.676251 sshd[6045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:26:21.683910 systemd-logind[1460]: New session 17 of user core.
Apr 30 01:26:21.690090 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 30 01:26:22.423984 sshd[6045]: pam_unix(sshd:session): session closed for user core
Apr 30 01:26:22.430554 systemd[1]: sshd@16-168.119.50.83:22-139.178.68.195:46122.service: Deactivated successfully.
Apr 30 01:26:22.435993 systemd[1]: session-17.scope: Deactivated successfully.
Apr 30 01:26:22.438405 systemd-logind[1460]: Session 17 logged out. Waiting for processes to exit.
Apr 30 01:26:22.439631 systemd-logind[1460]: Removed session 17.
Apr 30 01:26:27.606220 systemd[1]: Started sshd@17-168.119.50.83:22-139.178.68.195:60780.service - OpenSSH per-connection server daemon (139.178.68.195:60780).
Apr 30 01:26:28.605546 sshd[6063]: Accepted publickey for core from 139.178.68.195 port 60780 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:26:28.606996 sshd[6063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:26:28.612939 systemd-logind[1460]: New session 18 of user core.
Apr 30 01:26:28.619130 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 30 01:26:29.375464 sshd[6063]: pam_unix(sshd:session): session closed for user core
Apr 30 01:26:29.380138 systemd[1]: sshd@17-168.119.50.83:22-139.178.68.195:60780.service: Deactivated successfully.
Apr 30 01:26:29.384564 systemd[1]: session-18.scope: Deactivated successfully.
Apr 30 01:26:29.386837 systemd-logind[1460]: Session 18 logged out. Waiting for processes to exit.
Apr 30 01:26:29.388486 systemd-logind[1460]: Removed session 18.
Apr 30 01:26:34.205908 systemd[1]: run-containerd-runc-k8s.io-f6c617023ff25f6f86ac7d0e79ac0a08c385cc903b42b5f33063093ce8bf7d9a-runc.pXgPaI.mount: Deactivated successfully.
Apr 30 01:26:34.558937 systemd[1]: Started sshd@18-168.119.50.83:22-139.178.68.195:60792.service - OpenSSH per-connection server daemon (139.178.68.195:60792).
Apr 30 01:26:35.550121 sshd[6107]: Accepted publickey for core from 139.178.68.195 port 60792 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:26:35.552399 sshd[6107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:26:35.558351 systemd-logind[1460]: New session 19 of user core.
Apr 30 01:26:35.564034 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 30 01:26:36.303092 sshd[6107]: pam_unix(sshd:session): session closed for user core
Apr 30 01:26:36.308804 systemd-logind[1460]: Session 19 logged out. Waiting for processes to exit.
Apr 30 01:26:36.309531 systemd[1]: sshd@18-168.119.50.83:22-139.178.68.195:60792.service: Deactivated successfully.
Apr 30 01:26:36.314590 systemd[1]: session-19.scope: Deactivated successfully.
Apr 30 01:26:36.322306 systemd-logind[1460]: Removed session 19.
Apr 30 01:26:41.486628 systemd[1]: Started sshd@19-168.119.50.83:22-139.178.68.195:44062.service - OpenSSH per-connection server daemon (139.178.68.195:44062).
Apr 30 01:26:42.468941 sshd[6145]: Accepted publickey for core from 139.178.68.195 port 44062 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:26:42.471246 sshd[6145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:26:42.477540 systemd-logind[1460]: New session 20 of user core.
Apr 30 01:26:42.486062 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 30 01:26:43.237053 sshd[6145]: pam_unix(sshd:session): session closed for user core
Apr 30 01:26:43.240339 systemd[1]: sshd@19-168.119.50.83:22-139.178.68.195:44062.service: Deactivated successfully.
Apr 30 01:26:43.245589 systemd[1]: session-20.scope: Deactivated successfully.
Apr 30 01:26:43.247697 systemd-logind[1460]: Session 20 logged out. Waiting for processes to exit.
Apr 30 01:26:43.249151 systemd-logind[1460]: Removed session 20.
Apr 30 01:26:48.414204 systemd[1]: Started sshd@20-168.119.50.83:22-139.178.68.195:40380.service - OpenSSH per-connection server daemon (139.178.68.195:40380).
Apr 30 01:26:49.407913 sshd[6160]: Accepted publickey for core from 139.178.68.195 port 40380 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:26:49.409966 sshd[6160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:26:49.417150 systemd-logind[1460]: New session 21 of user core.
Apr 30 01:26:49.427104 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 30 01:26:50.182917 sshd[6160]: pam_unix(sshd:session): session closed for user core
Apr 30 01:26:50.192622 systemd[1]: sshd@20-168.119.50.83:22-139.178.68.195:40380.service: Deactivated successfully.
Apr 30 01:26:50.195373 systemd[1]: session-21.scope: Deactivated successfully.
Apr 30 01:26:50.197576 systemd-logind[1460]: Session 21 logged out. Waiting for processes to exit.
Apr 30 01:26:50.200039 systemd-logind[1460]: Removed session 21.
Apr 30 01:26:55.357242 systemd[1]: Started sshd@21-168.119.50.83:22-139.178.68.195:35742.service - OpenSSH per-connection server daemon (139.178.68.195:35742).
Apr 30 01:26:56.331895 sshd[6175]: Accepted publickey for core from 139.178.68.195 port 35742 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:26:56.335327 sshd[6175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:26:56.340705 systemd-logind[1460]: New session 22 of user core.
Apr 30 01:26:56.343919 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 30 01:26:57.083279 sshd[6175]: pam_unix(sshd:session): session closed for user core
Apr 30 01:26:57.090665 systemd-logind[1460]: Session 22 logged out. Waiting for processes to exit.
Apr 30 01:26:57.091126 systemd[1]: sshd@21-168.119.50.83:22-139.178.68.195:35742.service: Deactivated successfully.
Apr 30 01:26:57.094175 systemd[1]: session-22.scope: Deactivated successfully.
Apr 30 01:26:57.096488 systemd-logind[1460]: Removed session 22.
Apr 30 01:27:12.790718 systemd[1]: cri-containerd-4740d04cdd61e0e77059a38f135075e629ba577a4b4835f67444417e171e8296.scope: Deactivated successfully.
Apr 30 01:27:12.793065 systemd[1]: cri-containerd-4740d04cdd61e0e77059a38f135075e629ba577a4b4835f67444417e171e8296.scope: Consumed 6.266s CPU time, 24.5M memory peak, 0B memory swap peak.
Apr 30 01:27:12.818977 containerd[1492]: time="2025-04-30T01:27:12.818681894Z" level=info msg="shim disconnected" id=4740d04cdd61e0e77059a38f135075e629ba577a4b4835f67444417e171e8296 namespace=k8s.io
Apr 30 01:27:12.818977 containerd[1492]: time="2025-04-30T01:27:12.818966753Z" level=warning msg="cleaning up after shim disconnected" id=4740d04cdd61e0e77059a38f135075e629ba577a4b4835f67444417e171e8296 namespace=k8s.io
Apr 30 01:27:12.818977 containerd[1492]: time="2025-04-30T01:27:12.818977352Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 01:27:12.820527 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4740d04cdd61e0e77059a38f135075e629ba577a4b4835f67444417e171e8296-rootfs.mount: Deactivated successfully.
Apr 30 01:27:13.172030 systemd[1]: cri-containerd-016bd2b6f28549fb020c5a94d7d58787260ecf750f9704e7ac00acdb6c729b72.scope: Deactivated successfully.
Apr 30 01:27:13.172739 systemd[1]: cri-containerd-016bd2b6f28549fb020c5a94d7d58787260ecf750f9704e7ac00acdb6c729b72.scope: Consumed 6.166s CPU time.
Apr 30 01:27:13.198500 containerd[1492]: time="2025-04-30T01:27:13.198353364Z" level=info msg="shim disconnected" id=016bd2b6f28549fb020c5a94d7d58787260ecf750f9704e7ac00acdb6c729b72 namespace=k8s.io
Apr 30 01:27:13.198500 containerd[1492]: time="2025-04-30T01:27:13.198432398Z" level=warning msg="cleaning up after shim disconnected" id=016bd2b6f28549fb020c5a94d7d58787260ecf750f9704e7ac00acdb6c729b72 namespace=k8s.io
Apr 30 01:27:13.198500 containerd[1492]: time="2025-04-30T01:27:13.198442077Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 01:27:13.199574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-016bd2b6f28549fb020c5a94d7d58787260ecf750f9704e7ac00acdb6c729b72-rootfs.mount: Deactivated successfully.
Apr 30 01:27:13.229798 kubelet[2758]: E0430 01:27:13.229647    2758 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:43750->10.0.0.2:2379: read: connection timed out"
Apr 30 01:27:13.232739 systemd[1]: cri-containerd-1fc2482dc371c0d738401663ea8c86e052f637203af8ad5ff576f568421142b7.scope: Deactivated successfully.
Apr 30 01:27:13.233096 systemd[1]: cri-containerd-1fc2482dc371c0d738401663ea8c86e052f637203af8ad5ff576f568421142b7.scope: Consumed 2.470s CPU time, 16.0M memory peak, 0B memory swap peak.
Apr 30 01:27:13.257560 containerd[1492]: time="2025-04-30T01:27:13.257483364Z" level=info msg="shim disconnected" id=1fc2482dc371c0d738401663ea8c86e052f637203af8ad5ff576f568421142b7 namespace=k8s.io
Apr 30 01:27:13.257560 containerd[1492]: time="2025-04-30T01:27:13.257555679Z" level=warning msg="cleaning up after shim disconnected" id=1fc2482dc371c0d738401663ea8c86e052f637203af8ad5ff576f568421142b7 namespace=k8s.io
Apr 30 01:27:13.257560 containerd[1492]: time="2025-04-30T01:27:13.257564918Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 01:27:13.258545 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fc2482dc371c0d738401663ea8c86e052f637203af8ad5ff576f568421142b7-rootfs.mount: Deactivated successfully.
Apr 30 01:27:13.622616 kubelet[2758]: I0430 01:27:13.622583    2758 scope.go:117] "RemoveContainer" containerID="4740d04cdd61e0e77059a38f135075e629ba577a4b4835f67444417e171e8296"
Apr 30 01:27:13.627307 kubelet[2758]: I0430 01:27:13.627275    2758 scope.go:117] "RemoveContainer" containerID="1fc2482dc371c0d738401663ea8c86e052f637203af8ad5ff576f568421142b7"
Apr 30 01:27:13.629267 containerd[1492]: time="2025-04-30T01:27:13.629184896Z" level=info msg="CreateContainer within sandbox \"3f6378f03c0be40c4bbe18ce0677cef923e0465b40d3fd27f149cb2c8b7073ad\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 30 01:27:13.630958 containerd[1492]: time="2025-04-30T01:27:13.630881969Z" level=info msg="CreateContainer within sandbox \"d52eef805bf222c6a5e93d4d30b35687fa4bd20dc2b82814803b8c58375e8b67\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 30 01:27:13.647936 kubelet[2758]: I0430 01:27:13.647376    2758 scope.go:117] "RemoveContainer" containerID="016bd2b6f28549fb020c5a94d7d58787260ecf750f9704e7ac00acdb6c729b72"
Apr 30 01:27:13.674578 containerd[1492]: time="2025-04-30T01:27:13.674462857Z" level=info msg="CreateContainer within sandbox \"d52eef805bf222c6a5e93d4d30b35687fa4bd20dc2b82814803b8c58375e8b67\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"35ab8539b6b699925aa0b21b007baf23071b6f952f8f5c3d1d4826e21d19d45c\""
Apr 30 01:27:13.675485 containerd[1492]: time="2025-04-30T01:27:13.675041653Z" level=info msg="StartContainer for \"35ab8539b6b699925aa0b21b007baf23071b6f952f8f5c3d1d4826e21d19d45c\""
Apr 30 01:27:13.676519 containerd[1492]: time="2025-04-30T01:27:13.676363634Z" level=info msg="CreateContainer within sandbox \"34ca3723cce97b7019472c5ce06b58bba4c2320b86a9f61541ffd753cc71e8e2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 30 01:27:13.689886 containerd[1492]: time="2025-04-30T01:27:13.689690633Z" level=info msg="CreateContainer within sandbox \"3f6378f03c0be40c4bbe18ce0677cef923e0465b40d3fd27f149cb2c8b7073ad\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"87aed0c6e2026326f9f1b4a5be95b51688a384e9cffd49c7c51a71ca898cfbf4\""
Apr 30 01:27:13.690568 containerd[1492]: time="2025-04-30T01:27:13.690390221Z" level=info msg="StartContainer for \"87aed0c6e2026326f9f1b4a5be95b51688a384e9cffd49c7c51a71ca898cfbf4\""
Apr 30 01:27:13.698304 containerd[1492]: time="2025-04-30T01:27:13.698218753Z" level=info msg="CreateContainer within sandbox \"34ca3723cce97b7019472c5ce06b58bba4c2320b86a9f61541ffd753cc71e8e2\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"93da3dfa0f331507eb4b950c346911599d066ef2efef4bc62b0f967632f70302\""
Apr 30 01:27:13.699201 containerd[1492]: time="2025-04-30T01:27:13.698994935Z" level=info msg="StartContainer for \"93da3dfa0f331507eb4b950c346911599d066ef2efef4bc62b0f967632f70302\""
Apr 30 01:27:13.711959 systemd[1]: Started cri-containerd-35ab8539b6b699925aa0b21b007baf23071b6f952f8f5c3d1d4826e21d19d45c.scope - libcontainer container 35ab8539b6b699925aa0b21b007baf23071b6f952f8f5c3d1d4826e21d19d45c.
Apr 30 01:27:13.738189 systemd[1]: Started cri-containerd-87aed0c6e2026326f9f1b4a5be95b51688a384e9cffd49c7c51a71ca898cfbf4.scope - libcontainer container 87aed0c6e2026326f9f1b4a5be95b51688a384e9cffd49c7c51a71ca898cfbf4.
Apr 30 01:27:13.751012 systemd[1]: Started cri-containerd-93da3dfa0f331507eb4b950c346911599d066ef2efef4bc62b0f967632f70302.scope - libcontainer container 93da3dfa0f331507eb4b950c346911599d066ef2efef4bc62b0f967632f70302.
Apr 30 01:27:13.794046 containerd[1492]: time="2025-04-30T01:27:13.793755540Z" level=info msg="StartContainer for \"35ab8539b6b699925aa0b21b007baf23071b6f952f8f5c3d1d4826e21d19d45c\" returns successfully"
Apr 30 01:27:13.805898 containerd[1492]: time="2025-04-30T01:27:13.805514737Z" level=info msg="StartContainer for \"87aed0c6e2026326f9f1b4a5be95b51688a384e9cffd49c7c51a71ca898cfbf4\" returns successfully"
Apr 30 01:27:13.838222 containerd[1492]: time="2025-04-30T01:27:13.838066573Z" level=info msg="StartContainer for \"93da3dfa0f331507eb4b950c346911599d066ef2efef4bc62b0f967632f70302\" returns successfully"
Apr 30 01:27:15.978790 kubelet[2758]: E0430 01:27:15.978398    2758 kubelet_node_status.go:544] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-04-30T01:27:09Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-04-30T01:27:09Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-04-30T01:27:09Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-04-30T01:27:09Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"ci-4081-3-3-a-62378e86a2\": rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:43652->10.0.0.2:2379: read: connection timed out"
Apr 30 01:27:15.980127 systemd[1]: cri-containerd-93da3dfa0f331507eb4b950c346911599d066ef2efef4bc62b0f967632f70302.scope: Deactivated successfully.
Apr 30 01:27:16.005240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93da3dfa0f331507eb4b950c346911599d066ef2efef4bc62b0f967632f70302-rootfs.mount: Deactivated successfully.
Apr 30 01:27:16.022329 containerd[1492]: time="2025-04-30T01:27:16.022249492Z" level=info msg="shim disconnected" id=93da3dfa0f331507eb4b950c346911599d066ef2efef4bc62b0f967632f70302 namespace=k8s.io
Apr 30 01:27:16.022329 containerd[1492]: time="2025-04-30T01:27:16.022313647Z" level=warning msg="cleaning up after shim disconnected" id=93da3dfa0f331507eb4b950c346911599d066ef2efef4bc62b0f967632f70302 namespace=k8s.io
Apr 30 01:27:16.022329 containerd[1492]: time="2025-04-30T01:27:16.022332286Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 01:27:16.670224 kubelet[2758]: I0430 01:27:16.670168    2758 scope.go:117] "RemoveContainer" containerID="016bd2b6f28549fb020c5a94d7d58787260ecf750f9704e7ac00acdb6c729b72"
Apr 30 01:27:16.671091 kubelet[2758]: I0430 01:27:16.670773    2758 scope.go:117] "RemoveContainer" containerID="93da3dfa0f331507eb4b950c346911599d066ef2efef4bc62b0f967632f70302"
Apr 30 01:27:16.671091 kubelet[2758]: E0430 01:27:16.671031    2758 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-797db67f8-vtbfr_tigera-operator(cfb89852-9dcb-470d-bef1-72fdd1c8494a)\"" pod="tigera-operator/tigera-operator-797db67f8-vtbfr" podUID="cfb89852-9dcb-470d-bef1-72fdd1c8494a"
Apr 30 01:27:16.671658 containerd[1492]: time="2025-04-30T01:27:16.671625169Z" level=info msg="RemoveContainer for \"016bd2b6f28549fb020c5a94d7d58787260ecf750f9704e7ac00acdb6c729b72\""
Apr 30 01:27:16.675655 containerd[1492]: time="2025-04-30T01:27:16.675614719Z" level=info msg="RemoveContainer for \"016bd2b6f28549fb020c5a94d7d58787260ecf750f9704e7ac00acdb6c729b72\" returns successfully"
Apr 30 01:27:16.937781 kubelet[2758]: E0430 01:27:16.936624    2758 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:43536->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-3-a-62378e86a2.183af44f2fc97b70  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-3-a-62378e86a2,UID:c49471ec7bf784d03abb758e28b0c06d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-a-62378e86a2,},FirstTimestamp:2025-04-30 01:27:06.500815728 +0000 UTC m=+364.306057688,LastTimestamp:2025-04-30 01:27:06.500815728 +0000 UTC m=+364.306057688,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-a-62378e86a2,}"