Apr 30 01:18:26.898938 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 30 01:18:26.898964 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Apr 29 23:08:45 -00 2025
Apr 30 01:18:26.898975 kernel: KASLR enabled
Apr 30 01:18:26.898981 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 30 01:18:26.898986 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Apr 30 01:18:26.898992 kernel: random: crng init done
Apr 30 01:18:26.898999 kernel: ACPI: Early table checksum verification disabled
Apr 30 01:18:26.899004 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Apr 30 01:18:26.899388 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Apr 30 01:18:26.899404 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 01:18:26.899410 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 01:18:26.899416 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 01:18:26.899422 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 01:18:26.899428 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 01:18:26.899436 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 01:18:26.899444 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 01:18:26.899450 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 01:18:26.899456 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 01:18:26.899463 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 30 01:18:26.899469 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Apr 30 01:18:26.899475 kernel: NUMA: Failed to initialise from firmware
Apr 30 01:18:26.899481 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Apr 30 01:18:26.899488 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Apr 30 01:18:26.899494 kernel: Zone ranges:
Apr 30 01:18:26.899500 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 30 01:18:26.899507 kernel: DMA32 empty
Apr 30 01:18:26.899514 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Apr 30 01:18:26.899520 kernel: Movable zone start for each node
Apr 30 01:18:26.899526 kernel: Early memory node ranges
Apr 30 01:18:26.899532 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Apr 30 01:18:26.899539 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Apr 30 01:18:26.899545 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Apr 30 01:18:26.899551 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Apr 30 01:18:26.899557 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Apr 30 01:18:26.899564 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Apr 30 01:18:26.899570 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Apr 30 01:18:26.899576 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Apr 30 01:18:26.899583 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 30 01:18:26.899590 kernel: psci: probing for conduit method from ACPI.
Apr 30 01:18:26.899596 kernel: psci: PSCIv1.1 detected in firmware.
Apr 30 01:18:26.899605 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 30 01:18:26.899612 kernel: psci: Trusted OS migration not required
Apr 30 01:18:26.899618 kernel: psci: SMC Calling Convention v1.1
Apr 30 01:18:26.899626 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 30 01:18:26.899633 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Apr 30 01:18:26.899640 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Apr 30 01:18:26.899647 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 30 01:18:26.899653 kernel: Detected PIPT I-cache on CPU0
Apr 30 01:18:26.899660 kernel: CPU features: detected: GIC system register CPU interface
Apr 30 01:18:26.899667 kernel: CPU features: detected: Hardware dirty bit management
Apr 30 01:18:26.899673 kernel: CPU features: detected: Spectre-v4
Apr 30 01:18:26.899680 kernel: CPU features: detected: Spectre-BHB
Apr 30 01:18:26.899686 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 30 01:18:26.899695 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 30 01:18:26.899701 kernel: CPU features: detected: ARM erratum 1418040
Apr 30 01:18:26.899708 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 30 01:18:26.899714 kernel: alternatives: applying boot alternatives
Apr 30 01:18:26.899722 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a
Apr 30 01:18:26.899730 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 01:18:26.899736 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 30 01:18:26.899743 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 01:18:26.899750 kernel: Fallback order for Node 0: 0
Apr 30 01:18:26.899756 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Apr 30 01:18:26.899763 kernel: Policy zone: Normal
Apr 30 01:18:26.899771 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 01:18:26.899777 kernel: software IO TLB: area num 2.
Apr 30 01:18:26.899784 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Apr 30 01:18:26.899791 kernel: Memory: 3882872K/4096000K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 213128K reserved, 0K cma-reserved)
Apr 30 01:18:26.899798 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 01:18:26.899805 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 01:18:26.899812 kernel: rcu: RCU event tracing is enabled.
Apr 30 01:18:26.899819 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 01:18:26.899826 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 01:18:26.899833 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 01:18:26.899840 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 01:18:26.899848 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 01:18:26.899855 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 30 01:18:26.899861 kernel: GICv3: 256 SPIs implemented
Apr 30 01:18:26.899868 kernel: GICv3: 0 Extended SPIs implemented
Apr 30 01:18:26.899874 kernel: Root IRQ handler: gic_handle_irq
Apr 30 01:18:26.899881 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 30 01:18:26.899888 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 30 01:18:26.899894 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 30 01:18:26.899901 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Apr 30 01:18:26.899908 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Apr 30 01:18:26.899915 kernel: GICv3: using LPI property table @0x00000001000e0000
Apr 30 01:18:26.899922 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Apr 30 01:18:26.899930 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 01:18:26.899937 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 01:18:26.899944 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 30 01:18:26.899950 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 30 01:18:26.899957 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 30 01:18:26.899964 kernel: Console: colour dummy device 80x25
Apr 30 01:18:26.899971 kernel: ACPI: Core revision 20230628
Apr 30 01:18:26.899979 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 30 01:18:26.899985 kernel: pid_max: default: 32768 minimum: 301
Apr 30 01:18:26.899992 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 01:18:26.900000 kernel: landlock: Up and running.
Apr 30 01:18:26.900008 kernel: SELinux: Initializing.
Apr 30 01:18:26.901418 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 01:18:26.901428 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 01:18:26.901435 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 01:18:26.901443 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 01:18:26.901450 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 01:18:26.901458 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 01:18:26.901465 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 30 01:18:26.901478 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 30 01:18:26.901485 kernel: Remapping and enabling EFI services.
Apr 30 01:18:26.901492 kernel: smp: Bringing up secondary CPUs ...
Apr 30 01:18:26.901498 kernel: Detected PIPT I-cache on CPU1
Apr 30 01:18:26.901505 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 30 01:18:26.901512 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Apr 30 01:18:26.901519 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 01:18:26.901526 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 30 01:18:26.901534 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 01:18:26.901541 kernel: SMP: Total of 2 processors activated.
Apr 30 01:18:26.901549 kernel: CPU features: detected: 32-bit EL0 Support
Apr 30 01:18:26.901557 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 30 01:18:26.901569 kernel: CPU features: detected: Common not Private translations
Apr 30 01:18:26.901577 kernel: CPU features: detected: CRC32 instructions
Apr 30 01:18:26.901584 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 30 01:18:26.901592 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 30 01:18:26.901599 kernel: CPU features: detected: LSE atomic instructions
Apr 30 01:18:26.901606 kernel: CPU features: detected: Privileged Access Never
Apr 30 01:18:26.901614 kernel: CPU features: detected: RAS Extension Support
Apr 30 01:18:26.901622 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 30 01:18:26.901630 kernel: CPU: All CPU(s) started at EL1
Apr 30 01:18:26.901637 kernel: alternatives: applying system-wide alternatives
Apr 30 01:18:26.901644 kernel: devtmpfs: initialized
Apr 30 01:18:26.901652 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 01:18:26.901660 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 01:18:26.901667 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 01:18:26.901674 kernel: SMBIOS 3.0.0 present.
Apr 30 01:18:26.901683 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Apr 30 01:18:26.901690 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 01:18:26.901698 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 30 01:18:26.901705 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 30 01:18:26.901713 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 30 01:18:26.901720 kernel: audit: initializing netlink subsys (disabled)
Apr 30 01:18:26.901727 kernel: audit: type=2000 audit(0.011:1): state=initialized audit_enabled=0 res=1
Apr 30 01:18:26.901734 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 01:18:26.901743 kernel: cpuidle: using governor menu
Apr 30 01:18:26.901751 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 30 01:18:26.901758 kernel: ASID allocator initialised with 32768 entries
Apr 30 01:18:26.901765 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 01:18:26.901772 kernel: Serial: AMBA PL011 UART driver
Apr 30 01:18:26.901780 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 30 01:18:26.901787 kernel: Modules: 0 pages in range for non-PLT usage
Apr 30 01:18:26.901795 kernel: Modules: 509024 pages in range for PLT usage
Apr 30 01:18:26.901802 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 01:18:26.901809 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 01:18:26.901818 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 30 01:18:26.901825 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 30 01:18:26.901833 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 01:18:26.901840 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 01:18:26.901847 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 30 01:18:26.901862 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 30 01:18:26.901869 kernel: ACPI: Added _OSI(Module Device)
Apr 30 01:18:26.901876 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 01:18:26.901883 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 01:18:26.901892 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 01:18:26.901900 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 01:18:26.901907 kernel: ACPI: Interpreter enabled
Apr 30 01:18:26.901914 kernel: ACPI: Using GIC for interrupt routing
Apr 30 01:18:26.901922 kernel: ACPI: MCFG table detected, 1 entries
Apr 30 01:18:26.901929 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 30 01:18:26.901936 kernel: printk: console [ttyAMA0] enabled
Apr 30 01:18:26.901943 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 01:18:26.903137 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 01:18:26.903240 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 30 01:18:26.903403 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 30 01:18:26.903478 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 30 01:18:26.903543 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 30 01:18:26.903553 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 30 01:18:26.903561 kernel: PCI host bridge to bus 0000:00
Apr 30 01:18:26.903635 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 30 01:18:26.903700 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 30 01:18:26.903758 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 30 01:18:26.903815 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 01:18:26.903896 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 30 01:18:26.903973 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Apr 30 01:18:26.906132 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Apr 30 01:18:26.906226 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 30 01:18:26.906358 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 30 01:18:26.906433 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Apr 30 01:18:26.906512 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 30 01:18:26.906578 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Apr 30 01:18:26.906651 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 30 01:18:26.906725 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Apr 30 01:18:26.906796 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 30 01:18:26.906861 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Apr 30 01:18:26.906933 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 30 01:18:26.906998 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Apr 30 01:18:26.907095 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 30 01:18:26.907185 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Apr 30 01:18:26.907270 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 30 01:18:26.907347 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Apr 30 01:18:26.907421 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 30 01:18:26.907487 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Apr 30 01:18:26.907558 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 30 01:18:26.907628 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Apr 30 01:18:26.907706 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Apr 30 01:18:26.907781 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Apr 30 01:18:26.907865 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 30 01:18:26.907940 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Apr 30 01:18:26.908710 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 30 01:18:26.908820 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 30 01:18:26.908909 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 30 01:18:26.908983 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Apr 30 01:18:26.909117 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 30 01:18:26.909192 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Apr 30 01:18:26.909292 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Apr 30 01:18:26.909385 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 30 01:18:26.909463 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Apr 30 01:18:26.909540 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 30 01:18:26.909609 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Apr 30 01:18:26.909686 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Apr 30 01:18:26.909763 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 30 01:18:26.909839 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Apr 30 01:18:26.909926 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 30 01:18:26.910080 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 30 01:18:26.910159 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Apr 30 01:18:26.910226 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Apr 30 01:18:26.910305 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 30 01:18:26.910376 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Apr 30 01:18:26.910441 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Apr 30 01:18:26.910510 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Apr 30 01:18:26.910578 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Apr 30 01:18:26.910649 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Apr 30 01:18:26.910715 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Apr 30 01:18:26.910785 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 30 01:18:26.910849 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Apr 30 01:18:26.910913 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Apr 30 01:18:26.910985 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 30 01:18:26.911061 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Apr 30 01:18:26.911128 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Apr 30 01:18:26.911214 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 30 01:18:26.911304 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Apr 30 01:18:26.911375 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Apr 30 01:18:26.911454 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 30 01:18:26.911535 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Apr 30 01:18:26.911613 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Apr 30 01:18:26.911692 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 30 01:18:26.911758 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Apr 30 01:18:26.911846 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Apr 30 01:18:26.911926 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 30 01:18:26.911996 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Apr 30 01:18:26.912073 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Apr 30 01:18:26.912148 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 30 01:18:26.912214 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Apr 30 01:18:26.912313 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Apr 30 01:18:26.912386 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Apr 30 01:18:26.912452 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 30 01:18:26.912518 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Apr 30 01:18:26.912586 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 30 01:18:26.912658 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Apr 30 01:18:26.912742 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 30 01:18:26.912813 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Apr 30 01:18:26.912879 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 30 01:18:26.912947 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Apr 30 01:18:26.913069 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 30 01:18:26.913143 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Apr 30 01:18:26.913212 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 30 01:18:26.913291 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Apr 30 01:18:26.913356 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 30 01:18:26.913422 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Apr 30 01:18:26.913485 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 30 01:18:26.913550 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Apr 30 01:18:26.913632 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 30 01:18:26.913706 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Apr 30 01:18:26.913770 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Apr 30 01:18:26.913843 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Apr 30 01:18:26.913913 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Apr 30 01:18:26.913979 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Apr 30 01:18:26.914055 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Apr 30 01:18:26.914123 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Apr 30 01:18:26.914189 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Apr 30 01:18:26.914268 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Apr 30 01:18:26.914343 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Apr 30 01:18:26.914428 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Apr 30 01:18:26.914498 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Apr 30 01:18:26.914564 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Apr 30 01:18:26.914633 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Apr 30 01:18:26.914705 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Apr 30 01:18:26.914771 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Apr 30 01:18:26.914841 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Apr 30 01:18:26.914907 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Apr 30 01:18:26.914972 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Apr 30 01:18:26.915049 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Apr 30 01:18:26.915122 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Apr 30 01:18:26.915196 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Apr 30 01:18:26.915300 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 30 01:18:26.915385 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Apr 30 01:18:26.915453 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 30 01:18:26.915518 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 30 01:18:26.915583 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Apr 30 01:18:26.915648 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 30 01:18:26.915720 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Apr 30 01:18:26.915790 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 30 01:18:26.915856 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 30 01:18:26.915941 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Apr 30 01:18:26.916018 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 30 01:18:26.916094 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 30 01:18:26.916162 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Apr 30 01:18:26.916238 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 30 01:18:26.916339 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 30 01:18:26.916411 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Apr 30 01:18:26.916479 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 30 01:18:26.916563 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 30 01:18:26.916635 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 30 01:18:26.916701 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 30 01:18:26.916770 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Apr 30 01:18:26.916871 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 30 01:18:26.916953 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Apr 30 01:18:26.917121 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Apr 30 01:18:26.917195 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 30 01:18:26.917287 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 30 01:18:26.917360 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 30 01:18:26.917425 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 30 01:18:26.917498 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Apr 30 01:18:26.917565 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Apr 30 01:18:26.917637 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 30 01:18:26.917700 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 30 01:18:26.917764 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Apr 30 01:18:26.917829 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 30 01:18:26.917901 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Apr 30 01:18:26.917967 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Apr 30 01:18:26.918044 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Apr 30 01:18:26.918117 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 30 01:18:26.918200 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 30 01:18:26.918279 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Apr 30 01:18:26.918349 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 30 01:18:26.918419 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 30 01:18:26.918542 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 30 01:18:26.918613 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Apr 30 01:18:26.918678 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 30 01:18:26.918750 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 30 01:18:26.918817 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Apr 30 01:18:26.918946 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Apr 30 01:18:26.919034 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 30 01:18:26.919114 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 30 01:18:26.919174 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 30 01:18:26.919232 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 30 01:18:26.919350 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Apr 30 01:18:26.919424 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Apr 30 01:18:26.919484 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 30 01:18:26.919729 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Apr 30 01:18:26.919792 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Apr 30 01:18:26.919854 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 30 01:18:26.919922 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Apr 30 01:18:26.919991 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Apr 30 01:18:26.922217 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 30 01:18:26.922343 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Apr 30 01:18:26.922412 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Apr 30 01:18:26.922472 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 30 01:18:26.922539 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Apr 30 01:18:26.922599 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Apr 30 01:18:26.922672 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 30 01:18:26.922742 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Apr 30 01:18:26.922802 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Apr 30 01:18:26.922867 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 30 01:18:26.922938 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Apr 30 01:18:26.922999 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Apr 30 01:18:26.923078 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 30 01:18:26.923153 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Apr 30 01:18:26.923218 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Apr 30 01:18:26.923343 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 30 01:18:26.923419 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Apr 30 01:18:26.923487 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Apr 30 01:18:26.923548 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 30 01:18:26.923557 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 30 01:18:26.923565 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 30 01:18:26.923573 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 30 01:18:26.923581 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 30 01:18:26.923589 kernel: iommu: Default domain type: Translated
Apr 30 01:18:26.923596 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 30 01:18:26.923606 kernel: efivars: Registered efivars operations
Apr 30 01:18:26.923614 kernel: vgaarb: loaded
Apr 30 01:18:26.923622 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 30 01:18:26.923630 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 01:18:26.923638 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 01:18:26.923646 kernel: pnp: PnP ACPI init
Apr 30 01:18:26.923723 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 30 01:18:26.923734 kernel: pnp: PnP ACPI: found 1 devices
Apr 30 01:18:26.923744 kernel: NET: Registered PF_INET protocol family
Apr 30 01:18:26.923752 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 01:18:26.923759 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 30 01:18:26.923767 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 01:18:26.923775 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 01:18:26.923783 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 30 01:18:26.923791 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 30 01:18:26.923798 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 01:18:26.923807 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 01:18:26.923816 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 01:18:26.923891 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Apr 30 01:18:26.923902 kernel: PCI: CLS 0 bytes, default 64
Apr 30 01:18:26.923910 kernel: kvm [1]: HYP mode not available
Apr 30 01:18:26.923918 kernel: Initialise system trusted keyrings
Apr 30 01:18:26.923925 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 30 01:18:26.923933 kernel: Key type asymmetric registered
Apr 30 01:18:26.923940 kernel: Asymmetric key parser 'x509' registered
Apr 30 01:18:26.923948 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 30 01:18:26.923959 kernel: io scheduler mq-deadline registered
Apr 30 01:18:26.923966 kernel: io scheduler kyber registered
Apr 30 01:18:26.923974 kernel: io scheduler bfq registered
Apr 30 01:18:26.923982 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Apr 30 01:18:26.925600 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Apr 30 01:18:26.925689 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Apr 30 01:18:26.925757 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 01:18:26.925828 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Apr 30 01:18:26.925903 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Apr 30 01:18:26.925970 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ Apr 30 01:18:26.926659 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Apr 30 01:18:26.926746 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Apr 30 01:18:26.926816 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 01:18:26.926886 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Apr 30 01:18:26.926962 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Apr 30 01:18:26.927658 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 01:18:26.927748 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Apr 30 01:18:26.927816 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Apr 30 01:18:26.927882 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 01:18:26.927952 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Apr 30 01:18:26.928552 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Apr 30 01:18:26.928647 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 01:18:26.928717 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Apr 30 01:18:26.929121 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Apr 30 01:18:26.929200 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 01:18:26.929287 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Apr 30 01:18:26.929369 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Apr 30 01:18:26.929436 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 
01:18:26.929447 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Apr 30 01:18:26.929520 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Apr 30 01:18:26.929593 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Apr 30 01:18:26.929659 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 01:18:26.929672 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Apr 30 01:18:26.929680 kernel: ACPI: button: Power Button [PWRB] Apr 30 01:18:26.929688 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Apr 30 01:18:26.929760 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Apr 30 01:18:26.929835 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Apr 30 01:18:26.929847 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 01:18:26.929855 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Apr 30 01:18:26.929923 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Apr 30 01:18:26.929936 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Apr 30 01:18:26.929943 kernel: thunder_xcv, ver 1.0 Apr 30 01:18:26.929951 kernel: thunder_bgx, ver 1.0 Apr 30 01:18:26.929959 kernel: nicpf, ver 1.0 Apr 30 01:18:26.929966 kernel: nicvf, ver 1.0 Apr 30 01:18:26.931164 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 30 01:18:26.931248 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-04-30T01:18:26 UTC (1745975906) Apr 30 01:18:26.931273 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 30 01:18:26.931289 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Apr 30 01:18:26.931298 kernel: watchdog: Delayed init of the lockup detector failed: -19 Apr 30 01:18:26.931306 kernel: watchdog: Hard watchdog permanently disabled Apr 30 01:18:26.931314 kernel: NET: Registered PF_INET6 protocol family Apr 30 01:18:26.931321 kernel: Segment 
Routing with IPv6 Apr 30 01:18:26.931329 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 01:18:26.931337 kernel: NET: Registered PF_PACKET protocol family Apr 30 01:18:26.931345 kernel: Key type dns_resolver registered Apr 30 01:18:26.931353 kernel: registered taskstats version 1 Apr 30 01:18:26.931361 kernel: Loading compiled-in X.509 certificates Apr 30 01:18:26.931371 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: e2b28159d3a83b6f5d5db45519e470b1b834e378' Apr 30 01:18:26.931379 kernel: Key type .fscrypt registered Apr 30 01:18:26.931386 kernel: Key type fscrypt-provisioning registered Apr 30 01:18:26.931394 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 30 01:18:26.931402 kernel: ima: Allocated hash algorithm: sha1 Apr 30 01:18:26.931410 kernel: ima: No architecture policies found Apr 30 01:18:26.931417 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 30 01:18:26.931425 kernel: clk: Disabling unused clocks Apr 30 01:18:26.931435 kernel: Freeing unused kernel memory: 39424K Apr 30 01:18:26.931443 kernel: Run /init as init process Apr 30 01:18:26.931450 kernel: with arguments: Apr 30 01:18:26.931458 kernel: /init Apr 30 01:18:26.931465 kernel: with environment: Apr 30 01:18:26.931473 kernel: HOME=/ Apr 30 01:18:26.931480 kernel: TERM=linux Apr 30 01:18:26.931487 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 01:18:26.931497 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 01:18:26.931509 systemd[1]: Detected virtualization kvm. Apr 30 01:18:26.931517 systemd[1]: Detected architecture arm64. Apr 30 01:18:26.931525 systemd[1]: Running in initrd. 
Apr 30 01:18:26.931533 systemd[1]: No hostname configured, using default hostname.
Apr 30 01:18:26.931541 systemd[1]: Hostname set to .
Apr 30 01:18:26.931549 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 01:18:26.931557 systemd[1]: Queued start job for default target initrd.target.
Apr 30 01:18:26.931567 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 01:18:26.931576 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 01:18:26.931585 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 01:18:26.931593 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 01:18:26.931601 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 01:18:26.931610 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 01:18:26.931619 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 01:18:26.931630 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 01:18:26.931638 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 01:18:26.931646 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 01:18:26.931655 systemd[1]: Reached target paths.target - Path Units.
Apr 30 01:18:26.931664 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 01:18:26.931672 systemd[1]: Reached target swap.target - Swaps.
Apr 30 01:18:26.931681 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 01:18:26.931689 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 01:18:26.931697 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 01:18:26.931707 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 01:18:26.931715 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 01:18:26.931724 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 01:18:26.931732 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 01:18:26.931740 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 01:18:26.931748 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 01:18:26.931757 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 01:18:26.931765 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 01:18:26.931775 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 01:18:26.931784 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 01:18:26.931792 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 01:18:26.931844 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 01:18:26.931855 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 01:18:26.931864 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 01:18:26.931872 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 01:18:26.931880 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 01:18:26.931919 systemd-journald[237]: Collecting audit messages is disabled.
Apr 30 01:18:26.931942 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 01:18:26.931951 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 01:18:26.931960 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 01:18:26.931968 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 01:18:26.931977 kernel: Bridge firewalling registered
Apr 30 01:18:26.931986 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 01:18:26.931995 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 01:18:26.932004 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 01:18:26.933080 systemd-journald[237]: Journal started
Apr 30 01:18:26.933107 systemd-journald[237]: Runtime Journal (/run/log/journal/9469050cbcb046528fe838deeaa89397) is 8.0M, max 76.6M, 68.6M free.
Apr 30 01:18:26.893589 systemd-modules-load[238]: Inserted module 'overlay'
Apr 30 01:18:26.939984 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 01:18:26.919054 systemd-modules-load[238]: Inserted module 'br_netfilter'
Apr 30 01:18:26.943135 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 01:18:26.942490 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 01:18:26.950235 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 01:18:26.953210 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 01:18:26.954220 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 01:18:26.958084 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 01:18:26.970511 dracut-cmdline[264]: dracut-dracut-053
Apr 30 01:18:26.973416 dracut-cmdline[264]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a
Apr 30 01:18:26.977462 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 01:18:26.987781 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 01:18:27.012903 systemd-resolved[286]: Positive Trust Anchors:
Apr 30 01:18:27.012925 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 01:18:27.012958 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 01:18:27.019603 systemd-resolved[286]: Defaulting to hostname 'linux'.
Apr 30 01:18:27.020727 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 01:18:27.021715 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 01:18:27.075082 kernel: SCSI subsystem initialized
Apr 30 01:18:27.080050 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 01:18:27.088160 kernel: iscsi: registered transport (tcp)
Apr 30 01:18:27.106061 kernel: iscsi: registered transport (qla4xxx)
Apr 30 01:18:27.106128 kernel: QLogic iSCSI HBA Driver
Apr 30 01:18:27.156120 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 01:18:27.161202 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 01:18:27.183494 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 01:18:27.183633 kernel: device-mapper: uevent: version 1.0.3
Apr 30 01:18:27.183668 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 01:18:27.236075 kernel: raid6: neonx8 gen() 15533 MB/s
Apr 30 01:18:27.253120 kernel: raid6: neonx4 gen() 14409 MB/s
Apr 30 01:18:27.270049 kernel: raid6: neonx2 gen() 13166 MB/s
Apr 30 01:18:27.287069 kernel: raid6: neonx1 gen() 10444 MB/s
Apr 30 01:18:27.304117 kernel: raid6: int64x8 gen() 6906 MB/s
Apr 30 01:18:27.321081 kernel: raid6: int64x4 gen() 7315 MB/s
Apr 30 01:18:27.338086 kernel: raid6: int64x2 gen() 6101 MB/s
Apr 30 01:18:27.355142 kernel: raid6: int64x1 gen() 5025 MB/s
Apr 30 01:18:27.355277 kernel: raid6: using algorithm neonx8 gen() 15533 MB/s
Apr 30 01:18:27.372081 kernel: raid6: .... xor() 11849 MB/s, rmw enabled
Apr 30 01:18:27.372163 kernel: raid6: using neon recovery algorithm
Apr 30 01:18:27.377342 kernel: xor: measuring software checksum speed
Apr 30 01:18:27.377432 kernel: 8regs : 19802 MB/sec
Apr 30 01:18:27.377461 kernel: 32regs : 17658 MB/sec
Apr 30 01:18:27.377486 kernel: arm64_neon : 26972 MB/sec
Apr 30 01:18:27.378059 kernel: xor: using function: arm64_neon (26972 MB/sec)
Apr 30 01:18:27.429093 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 01:18:27.446331 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 01:18:27.453181 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 01:18:27.477547 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Apr 30 01:18:27.481021 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 01:18:27.489187 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 01:18:27.507146 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation
Apr 30 01:18:27.544640 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 01:18:27.555242 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 01:18:27.605947 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 01:18:27.616673 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 01:18:27.638602 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 01:18:27.642951 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 01:18:27.645380 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 01:18:27.646089 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 01:18:27.654315 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 01:18:27.670564 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 01:18:27.702216 kernel: scsi host0: Virtio SCSI HBA
Apr 30 01:18:27.710208 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 30 01:18:27.710296 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 30 01:18:27.744995 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 01:18:27.745766 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 01:18:27.748219 kernel: sr 0:0:0:0: Power-on or device reset occurred
Apr 30 01:18:27.755340 kernel: ACPI: bus type USB registered
Apr 30 01:18:27.755358 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Apr 30 01:18:27.755478 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 30 01:18:27.755496 kernel: usbcore: registered new interface driver usbfs
Apr 30 01:18:27.755506 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Apr 30 01:18:27.748128 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 01:18:27.757081 kernel: usbcore: registered new interface driver hub
Apr 30 01:18:27.757105 kernel: usbcore: registered new device driver usb
Apr 30 01:18:27.748947 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 01:18:27.749339 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 01:18:27.751670 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 01:18:27.759394 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 01:18:27.786074 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 01:18:27.792231 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 01:18:27.795053 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 30 01:18:27.808192 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Apr 30 01:18:27.808337 kernel: sd 0:0:0:1: Power-on or device reset occurred
Apr 30 01:18:27.808458 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Apr 30 01:18:27.808542 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Apr 30 01:18:27.808629 kernel: sd 0:0:0:1: [sda] Write Protect is off
Apr 30 01:18:27.808712 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 30 01:18:27.808792 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Apr 30 01:18:27.808876 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Apr 30 01:18:27.808954 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 30 01:18:27.809072 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Apr 30 01:18:27.809162 kernel: hub 1-0:1.0: USB hub found
Apr 30 01:18:27.809279 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 01:18:27.809291 kernel: GPT:17805311 != 80003071
Apr 30 01:18:27.809300 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 01:18:27.809310 kernel: GPT:17805311 != 80003071
Apr 30 01:18:27.809319 kernel: hub 1-0:1.0: 4 ports detected
Apr 30 01:18:27.809408 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 01:18:27.809421 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 01:18:27.809431 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Apr 30 01:18:27.809525 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Apr 30 01:18:27.809615 kernel: hub 2-0:1.0: USB hub found
Apr 30 01:18:27.809709 kernel: hub 2-0:1.0: 4 ports detected
Apr 30 01:18:27.830122 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 01:18:27.861980 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (503)
Apr 30 01:18:27.861305 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 30 01:18:27.864859 kernel: BTRFS: device fsid 7216ceb7-401c-42de-84de-44adb68241e4 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (511)
Apr 30 01:18:27.873043 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 30 01:18:27.887839 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 30 01:18:27.889162 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 30 01:18:27.895685 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 30 01:18:27.902220 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 01:18:27.908382 disk-uuid[577]: Primary Header is updated.
Apr 30 01:18:27.908382 disk-uuid[577]: Secondary Entries is updated.
Apr 30 01:18:27.908382 disk-uuid[577]: Secondary Header is updated.
Apr 30 01:18:27.913064 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 01:18:27.919073 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 01:18:27.926061 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 01:18:28.047043 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Apr 30 01:18:28.290081 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Apr 30 01:18:28.423722 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Apr 30 01:18:28.423782 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Apr 30 01:18:28.426058 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Apr 30 01:18:28.480684 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Apr 30 01:18:28.481326 kernel: usbcore: registered new interface driver usbhid
Apr 30 01:18:28.481433 kernel: usbhid: USB HID core driver
Apr 30 01:18:28.925776 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 01:18:28.928651 disk-uuid[578]: The operation has completed successfully.
Apr 30 01:18:29.001561 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 01:18:29.001665 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 01:18:29.020375 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 01:18:29.023883 sh[595]: Success
Apr 30 01:18:29.042709 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 30 01:18:29.109234 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 01:18:29.118179 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 01:18:29.118978 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 01:18:29.139705 kernel: BTRFS info (device dm-0): first mount of filesystem 7216ceb7-401c-42de-84de-44adb68241e4
Apr 30 01:18:29.139788 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 30 01:18:29.139806 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 01:18:29.140439 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 01:18:29.141105 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 01:18:29.147037 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 30 01:18:29.149597 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 01:18:29.151208 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 01:18:29.161298 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 01:18:29.165251 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 01:18:29.175968 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 01:18:29.176080 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 01:18:29.176108 kernel: BTRFS info (device sda6): using free space tree
Apr 30 01:18:29.181713 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 30 01:18:29.181776 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 01:18:29.193123 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 01:18:29.194233 kernel: BTRFS info (device sda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 01:18:29.200930 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 01:18:29.213636 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 01:18:29.307828 ignition[681]: Ignition 2.19.0
Apr 30 01:18:29.307844 ignition[681]: Stage: fetch-offline
Apr 30 01:18:29.307880 ignition[681]: no configs at "/usr/lib/ignition/base.d"
Apr 30 01:18:29.309927 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 01:18:29.307888 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 01:18:29.308078 ignition[681]: parsed url from cmdline: ""
Apr 30 01:18:29.308082 ignition[681]: no config URL provided
Apr 30 01:18:29.308086 ignition[681]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 01:18:29.308093 ignition[681]: no config at "/usr/lib/ignition/user.ign"
Apr 30 01:18:29.308099 ignition[681]: failed to fetch config: resource requires networking
Apr 30 01:18:29.308305 ignition[681]: Ignition finished successfully
Apr 30 01:18:29.317318 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 01:18:29.318062 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 01:18:29.339505 systemd-networkd[784]: lo: Link UP
Apr 30 01:18:29.339517 systemd-networkd[784]: lo: Gained carrier
Apr 30 01:18:29.341145 systemd-networkd[784]: Enumeration completed
Apr 30 01:18:29.341904 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 01:18:29.341907 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 01:18:29.342360 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 01:18:29.343270 systemd-networkd[784]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 01:18:29.343274 systemd-networkd[784]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 01:18:29.343431 systemd[1]: Reached target network.target - Network.
Apr 30 01:18:29.343814 systemd-networkd[784]: eth0: Link UP
Apr 30 01:18:29.343818 systemd-networkd[784]: eth0: Gained carrier
Apr 30 01:18:29.343827 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 01:18:29.348244 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 01:18:29.348341 systemd-networkd[784]: eth1: Link UP
Apr 30 01:18:29.348344 systemd-networkd[784]: eth1: Gained carrier
Apr 30 01:18:29.348352 systemd-networkd[784]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 01:18:29.366847 ignition[786]: Ignition 2.19.0
Apr 30 01:18:29.366867 ignition[786]: Stage: fetch
Apr 30 01:18:29.367149 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Apr 30 01:18:29.367163 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 01:18:29.367311 ignition[786]: parsed url from cmdline: ""
Apr 30 01:18:29.367316 ignition[786]: no config URL provided
Apr 30 01:18:29.367323 ignition[786]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 01:18:29.367335 ignition[786]: no config at "/usr/lib/ignition/user.ign"
Apr 30 01:18:29.367360 ignition[786]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 30 01:18:29.367837 ignition[786]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 30 01:18:29.385134 systemd-networkd[784]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 30 01:18:29.407348 systemd-networkd[784]: eth0: DHCPv4 address 78.47.197.16/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 30 01:18:29.567994 ignition[786]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 30 01:18:29.574773 ignition[786]: GET result: OK
Apr 30 01:18:29.574931 ignition[786]: parsing config with SHA512: 2f2cb31943c4cb529b69a26545f218339fd28afd9d4762c50a047913227a699ce91ced95d4dae8d18454c9297f4ebb0a1d008c1dddf97cf97b9703eb93b7ef0b
Apr 30 01:18:29.579972 unknown[786]: fetched base config from "system"
Apr 30 01:18:29.579994 unknown[786]: fetched base config from "system"
Apr 30 01:18:29.580471 ignition[786]: fetch: fetch complete
Apr 30 01:18:29.579999 unknown[786]: fetched user config from "hetzner"
Apr 30 01:18:29.580477 ignition[786]: fetch: fetch passed
Apr 30 01:18:29.582572 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 01:18:29.580537 ignition[786]: Ignition finished successfully
Apr 30 01:18:29.589314 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 01:18:29.604995 ignition[794]: Ignition 2.19.0
Apr 30 01:18:29.605006 ignition[794]: Stage: kargs
Apr 30 01:18:29.605255 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Apr 30 01:18:29.605268 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 01:18:29.606466 ignition[794]: kargs: kargs passed
Apr 30 01:18:29.606534 ignition[794]: Ignition finished successfully
Apr 30 01:18:29.609452 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 01:18:29.619305 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 01:18:29.631463 ignition[801]: Ignition 2.19.0
Apr 30 01:18:29.631472 ignition[801]: Stage: disks
Apr 30 01:18:29.631659 ignition[801]: no configs at "/usr/lib/ignition/base.d"
Apr 30 01:18:29.631669 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 01:18:29.632706 ignition[801]: disks: disks passed
Apr 30 01:18:29.632767 ignition[801]: Ignition finished successfully
Apr 30 01:18:29.635788 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 01:18:29.637621 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 01:18:29.638284 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 01:18:29.639534 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 01:18:29.640895 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 01:18:29.642352 systemd[1]: Reached target basic.target - Basic System.
Apr 30 01:18:29.649547 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 01:18:29.665030 systemd-fsck[809]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 30 01:18:29.669034 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 01:18:29.674133 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 01:18:29.736030 kernel: EXT4-fs (sda9): mounted filesystem c13301f3-70ec-4948-963a-f1db0e953273 r/w with ordered data mode. Quota mode: none.
Apr 30 01:18:29.736618 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 01:18:29.738498 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 01:18:29.746220 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 01:18:29.750453 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 01:18:29.754290 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 30 01:18:29.755223 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 01:18:29.755320 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 01:18:29.763077 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (817)
Apr 30 01:18:29.763565 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 01:18:29.766141 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 01:18:29.766166 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 01:18:29.767622 kernel: BTRFS info (device sda6): using free space tree
Apr 30 01:18:29.768965 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 01:18:29.775040 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 30 01:18:29.775096 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 01:18:29.783903 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 01:18:29.827402 coreos-metadata[819]: Apr 30 01:18:29.827 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 30 01:18:29.829173 coreos-metadata[819]: Apr 30 01:18:29.829 INFO Fetch successful
Apr 30 01:18:29.832038 coreos-metadata[819]: Apr 30 01:18:29.830 INFO wrote hostname ci-4081-3-3-5-cafd7e9e76 to /sysroot/etc/hostname
Apr 30 01:18:29.833808 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 01:18:29.834870 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 01:18:29.841725 initrd-setup-root[852]: cut: /sysroot/etc/group: No such file or directory
Apr 30 01:18:29.848422 initrd-setup-root[859]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 01:18:29.853167 initrd-setup-root[866]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 01:18:29.952054 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 01:18:29.956135 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 01:18:29.959224 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 01:18:29.967049 kernel: BTRFS info (device sda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 01:18:29.991416 ignition[934]: INFO : Ignition 2.19.0
Apr 30 01:18:29.992540 ignition[934]: INFO : Stage: mount
Apr 30 01:18:29.992540 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 01:18:29.992540 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 01:18:29.997069 ignition[934]: INFO : mount: mount passed
Apr 30 01:18:29.997069 ignition[934]: INFO : Ignition finished successfully
Apr 30 01:18:29.995815 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 01:18:29.998135 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 01:18:30.011311 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 01:18:30.137997 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 01:18:30.147359 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 01:18:30.156095 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (945)
Apr 30 01:18:30.158045 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 01:18:30.158088 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 01:18:30.158107 kernel: BTRFS info (device sda6): using free space tree
Apr 30 01:18:30.161045 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 30 01:18:30.161103 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 01:18:30.163874 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 01:18:30.186054 ignition[962]: INFO : Ignition 2.19.0
Apr 30 01:18:30.186054 ignition[962]: INFO : Stage: files
Apr 30 01:18:30.186054 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 01:18:30.186054 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 01:18:30.190294 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 01:18:30.190294 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 01:18:30.190294 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 01:18:30.193980 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 01:18:30.195155 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 01:18:30.195155 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 01:18:30.194431 unknown[962]: wrote ssh authorized keys file for user: core
Apr 30 01:18:30.197672 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 30 01:18:30.197672 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 30 01:18:30.197672 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Apr 30 01:18:30.197672 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Apr 30 01:18:30.299862 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 30 01:18:30.525824 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Apr 30 01:18:30.525824 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 01:18:30.528451 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 01:18:30.528451 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 01:18:30.528451 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 01:18:30.528451 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 01:18:30.528451 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 01:18:30.528451 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 01:18:30.528451 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 01:18:30.528451 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 01:18:30.528451 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 01:18:30.528451 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 01:18:30.528451 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 01:18:30.528451 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 01:18:30.528451 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Apr 30 01:18:31.099352 systemd-networkd[784]: eth1: Gained IPv6LL
Apr 30 01:18:31.227756 systemd-networkd[784]: eth0: Gained IPv6LL
Apr 30 01:18:31.233860 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 30 01:18:31.454738 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 01:18:31.454738 ignition[962]: INFO : files: op(c): [started] processing unit "containerd.service"
Apr 30 01:18:31.457925 ignition[962]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 30 01:18:31.457925 ignition[962]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 30 01:18:31.457925 ignition[962]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 30 01:18:31.457925 ignition[962]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Apr 30 01:18:31.457925 ignition[962]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 01:18:31.457925 ignition[962]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 01:18:31.457925 ignition[962]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 30 01:18:31.457925 ignition[962]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Apr 30 01:18:31.457925 ignition[962]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 30 01:18:31.457925 ignition[962]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 30 01:18:31.457925 ignition[962]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Apr 30 01:18:31.457925 ignition[962]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 01:18:31.457925 ignition[962]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 01:18:31.457925 ignition[962]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 01:18:31.457925 ignition[962]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 01:18:31.457925 ignition[962]: INFO : files: files passed
Apr 30 01:18:31.457925 ignition[962]: INFO : Ignition finished successfully
Apr 30 01:18:31.459873 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 01:18:31.465471 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 01:18:31.471774 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 01:18:31.485504 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 01:18:31.485735 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 01:18:31.498211 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 01:18:31.498211 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 01:18:31.501102 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 01:18:31.503477 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 01:18:31.504462 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 01:18:31.510273 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 01:18:31.544130 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 01:18:31.545035 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 01:18:31.546252 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 01:18:31.547633 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 01:18:31.549450 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 01:18:31.555299 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 01:18:31.568354 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 01:18:31.575898 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 01:18:31.593096 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 01:18:31.594582 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 01:18:31.596007 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 01:18:31.597076 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 01:18:31.597227 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 01:18:31.599340 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 01:18:31.600937 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 01:18:31.601631 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 01:18:31.602690 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 01:18:31.604914 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 01:18:31.606666 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 01:18:31.607324 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 01:18:31.608033 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 01:18:31.609895 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 01:18:31.611067 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 01:18:31.612049 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 01:18:31.612227 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 01:18:31.613521 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 01:18:31.614214 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 01:18:31.614831 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 01:18:31.617253 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 01:18:31.622271 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 01:18:31.622410 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 01:18:31.625563 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 01:18:31.625759 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 01:18:31.628008 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 01:18:31.628227 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 01:18:31.630132 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 30 01:18:31.630304 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 01:18:31.643950 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 01:18:31.644926 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 01:18:31.645287 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 01:18:31.649422 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 01:18:31.650345 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 01:18:31.651227 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 01:18:31.655539 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 01:18:31.655816 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 01:18:31.668042 ignition[1014]: INFO : Ignition 2.19.0
Apr 30 01:18:31.668042 ignition[1014]: INFO : Stage: umount
Apr 30 01:18:31.668042 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 01:18:31.668042 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 01:18:31.672674 ignition[1014]: INFO : umount: umount passed
Apr 30 01:18:31.672674 ignition[1014]: INFO : Ignition finished successfully
Apr 30 01:18:31.673404 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 01:18:31.673508 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 01:18:31.674605 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 01:18:31.674688 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 01:18:31.676004 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 01:18:31.676133 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 01:18:31.677773 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 01:18:31.677829 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 01:18:31.678708 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 30 01:18:31.678755 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 30 01:18:31.679444 systemd[1]: Stopped target network.target - Network.
Apr 30 01:18:31.680369 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 01:18:31.680428 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 01:18:31.684306 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 01:18:31.685499 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 01:18:31.691051 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 01:18:31.696083 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 01:18:31.696975 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 01:18:31.698421 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 01:18:31.698469 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 01:18:31.699302 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 01:18:31.699339 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 01:18:31.700480 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 01:18:31.700532 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 01:18:31.701405 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 01:18:31.701451 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 01:18:31.702596 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 01:18:31.703689 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 01:18:31.705554 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 01:18:31.706083 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 01:18:31.706215 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 01:18:31.707661 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 01:18:31.707744 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 01:18:31.709111 systemd-networkd[784]: eth0: DHCPv6 lease lost
Apr 30 01:18:31.713130 systemd-networkd[784]: eth1: DHCPv6 lease lost
Apr 30 01:18:31.715006 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 01:18:31.715226 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 01:18:31.719103 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 01:18:31.719270 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 01:18:31.721134 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 01:18:31.721228 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 01:18:31.728208 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 01:18:31.729431 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 01:18:31.729540 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 01:18:31.731514 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 01:18:31.731573 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 01:18:31.732432 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 01:18:31.732491 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 01:18:31.733306 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 01:18:31.733357 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 01:18:31.734967 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 01:18:31.749946 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 01:18:31.750731 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 01:18:31.753525 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 01:18:31.754270 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 01:18:31.756538 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 01:18:31.756616 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 01:18:31.758166 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 01:18:31.758212 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 01:18:31.760407 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 01:18:31.760517 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 01:18:31.762887 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 01:18:31.762939 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 01:18:31.764436 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 01:18:31.764492 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 01:18:31.771380 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 01:18:31.773702 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 01:18:31.773808 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 01:18:31.776418 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 01:18:31.776479 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 01:18:31.778635 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 01:18:31.778735 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 01:18:31.780709 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 01:18:31.788231 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 01:18:31.798266 systemd[1]: Switching root.
Apr 30 01:18:31.836494 systemd-journald[237]: Journal stopped
Apr 30 01:18:32.796691 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Apr 30 01:18:32.796780 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 01:18:32.796794 kernel: SELinux: policy capability open_perms=1
Apr 30 01:18:32.796804 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 01:18:32.796814 kernel: SELinux: policy capability always_check_network=0
Apr 30 01:18:32.796824 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 01:18:32.796834 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 01:18:32.796845 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 01:18:32.796855 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 01:18:32.796877 kernel: audit: type=1403 audit(1745975912.046:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 01:18:32.796893 systemd[1]: Successfully loaded SELinux policy in 36.016ms.
Apr 30 01:18:32.796923 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.186ms.
Apr 30 01:18:32.796935 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 01:18:32.796947 systemd[1]: Detected virtualization kvm.
Apr 30 01:18:32.796958 systemd[1]: Detected architecture arm64.
Apr 30 01:18:32.796969 systemd[1]: Detected first boot.
Apr 30 01:18:32.796980 systemd[1]: Hostname set to .
Apr 30 01:18:32.796993 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 01:18:32.797004 zram_generator::config[1074]: No configuration found.
Apr 30 01:18:32.797038 systemd[1]: Populated /etc with preset unit settings.
Apr 30 01:18:32.797052 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 01:18:32.797064 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 30 01:18:32.797076 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 01:18:32.797087 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 01:18:32.797098 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 01:18:32.797111 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 01:18:32.797122 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 01:18:32.797150 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 01:18:32.797164 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 01:18:32.797175 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 01:18:32.797186 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 01:18:32.797199 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 01:18:32.797211 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 01:18:32.797222 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 01:18:32.797236 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 01:18:32.797247 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 01:18:32.797258 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 30 01:18:32.797270 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 01:18:32.797281 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 01:18:32.797292 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 01:18:32.797308 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 01:18:32.797321 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 01:18:32.797332 systemd[1]: Reached target swap.target - Swaps.
Apr 30 01:18:32.797343 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 01:18:32.797354 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 01:18:32.797365 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 01:18:32.797376 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 01:18:32.797387 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 01:18:32.797399 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 01:18:32.797410 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 01:18:32.797423 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 01:18:32.797434 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 01:18:32.797446 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 01:18:32.797457 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 01:18:32.797468 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 01:18:32.797479 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 01:18:32.797495 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 01:18:32.797509 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 01:18:32.797521 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 01:18:32.797533 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 01:18:32.797544 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 01:18:32.797560 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 01:18:32.797574 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 01:18:32.797587 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 01:18:32.797600 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 01:18:32.797612 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 01:18:32.797624 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 01:18:32.797636 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Apr 30 01:18:32.797648 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Apr 30 01:18:32.797659 kernel: fuse: init (API version 7.39) Apr 30 01:18:32.797670 kernel: ACPI: bus type drm_connector registered Apr 30 01:18:32.797681 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 01:18:32.797699 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 01:18:32.797714 kernel: loop: module loaded Apr 30 01:18:32.797725 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 01:18:32.797737 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 01:18:32.797780 systemd-journald[1158]: Collecting audit messages is disabled. Apr 30 01:18:32.797809 systemd-journald[1158]: Journal started Apr 30 01:18:32.797835 systemd-journald[1158]: Runtime Journal (/run/log/journal/9469050cbcb046528fe838deeaa89397) is 8.0M, max 76.6M, 68.6M free. Apr 30 01:18:32.803066 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 01:18:32.818180 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 01:18:32.819681 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 30 01:18:32.820482 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 01:18:32.824125 systemd[1]: Mounted media.mount - External Media Directory. Apr 30 01:18:32.826315 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 01:18:32.827085 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 30 01:18:32.827909 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 01:18:32.829368 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 01:18:32.832439 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Apr 30 01:18:32.832614 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 01:18:32.833723 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 01:18:32.833887 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 01:18:32.835121 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 01:18:32.835305 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 01:18:32.836294 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 01:18:32.836448 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 01:18:32.837347 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 01:18:32.837502 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 01:18:32.838599 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 01:18:32.840367 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 01:18:32.841393 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 01:18:32.846687 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 01:18:32.849583 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 01:18:32.851647 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 01:18:32.863915 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 01:18:32.872157 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 01:18:32.877260 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 01:18:32.879143 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Apr 30 01:18:32.889275 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 01:18:32.893443 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 01:18:32.897240 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 01:18:32.907252 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 01:18:32.909762 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 01:18:32.912209 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 01:18:32.922221 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 01:18:32.925209 systemd-journald[1158]: Time spent on flushing to /var/log/journal/9469050cbcb046528fe838deeaa89397 is 50.313ms for 1113 entries. Apr 30 01:18:32.925209 systemd-journald[1158]: System Journal (/var/log/journal/9469050cbcb046528fe838deeaa89397) is 8.0M, max 584.8M, 576.8M free. Apr 30 01:18:32.984069 systemd-journald[1158]: Received client request to flush runtime journal. Apr 30 01:18:32.928597 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 01:18:32.933168 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 01:18:32.937856 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 01:18:32.941905 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 01:18:32.945479 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 01:18:32.947398 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Apr 30 01:18:32.974500 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 01:18:32.984082 udevadm[1215]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 30 01:18:32.990491 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 01:18:32.992207 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. Apr 30 01:18:32.992220 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. Apr 30 01:18:32.997272 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 01:18:33.008336 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 01:18:33.040946 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 01:18:33.049339 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 01:18:33.064524 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Apr 30 01:18:33.064836 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Apr 30 01:18:33.071530 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 01:18:33.488350 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 01:18:33.496224 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 01:18:33.535110 systemd-udevd[1238]: Using default interface naming scheme 'v255'. Apr 30 01:18:33.562903 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 01:18:33.576205 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 01:18:33.589216 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 01:18:33.629815 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. 
Apr 30 01:18:33.678604 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 01:18:33.764675 systemd-networkd[1245]: lo: Link UP Apr 30 01:18:33.765049 systemd-networkd[1245]: lo: Gained carrier Apr 30 01:18:33.767899 systemd-networkd[1245]: Enumeration completed Apr 30 01:18:33.769960 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 01:18:33.768189 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 01:18:33.771168 systemd-networkd[1245]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 01:18:33.771175 systemd-networkd[1245]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 01:18:33.772093 systemd-networkd[1245]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 01:18:33.772166 systemd-networkd[1245]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 01:18:33.772871 systemd-networkd[1245]: eth0: Link UP Apr 30 01:18:33.773107 systemd-networkd[1245]: eth0: Gained carrier Apr 30 01:18:33.773143 systemd-networkd[1245]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 01:18:33.777322 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 01:18:33.778919 systemd-networkd[1245]: eth1: Link UP Apr 30 01:18:33.778927 systemd-networkd[1245]: eth1: Gained carrier Apr 30 01:18:33.778941 systemd-networkd[1245]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 01:18:33.787846 systemd-networkd[1245]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 30 01:18:33.800087 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1254) Apr 30 01:18:33.811094 systemd-networkd[1245]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 01:18:33.839073 systemd-networkd[1245]: eth0: DHCPv4 address 78.47.197.16/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 30 01:18:33.855807 systemd[1]: Condition check resulted in dev-vport2p1.device - /dev/vport2p1 being skipped. Apr 30 01:18:33.856194 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Apr 30 01:18:33.856358 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 01:18:33.871321 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 01:18:33.875689 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 01:18:33.886232 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 01:18:33.887637 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 01:18:33.890188 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 01:18:33.890679 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 01:18:33.891546 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 01:18:33.893786 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 01:18:33.893959 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Apr 30 01:18:33.896689 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 01:18:33.906209 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 01:18:33.907284 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 01:18:33.912526 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 01:18:33.918824 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 30 01:18:33.931267 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Apr 30 01:18:33.931309 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 01:18:33.933037 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Apr 30 01:18:33.933122 kernel: [drm] features: -context_init Apr 30 01:18:33.933138 kernel: [drm] number of scanouts: 1 Apr 30 01:18:33.934027 kernel: [drm] number of cap sets: 0 Apr 30 01:18:33.935037 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Apr 30 01:18:33.940278 kernel: Console: switching to colour frame buffer device 160x50 Apr 30 01:18:33.944033 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Apr 30 01:18:33.953536 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 01:18:33.953792 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 01:18:33.963771 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 01:18:34.026812 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 01:18:34.097953 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 01:18:34.105256 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
Apr 30 01:18:34.119079 lvm[1311]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 01:18:34.144845 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 01:18:34.147263 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 01:18:34.152228 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 01:18:34.171088 lvm[1314]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 01:18:34.202643 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 01:18:34.203944 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 01:18:34.204830 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 01:18:34.204975 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 01:18:34.205620 systemd[1]: Reached target machines.target - Containers. Apr 30 01:18:34.207590 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 30 01:18:34.213185 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 01:18:34.216233 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 01:18:34.221251 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 01:18:34.223239 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 01:18:34.225333 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 30 01:18:34.229488 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Apr 30 01:18:34.233303 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 01:18:34.256071 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 01:18:34.266070 kernel: loop0: detected capacity change from 0 to 8 Apr 30 01:18:34.272649 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 01:18:34.275053 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 01:18:34.276118 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 30 01:18:34.304148 kernel: loop1: detected capacity change from 0 to 114328 Apr 30 01:18:34.334278 kernel: loop2: detected capacity change from 0 to 114432 Apr 30 01:18:34.372079 kernel: loop3: detected capacity change from 0 to 194096 Apr 30 01:18:34.409055 kernel: loop4: detected capacity change from 0 to 8 Apr 30 01:18:34.411142 kernel: loop5: detected capacity change from 0 to 114328 Apr 30 01:18:34.430055 kernel: loop6: detected capacity change from 0 to 114432 Apr 30 01:18:34.445157 kernel: loop7: detected capacity change from 0 to 194096 Apr 30 01:18:34.463276 (sd-merge)[1337]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Apr 30 01:18:34.464202 (sd-merge)[1337]: Merged extensions into '/usr'. Apr 30 01:18:34.472203 systemd[1]: Reloading requested from client PID 1322 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 01:18:34.472222 systemd[1]: Reloading... Apr 30 01:18:34.557055 zram_generator::config[1365]: No configuration found. Apr 30 01:18:34.650505 ldconfig[1318]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 01:18:34.691072 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Apr 30 01:18:34.749824 systemd[1]: Reloading finished in 277 ms. Apr 30 01:18:34.769853 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 01:18:34.770975 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 01:18:34.784289 systemd[1]: Starting ensure-sysext.service... Apr 30 01:18:34.796359 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 01:18:34.804604 systemd[1]: Reloading requested from client PID 1409 ('systemctl') (unit ensure-sysext.service)... Apr 30 01:18:34.804624 systemd[1]: Reloading... Apr 30 01:18:34.815546 systemd-tmpfiles[1410]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 01:18:34.816305 systemd-tmpfiles[1410]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 01:18:34.817432 systemd-tmpfiles[1410]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 01:18:34.817863 systemd-tmpfiles[1410]: ACLs are not supported, ignoring. Apr 30 01:18:34.818060 systemd-tmpfiles[1410]: ACLs are not supported, ignoring. Apr 30 01:18:34.821843 systemd-tmpfiles[1410]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 01:18:34.821852 systemd-tmpfiles[1410]: Skipping /boot Apr 30 01:18:34.831842 systemd-tmpfiles[1410]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 01:18:34.831983 systemd-tmpfiles[1410]: Skipping /boot Apr 30 01:18:34.885044 zram_generator::config[1440]: No configuration found. Apr 30 01:18:34.998590 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 01:18:35.060662 systemd[1]: Reloading finished in 255 ms. 
Apr 30 01:18:35.078293 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 01:18:35.090218 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 01:18:35.099239 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 01:18:35.104425 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 01:18:35.111224 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 01:18:35.116749 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 01:18:35.128700 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 01:18:35.133121 systemd-networkd[1245]: eth1: Gained IPv6LL Apr 30 01:18:35.133783 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 01:18:35.149944 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 01:18:35.161524 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 01:18:35.164210 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 01:18:35.165113 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 01:18:35.174925 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 01:18:35.175185 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 01:18:35.180445 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 01:18:35.184658 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Apr 30 01:18:35.196683 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 01:18:35.196862 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 01:18:35.199799 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 01:18:35.199977 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 01:18:35.202642 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 01:18:35.202851 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 01:18:35.207163 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 01:18:35.225542 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 01:18:35.227204 augenrules[1521]: No rules Apr 30 01:18:35.236313 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 01:18:35.236998 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 01:18:35.237061 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 01:18:35.237139 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 01:18:35.243309 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 01:18:35.243893 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 01:18:35.244622 systemd[1]: Finished ensure-sysext.service. Apr 30 01:18:35.247415 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Apr 30 01:18:35.248418 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 01:18:35.252398 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 01:18:35.268421 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 30 01:18:35.278208 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 01:18:35.291940 systemd-resolved[1488]: Positive Trust Anchors: Apr 30 01:18:35.292429 systemd-resolved[1488]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 01:18:35.292528 systemd-resolved[1488]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 01:18:35.297301 systemd-resolved[1488]: Using system hostname 'ci-4081-3-3-5-cafd7e9e76'. Apr 30 01:18:35.301471 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 01:18:35.304401 systemd[1]: Reached target network.target - Network. Apr 30 01:18:35.305190 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 01:18:35.306051 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 01:18:35.323250 systemd-networkd[1245]: eth0: Gained IPv6LL Apr 30 01:18:35.332263 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 01:18:35.334195 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 01:18:35.335184 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Apr 30 01:18:35.335981 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 01:18:35.336831 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 01:18:35.337789 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 01:18:35.337833 systemd[1]: Reached target paths.target - Path Units. Apr 30 01:18:35.338621 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 01:18:35.339559 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 01:18:35.340473 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 01:18:35.341298 systemd[1]: Reached target timers.target - Timer Units. Apr 30 01:18:35.343868 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 01:18:35.346745 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 01:18:35.350933 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 01:18:35.355515 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 01:18:35.358837 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 01:18:35.361508 systemd[1]: Reached target basic.target - Basic System. Apr 30 01:18:35.364513 systemd[1]: System is tainted: cgroupsv1 Apr 30 01:18:35.364708 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 01:18:35.364884 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 01:18:35.377485 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 01:18:35.382421 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
Apr 30 01:18:35.390296 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 30 01:18:35.396321 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 30 01:18:35.399867 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 30 01:18:35.401918 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 30 01:18:35.407448 jq[1547]: false
Apr 30 01:18:35.410353 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 01:18:35.420248 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 30 01:18:35.426279 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 30 01:18:35.440129 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 30 01:18:35.444684 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Apr 30 01:18:35.453237 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 30 01:18:35.461227 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 30 01:18:35.471181 extend-filesystems[1550]: Found loop4
Apr 30 01:18:35.480766 extend-filesystems[1550]: Found loop5
Apr 30 01:18:35.480766 extend-filesystems[1550]: Found loop6
Apr 30 01:18:35.480766 extend-filesystems[1550]: Found loop7
Apr 30 01:18:35.480766 extend-filesystems[1550]: Found sda
Apr 30 01:18:35.480766 extend-filesystems[1550]: Found sda1
Apr 30 01:18:35.480766 extend-filesystems[1550]: Found sda2
Apr 30 01:18:35.480766 extend-filesystems[1550]: Found sda3
Apr 30 01:18:35.480766 extend-filesystems[1550]: Found usr
Apr 30 01:18:35.480766 extend-filesystems[1550]: Found sda4
Apr 30 01:18:35.480766 extend-filesystems[1550]: Found sda6
Apr 30 01:18:35.480766 extend-filesystems[1550]: Found sda7
Apr 30 01:18:35.480766 extend-filesystems[1550]: Found sda9
Apr 30 01:18:35.480766 extend-filesystems[1550]: Checking size of /dev/sda9
Apr 30 01:18:35.503682 coreos-metadata[1545]: Apr 30 01:18:35.474 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Apr 30 01:18:35.503682 coreos-metadata[1545]: Apr 30 01:18:35.480 INFO Fetch successful
Apr 30 01:18:35.503682 coreos-metadata[1545]: Apr 30 01:18:35.481 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Apr 30 01:18:35.503682 coreos-metadata[1545]: Apr 30 01:18:35.481 INFO Fetch successful
Apr 30 01:18:35.478171 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 30 01:18:35.479565 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 30 01:18:35.485920 systemd[1]: Starting update-engine.service - Update Engine...
Apr 30 01:18:35.491083 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 30 01:18:35.505870 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 30 01:18:35.506754 dbus-daemon[1546]: [system] SELinux support is enabled
Apr 30 01:18:35.506181 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 30 01:18:35.518872 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 30 01:18:35.530604 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 30 01:18:35.537283 systemd-timesyncd[1537]: Contacted time server 90.187.112.137:123 (0.flatcar.pool.ntp.org).
Apr 30 01:18:35.537349 systemd-timesyncd[1537]: Initial clock synchronization to Wed 2025-04-30 01:18:35.253615 UTC.
Apr 30 01:18:35.538232 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 30 01:18:35.538485 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 30 01:18:35.557956 jq[1573]: true
Apr 30 01:18:35.569934 update_engine[1571]: I20250430 01:18:35.569288 1571 main.cc:92] Flatcar Update Engine starting
Apr 30 01:18:35.591225 update_engine[1571]: I20250430 01:18:35.584904 1571 update_check_scheduler.cc:74] Next update check in 8m56s
Apr 30 01:18:35.591337 extend-filesystems[1550]: Resized partition /dev/sda9
Apr 30 01:18:35.598477 (ntainerd)[1596]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 30 01:18:35.606776 jq[1599]: true
Apr 30 01:18:35.618956 extend-filesystems[1600]: resize2fs 1.47.1 (20-May-2024)
Apr 30 01:18:35.604061 systemd-logind[1565]: New seat seat0.
Apr 30 01:18:35.606781 systemd-logind[1565]: Watching system buttons on /dev/input/event0 (Power Button)
Apr 30 01:18:35.606798 systemd-logind[1565]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Apr 30 01:18:35.636149 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Apr 30 01:18:35.610547 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 30 01:18:35.611458 systemd[1]: motdgen.service: Deactivated successfully.
Apr 30 01:18:35.611724 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 30 01:18:35.637970 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 30 01:18:35.639148 dbus-daemon[1546]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 30 01:18:35.639393 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 30 01:18:35.641421 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 30 01:18:35.641453 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 30 01:18:35.649550 systemd[1]: Started update-engine.service - Update Engine.
Apr 30 01:18:35.651225 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 30 01:18:35.652312 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 30 01:18:35.655656 tar[1587]: linux-arm64/helm
Apr 30 01:18:35.720855 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 30 01:18:35.721949 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 30 01:18:35.786744 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1258)
Apr 30 01:18:35.849491 bash[1642]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 01:18:35.856042 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Apr 30 01:18:35.856961 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 30 01:18:35.882384 systemd[1]: Starting sshkeys.service...
Apr 30 01:18:35.889049 extend-filesystems[1600]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 30 01:18:35.889049 extend-filesystems[1600]: old_desc_blocks = 1, new_desc_blocks = 5
Apr 30 01:18:35.889049 extend-filesystems[1600]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Apr 30 01:18:35.892135 extend-filesystems[1550]: Resized filesystem in /dev/sda9
Apr 30 01:18:35.892135 extend-filesystems[1550]: Found sr0
Apr 30 01:18:35.895347 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 30 01:18:35.895610 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 30 01:18:35.914821 containerd[1596]: time="2025-04-30T01:18:35.910971200Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 30 01:18:35.929300 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 30 01:18:35.938468 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 30 01:18:35.970395 containerd[1596]: time="2025-04-30T01:18:35.970339920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 30 01:18:35.977517 containerd[1596]: time="2025-04-30T01:18:35.973932640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 30 01:18:35.977729 containerd[1596]: time="2025-04-30T01:18:35.977700640Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 30 01:18:35.977807 containerd[1596]: time="2025-04-30T01:18:35.977794680Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 30 01:18:35.978150 coreos-metadata[1654]: Apr 30 01:18:35.977 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Apr 30 01:18:35.978693 containerd[1596]: time="2025-04-30T01:18:35.978660440Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 30 01:18:35.978803 containerd[1596]: time="2025-04-30T01:18:35.978787760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 30 01:18:35.979187 containerd[1596]: time="2025-04-30T01:18:35.979162560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 01:18:35.979363 coreos-metadata[1654]: Apr 30 01:18:35.979 INFO Fetch successful
Apr 30 01:18:35.979462 containerd[1596]: time="2025-04-30T01:18:35.979437080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 30 01:18:35.981484 containerd[1596]: time="2025-04-30T01:18:35.981449240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 01:18:35.981854 containerd[1596]: time="2025-04-30T01:18:35.981834080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 30 01:18:35.981944 containerd[1596]: time="2025-04-30T01:18:35.981928560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 01:18:35.982121 containerd[1596]: time="2025-04-30T01:18:35.982057240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 30 01:18:35.982478 containerd[1596]: time="2025-04-30T01:18:35.982271200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 30 01:18:35.982627 containerd[1596]: time="2025-04-30T01:18:35.982609040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 30 01:18:35.982969 unknown[1654]: wrote ssh authorized keys file for user: core
Apr 30 01:18:35.983668 containerd[1596]: time="2025-04-30T01:18:35.983283160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 01:18:35.983668 containerd[1596]: time="2025-04-30T01:18:35.983306840Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 30 01:18:35.983668 containerd[1596]: time="2025-04-30T01:18:35.983398440Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 30 01:18:35.983668 containerd[1596]: time="2025-04-30T01:18:35.983452440Z" level=info msg="metadata content store policy set" policy=shared
Apr 30 01:18:35.993336 containerd[1596]: time="2025-04-30T01:18:35.991763120Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 30 01:18:35.993336 containerd[1596]: time="2025-04-30T01:18:35.992895480Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 30 01:18:35.993336 containerd[1596]: time="2025-04-30T01:18:35.992986440Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 30 01:18:35.993336 containerd[1596]: time="2025-04-30T01:18:35.993006800Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 30 01:18:35.993336 containerd[1596]: time="2025-04-30T01:18:35.993032200Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 30 01:18:35.993336 containerd[1596]: time="2025-04-30T01:18:35.993207600Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 30 01:18:35.997031 containerd[1596]: time="2025-04-30T01:18:35.995453520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 30 01:18:35.997031 containerd[1596]: time="2025-04-30T01:18:35.995609520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 30 01:18:35.997031 containerd[1596]: time="2025-04-30T01:18:35.995630120Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 30 01:18:35.997031 containerd[1596]: time="2025-04-30T01:18:35.995643240Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 30 01:18:35.997031 containerd[1596]: time="2025-04-30T01:18:35.995657160Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 30 01:18:35.997031 containerd[1596]: time="2025-04-30T01:18:35.995669800Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 30 01:18:35.997031 containerd[1596]: time="2025-04-30T01:18:35.995680880Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 30 01:18:35.997031 containerd[1596]: time="2025-04-30T01:18:35.995695920Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 30 01:18:35.997031 containerd[1596]: time="2025-04-30T01:18:35.995791480Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 30 01:18:35.997031 containerd[1596]: time="2025-04-30T01:18:35.995805600Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 30 01:18:35.997031 containerd[1596]: time="2025-04-30T01:18:35.995817320Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 30 01:18:35.997031 containerd[1596]: time="2025-04-30T01:18:35.995829120Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 30 01:18:35.997031 containerd[1596]: time="2025-04-30T01:18:35.995851400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 30 01:18:35.997031 containerd[1596]: time="2025-04-30T01:18:35.995864680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 30 01:18:35.997343 containerd[1596]: time="2025-04-30T01:18:35.995877000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 30 01:18:35.997343 containerd[1596]: time="2025-04-30T01:18:35.995893000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 30 01:18:35.997343 containerd[1596]: time="2025-04-30T01:18:35.995904760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 30 01:18:35.997343 containerd[1596]: time="2025-04-30T01:18:35.995921040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 30 01:18:35.997343 containerd[1596]: time="2025-04-30T01:18:35.995937920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 30 01:18:35.997343 containerd[1596]: time="2025-04-30T01:18:35.995959800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 30 01:18:35.997343 containerd[1596]: time="2025-04-30T01:18:35.995973120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 30 01:18:35.997343 containerd[1596]: time="2025-04-30T01:18:35.995987760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 30 01:18:35.997343 containerd[1596]: time="2025-04-30T01:18:35.996001640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 30 01:18:35.997343 containerd[1596]: time="2025-04-30T01:18:35.996033400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 30 01:18:35.997343 containerd[1596]: time="2025-04-30T01:18:35.996049040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 30 01:18:35.997343 containerd[1596]: time="2025-04-30T01:18:35.996115720Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 30 01:18:35.997343 containerd[1596]: time="2025-04-30T01:18:35.996143280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 30 01:18:35.997343 containerd[1596]: time="2025-04-30T01:18:35.996156280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 30 01:18:35.997343 containerd[1596]: time="2025-04-30T01:18:35.996167800Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 30 01:18:35.997596 containerd[1596]: time="2025-04-30T01:18:35.996275720Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 30 01:18:35.997596 containerd[1596]: time="2025-04-30T01:18:35.996294160Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 30 01:18:35.997596 containerd[1596]: time="2025-04-30T01:18:35.996306600Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 30 01:18:35.997596 containerd[1596]: time="2025-04-30T01:18:35.996320160Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 30 01:18:35.997596 containerd[1596]: time="2025-04-30T01:18:35.996332760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 30 01:18:35.997596 containerd[1596]: time="2025-04-30T01:18:35.996349160Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 30 01:18:35.997596 containerd[1596]: time="2025-04-30T01:18:35.996485200Z" level=info msg="NRI interface is disabled by configuration."
Apr 30 01:18:35.997596 containerd[1596]: time="2025-04-30T01:18:35.996499400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 30 01:18:35.997756 containerd[1596]: time="2025-04-30T01:18:35.996779160Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 30 01:18:35.997756 containerd[1596]: time="2025-04-30T01:18:35.996836960Z" level=info msg="Connect containerd service"
Apr 30 01:18:35.997756 containerd[1596]: time="2025-04-30T01:18:35.996867640Z" level=info msg="using legacy CRI server"
Apr 30 01:18:35.997756 containerd[1596]: time="2025-04-30T01:18:35.996874960Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 30 01:18:35.997756 containerd[1596]: time="2025-04-30T01:18:35.996959160Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 30 01:18:36.004483 containerd[1596]: time="2025-04-30T01:18:36.001836438Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 01:18:36.004483 containerd[1596]: time="2025-04-30T01:18:36.002379487Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 30 01:18:36.004483 containerd[1596]: time="2025-04-30T01:18:36.002419998Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 30 01:18:36.004483 containerd[1596]: time="2025-04-30T01:18:36.002592617Z" level=info msg="Start subscribing containerd event"
Apr 30 01:18:36.004483 containerd[1596]: time="2025-04-30T01:18:36.002631508Z" level=info msg="Start recovering state"
Apr 30 01:18:36.004483 containerd[1596]: time="2025-04-30T01:18:36.002694397Z" level=info msg="Start event monitor"
Apr 30 01:18:36.004483 containerd[1596]: time="2025-04-30T01:18:36.002712184Z" level=info msg="Start snapshots syncer"
Apr 30 01:18:36.004483 containerd[1596]: time="2025-04-30T01:18:36.002721366Z" level=info msg="Start cni network conf syncer for default"
Apr 30 01:18:36.004483 containerd[1596]: time="2025-04-30T01:18:36.002727887Z" level=info msg="Start streaming server"
Apr 30 01:18:36.004483 containerd[1596]: time="2025-04-30T01:18:36.002838773Z" level=info msg="containerd successfully booted in 0.093035s"
Apr 30 01:18:36.038691 systemd[1]: Started containerd.service - containerd container runtime.
Apr 30 01:18:36.059034 update-ssh-keys[1662]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 01:18:36.059774 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 30 01:18:36.073609 systemd[1]: Finished sshkeys.service.
Apr 30 01:18:36.099838 locksmithd[1619]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 30 01:18:36.254656 sshd_keygen[1604]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 30 01:18:36.289782 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 30 01:18:36.303454 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 30 01:18:36.316471 systemd[1]: issuegen.service: Deactivated successfully.
Apr 30 01:18:36.316723 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 30 01:18:36.327254 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 01:18:36.347089 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 01:18:36.356211 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 01:18:36.366472 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Apr 30 01:18:36.369276 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 01:18:36.453137 tar[1587]: linux-arm64/LICENSE Apr 30 01:18:36.453340 tar[1587]: linux-arm64/README.md Apr 30 01:18:36.468926 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 01:18:36.656252 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:18:36.657520 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 01:18:36.659063 systemd[1]: Startup finished in 6.126s (kernel) + 4.648s (userspace) = 10.775s. Apr 30 01:18:36.672954 (kubelet)[1704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 01:18:37.243937 kubelet[1704]: E0430 01:18:37.243897 1704 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 01:18:37.265241 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 01:18:37.265461 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 01:18:47.454747 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 01:18:47.466225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:18:47.606239 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 01:18:47.611814 (kubelet)[1730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 01:18:47.662625 kubelet[1730]: E0430 01:18:47.662491 1730 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 01:18:47.665147 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 01:18:47.665435 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 01:18:57.705226 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 01:18:57.718335 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:18:57.847236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:18:57.857618 (kubelet)[1751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 01:18:57.914632 kubelet[1751]: E0430 01:18:57.914559 1751 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 01:18:57.920214 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 01:18:57.920435 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 01:19:07.955060 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 30 01:19:07.968543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 30 01:19:08.084369 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:19:08.085209 (kubelet)[1773]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 01:19:08.147296 kubelet[1773]: E0430 01:19:08.147136 1773 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 01:19:08.151550 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 01:19:08.151796 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 01:19:18.205098 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 30 01:19:18.217364 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:19:18.334279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:19:18.348553 (kubelet)[1794]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 01:19:18.409237 kubelet[1794]: E0430 01:19:18.409173 1794 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 01:19:18.412590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 01:19:18.413190 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 30 01:19:20.929624 update_engine[1571]: I20250430 01:19:20.929481 1571 update_attempter.cc:509] Updating boot flags...
Apr 30 01:19:20.992053 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1812)
Apr 30 01:19:21.062127 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1813)
Apr 30 01:19:28.455263 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Apr 30 01:19:28.467371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 01:19:28.610245 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 01:19:28.614775 (kubelet)[1833]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 01:19:28.669481 kubelet[1833]: E0430 01:19:28.669383 1833 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 01:19:28.675238 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 01:19:28.675470 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 01:19:38.705158 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Apr 30 01:19:38.713441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 01:19:38.841288 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 01:19:38.847297 (kubelet)[1854]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 01:19:38.894795 kubelet[1854]: E0430 01:19:38.894732 1854 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 01:19:38.897601 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 01:19:38.898624 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 01:19:48.954922 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Apr 30 01:19:48.965264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 01:19:49.112450 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 01:19:49.139783 (kubelet)[1875]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 01:19:49.190221 kubelet[1875]: E0430 01:19:49.190157 1875 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 01:19:49.192843 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 01:19:49.193068 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 01:19:59.204973 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Apr 30 01:19:59.219851 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 01:19:59.343251 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 01:19:59.359844 (kubelet)[1895]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 01:19:59.409141 kubelet[1895]: E0430 01:19:59.409046 1895 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 01:19:59.413362 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 01:19:59.413672 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 01:20:09.454891 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Apr 30 01:20:09.462299 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 01:20:09.587315 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 01:20:09.598799 (kubelet)[1916]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 01:20:09.648082 kubelet[1916]: E0430 01:20:09.647985 1916 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 01:20:09.652205 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 01:20:09.652739 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 01:20:15.708750 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 30 01:20:15.715496 systemd[1]: Started sshd@0-78.47.197.16:22-139.178.68.195:47228.service - OpenSSH per-connection server daemon (139.178.68.195:47228).
Apr 30 01:20:16.691701 sshd[1925]: Accepted publickey for core from 139.178.68.195 port 47228 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:20:16.694671 sshd[1925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:20:16.705320 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 30 01:20:16.711632 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 30 01:20:16.714791 systemd-logind[1565]: New session 1 of user core.
Apr 30 01:20:16.728510 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 30 01:20:16.742568 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 30 01:20:16.747148 (systemd)[1931]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 30 01:20:16.858843 systemd[1931]: Queued start job for default target default.target.
Apr 30 01:20:16.859716 systemd[1931]: Created slice app.slice - User Application Slice.
Apr 30 01:20:16.859753 systemd[1931]: Reached target paths.target - Paths.
Apr 30 01:20:16.859766 systemd[1931]: Reached target timers.target - Timers.
Apr 30 01:20:16.876191 systemd[1931]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 30 01:20:16.888307 systemd[1931]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 30 01:20:16.888384 systemd[1931]: Reached target sockets.target - Sockets.
Apr 30 01:20:16.888408 systemd[1931]: Reached target basic.target - Basic System.
Apr 30 01:20:16.888476 systemd[1931]: Reached target default.target - Main User Target.
Apr 30 01:20:16.888510 systemd[1931]: Startup finished in 134ms.
Apr 30 01:20:16.889065 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 30 01:20:16.893398 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 30 01:20:17.585492 systemd[1]: Started sshd@1-78.47.197.16:22-139.178.68.195:47230.service - OpenSSH per-connection server daemon (139.178.68.195:47230).
Apr 30 01:20:18.565812 sshd[1943]: Accepted publickey for core from 139.178.68.195 port 47230 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:20:18.567649 sshd[1943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:20:18.573726 systemd-logind[1565]: New session 2 of user core.
Apr 30 01:20:18.584524 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 30 01:20:19.252396 sshd[1943]: pam_unix(sshd:session): session closed for user core
Apr 30 01:20:19.257693 systemd[1]: sshd@1-78.47.197.16:22-139.178.68.195:47230.service: Deactivated successfully.
Apr 30 01:20:19.261618 systemd-logind[1565]: Session 2 logged out. Waiting for processes to exit.
Apr 30 01:20:19.261899 systemd[1]: session-2.scope: Deactivated successfully.
Apr 30 01:20:19.263184 systemd-logind[1565]: Removed session 2.
Apr 30 01:20:19.418373 systemd[1]: Started sshd@2-78.47.197.16:22-139.178.68.195:47238.service - OpenSSH per-connection server daemon (139.178.68.195:47238).
Apr 30 01:20:19.704792 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Apr 30 01:20:19.718435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 01:20:19.840258 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 01:20:19.851726 (kubelet)[1965]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 01:20:19.900623 kubelet[1965]: E0430 01:20:19.900545 1965 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 01:20:19.904268 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 01:20:19.904738 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 01:20:20.420471 sshd[1951]: Accepted publickey for core from 139.178.68.195 port 47238 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:20:20.422394 sshd[1951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:20:20.429602 systemd-logind[1565]: New session 3 of user core.
Apr 30 01:20:20.436565 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 30 01:20:21.107343 sshd[1951]: pam_unix(sshd:session): session closed for user core
Apr 30 01:20:21.112849 systemd[1]: sshd@2-78.47.197.16:22-139.178.68.195:47238.service: Deactivated successfully.
Apr 30 01:20:21.116755 systemd[1]: session-3.scope: Deactivated successfully.
Apr 30 01:20:21.117647 systemd-logind[1565]: Session 3 logged out. Waiting for processes to exit.
Apr 30 01:20:21.118768 systemd-logind[1565]: Removed session 3.
Apr 30 01:20:21.280532 systemd[1]: Started sshd@3-78.47.197.16:22-139.178.68.195:47246.service - OpenSSH per-connection server daemon (139.178.68.195:47246).
Apr 30 01:20:22.273923 sshd[1980]: Accepted publickey for core from 139.178.68.195 port 47246 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:20:22.276559 sshd[1980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:20:22.288279 systemd-logind[1565]: New session 4 of user core.
Apr 30 01:20:22.297426 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 30 01:20:22.967200 sshd[1980]: pam_unix(sshd:session): session closed for user core
Apr 30 01:20:22.974150 systemd[1]: sshd@3-78.47.197.16:22-139.178.68.195:47246.service: Deactivated successfully.
Apr 30 01:20:22.974201 systemd-logind[1565]: Session 4 logged out. Waiting for processes to exit.
Apr 30 01:20:22.977585 systemd[1]: session-4.scope: Deactivated successfully.
Apr 30 01:20:22.978849 systemd-logind[1565]: Removed session 4.
Apr 30 01:20:23.139527 systemd[1]: Started sshd@4-78.47.197.16:22-139.178.68.195:47250.service - OpenSSH per-connection server daemon (139.178.68.195:47250).
Apr 30 01:20:24.118491 sshd[1988]: Accepted publickey for core from 139.178.68.195 port 47250 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:20:24.121119 sshd[1988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:20:24.127932 systemd-logind[1565]: New session 5 of user core.
Apr 30 01:20:24.137603 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 30 01:20:24.653481 sudo[1992]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 30 01:20:24.653774 sudo[1992]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 01:20:24.668308 sudo[1992]: pam_unix(sudo:session): session closed for user root
Apr 30 01:20:24.829404 sshd[1988]: pam_unix(sshd:session): session closed for user core
Apr 30 01:20:24.836411 systemd-logind[1565]: Session 5 logged out. Waiting for processes to exit.
Apr 30 01:20:24.836892 systemd[1]: sshd@4-78.47.197.16:22-139.178.68.195:47250.service: Deactivated successfully.
Apr 30 01:20:24.839174 systemd[1]: session-5.scope: Deactivated successfully.
Apr 30 01:20:24.840659 systemd-logind[1565]: Removed session 5.
Apr 30 01:20:24.994325 systemd[1]: Started sshd@5-78.47.197.16:22-139.178.68.195:47266.service - OpenSSH per-connection server daemon (139.178.68.195:47266).
Apr 30 01:20:25.977948 sshd[1997]: Accepted publickey for core from 139.178.68.195 port 47266 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:20:25.980143 sshd[1997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:20:25.987347 systemd-logind[1565]: New session 6 of user core.
Apr 30 01:20:25.997397 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 30 01:20:26.503061 sudo[2002]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 30 01:20:26.503355 sudo[2002]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 01:20:26.507329 sudo[2002]: pam_unix(sudo:session): session closed for user root
Apr 30 01:20:26.513335 sudo[2001]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 30 01:20:26.513759 sudo[2001]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 01:20:26.537301 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 30 01:20:26.539619 auditctl[2005]: No rules
Apr 30 01:20:26.540499 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 01:20:26.540857 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 30 01:20:26.545238 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 01:20:26.578600 augenrules[2024]: No rules
Apr 30 01:20:26.581165 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 30 01:20:26.583712 sudo[2001]: pam_unix(sudo:session): session closed for user root
Apr 30 01:20:26.745414 sshd[1997]: pam_unix(sshd:session): session closed for user core
Apr 30 01:20:26.749813 systemd[1]: sshd@5-78.47.197.16:22-139.178.68.195:47266.service: Deactivated successfully.
Apr 30 01:20:26.754399 systemd[1]: session-6.scope: Deactivated successfully.
Apr 30 01:20:26.757538 systemd-logind[1565]: Session 6 logged out. Waiting for processes to exit.
Apr 30 01:20:26.758761 systemd-logind[1565]: Removed session 6.
Apr 30 01:20:26.908352 systemd[1]: Started sshd@6-78.47.197.16:22-139.178.68.195:44178.service - OpenSSH per-connection server daemon (139.178.68.195:44178).
Apr 30 01:20:27.900458 sshd[2033]: Accepted publickey for core from 139.178.68.195 port 44178 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:20:27.902765 sshd[2033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:20:27.907889 systemd-logind[1565]: New session 7 of user core.
Apr 30 01:20:27.915513 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 30 01:20:28.425491 sudo[2037]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 30 01:20:28.425788 sudo[2037]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 01:20:28.738442 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 30 01:20:28.741022 (dockerd)[2053]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 30 01:20:28.999829 dockerd[2053]: time="2025-04-30T01:20:28.999675446Z" level=info msg="Starting up"
Apr 30 01:20:29.073510 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport177286411-merged.mount: Deactivated successfully.
Apr 30 01:20:29.099598 dockerd[2053]: time="2025-04-30T01:20:29.099501277Z" level=info msg="Loading containers: start."
Apr 30 01:20:29.208281 kernel: Initializing XFRM netlink socket
Apr 30 01:20:29.299869 systemd-networkd[1245]: docker0: Link UP
Apr 30 01:20:29.316960 dockerd[2053]: time="2025-04-30T01:20:29.316869890Z" level=info msg="Loading containers: done."
Apr 30 01:20:29.336995 dockerd[2053]: time="2025-04-30T01:20:29.336932331Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 30 01:20:29.337169 dockerd[2053]: time="2025-04-30T01:20:29.337079537Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 30 01:20:29.337204 dockerd[2053]: time="2025-04-30T01:20:29.337188141Z" level=info msg="Daemon has completed initialization"
Apr 30 01:20:29.371043 dockerd[2053]: time="2025-04-30T01:20:29.369879222Z" level=info msg="API listen on /run/docker.sock"
Apr 30 01:20:29.370274 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 30 01:20:29.955060 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Apr 30 01:20:29.969349 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 01:20:30.070840 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2132367668-merged.mount: Deactivated successfully.
Apr 30 01:20:30.082656 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 01:20:30.095722 (kubelet)[2206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 01:20:30.150715 kubelet[2206]: E0430 01:20:30.150650 2206 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 01:20:30.153592 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 01:20:30.153933 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 01:20:30.634223 containerd[1596]: time="2025-04-30T01:20:30.633925644Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
Apr 30 01:20:31.330252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4174992324.mount: Deactivated successfully.
Apr 30 01:20:33.378042 containerd[1596]: time="2025-04-30T01:20:33.376005882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:20:33.378042 containerd[1596]: time="2025-04-30T01:20:33.377213886Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794242"
Apr 30 01:20:33.378921 containerd[1596]: time="2025-04-30T01:20:33.378880147Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:20:33.383464 containerd[1596]: time="2025-04-30T01:20:33.383419792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:20:33.385337 containerd[1596]: time="2025-04-30T01:20:33.385291421Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.751315575s"
Apr 30 01:20:33.385337 containerd[1596]: time="2025-04-30T01:20:33.385338982Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
Apr 30 01:20:33.411733 containerd[1596]: time="2025-04-30T01:20:33.411694983Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
Apr 30 01:20:37.130101 containerd[1596]: time="2025-04-30T01:20:37.129978158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:20:37.132382 containerd[1596]: time="2025-04-30T01:20:37.132002709Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855570"
Apr 30 01:20:37.133725 containerd[1596]: time="2025-04-30T01:20:37.133657087Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:20:37.139681 containerd[1596]: time="2025-04-30T01:20:37.139602816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:20:37.141463 containerd[1596]: time="2025-04-30T01:20:37.141409079Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 3.729667414s"
Apr 30 01:20:37.141463 containerd[1596]: time="2025-04-30T01:20:37.141456281Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
Apr 30 01:20:37.165213 containerd[1596]: time="2025-04-30T01:20:37.164970347Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
Apr 30 01:20:38.805690 containerd[1596]: time="2025-04-30T01:20:38.804457793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:20:38.805690 containerd[1596]: time="2025-04-30T01:20:38.805539071Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263965"
Apr 30 01:20:38.806316 containerd[1596]: time="2025-04-30T01:20:38.806212455Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:20:38.811834 containerd[1596]: time="2025-04-30T01:20:38.811781409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:20:38.813641 containerd[1596]: time="2025-04-30T01:20:38.813570591Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.648556402s"
Apr 30 01:20:38.813641 containerd[1596]: time="2025-04-30T01:20:38.813627633Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
Apr 30 01:20:38.841600 containerd[1596]: time="2025-04-30T01:20:38.841558566Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
Apr 30 01:20:40.166913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2716124850.mount: Deactivated successfully.
Apr 30 01:20:40.169449 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Apr 30 01:20:40.178245 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 01:20:40.307301 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 01:20:40.308791 (kubelet)[2314]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 01:20:40.352885 kubelet[2314]: E0430 01:20:40.352618 2314 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 01:20:40.355146 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 01:20:40.355337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 01:20:40.588058 containerd[1596]: time="2025-04-30T01:20:40.587971284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:20:40.589628 containerd[1596]: time="2025-04-30T01:20:40.589571139Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775731"
Apr 30 01:20:40.591022 containerd[1596]: time="2025-04-30T01:20:40.590949266Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:20:40.594908 containerd[1596]: time="2025-04-30T01:20:40.594073213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:20:40.594908 containerd[1596]: time="2025-04-30T01:20:40.594778077Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.75317451s"
Apr 30 01:20:40.594908 containerd[1596]: time="2025-04-30T01:20:40.594812839Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
Apr 30 01:20:40.616676 containerd[1596]: time="2025-04-30T01:20:40.616635506Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Apr 30 01:20:41.245511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3369109178.mount: Deactivated successfully.
Apr 30 01:20:42.528322 containerd[1596]: time="2025-04-30T01:20:42.528230217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:20:42.530470 containerd[1596]: time="2025-04-30T01:20:42.530036238Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461"
Apr 30 01:20:42.532130 containerd[1596]: time="2025-04-30T01:20:42.532086107Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:20:42.538735 containerd[1596]: time="2025-04-30T01:20:42.538639928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:20:42.541121 containerd[1596]: time="2025-04-30T01:20:42.540849122Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.92401609s"
Apr 30 01:20:42.541121 containerd[1596]: time="2025-04-30T01:20:42.540920285Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Apr 30 01:20:42.564565 containerd[1596]: time="2025-04-30T01:20:42.564518640Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Apr 30 01:20:43.114694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2450063317.mount: Deactivated successfully.
Apr 30 01:20:43.127847 containerd[1596]: time="2025-04-30T01:20:43.125735924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:20:43.130084 containerd[1596]: time="2025-04-30T01:20:43.130035108Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841"
Apr 30 01:20:43.132600 containerd[1596]: time="2025-04-30T01:20:43.131407594Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:20:43.133917 containerd[1596]: time="2025-04-30T01:20:43.133847715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:20:43.135721 containerd[1596]: time="2025-04-30T01:20:43.135657416Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 570.805684ms"
Apr 30 01:20:43.135721 containerd[1596]: time="2025-04-30T01:20:43.135714338Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Apr 30 01:20:43.167900 containerd[1596]: time="2025-04-30T01:20:43.167858173Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Apr 30 01:20:43.805367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2917920733.mount: Deactivated successfully.
Apr 30 01:20:49.853052 containerd[1596]: time="2025-04-30T01:20:49.852061464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:20:49.855320 containerd[1596]: time="2025-04-30T01:20:49.854618666Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552"
Apr 30 01:20:49.857050 containerd[1596]: time="2025-04-30T01:20:49.856399363Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:20:49.861202 containerd[1596]: time="2025-04-30T01:20:49.861161955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 01:20:49.863213 containerd[1596]: time="2025-04-30T01:20:49.863174100Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 6.695271766s"
Apr 30 01:20:49.863346 containerd[1596]: time="2025-04-30T01:20:49.863331745Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Apr 30 01:20:50.455411 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Apr 30 01:20:50.466265 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 01:20:50.580291 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 01:20:50.585118 (kubelet)[2450]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 01:20:50.638358 kubelet[2450]: E0430 01:20:50.638291 2450 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 01:20:50.642221 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 01:20:50.642452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 01:20:56.022428 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 01:20:56.036677 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 01:20:56.070522 systemd[1]: Reloading requested from client PID 2515 ('systemctl') (unit session-7.scope)...
Apr 30 01:20:56.070700 systemd[1]: Reloading...
Apr 30 01:20:56.174037 zram_generator::config[2556]: No configuration found.
Apr 30 01:20:56.296215 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 01:20:56.367835 systemd[1]: Reloading finished in 296 ms.
Apr 30 01:20:56.411889 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 30 01:20:56.412180 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 30 01:20:56.412742 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 01:20:56.420876 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 01:20:56.552278 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 01:20:56.557209 (kubelet)[2612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 01:20:56.613176 kubelet[2612]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 01:20:56.613539 kubelet[2612]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Apr 30 01:20:56.613591 kubelet[2612]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 01:20:56.613863 kubelet[2612]: I0430 01:20:56.613815 2612 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 01:20:57.245354 kubelet[2612]: I0430 01:20:57.245231 2612 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 01:20:57.245354 kubelet[2612]: I0430 01:20:57.245266 2612 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 01:20:57.245614 kubelet[2612]: I0430 01:20:57.245518 2612 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 01:20:57.267026 kubelet[2612]: I0430 01:20:57.266514 2612 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 01:20:57.267026 kubelet[2612]: E0430 01:20:57.266981 2612 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://78.47.197.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 78.47.197.16:6443: connect: connection refused Apr 30 01:20:57.285046 kubelet[2612]: I0430 01:20:57.284990 2612 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 01:20:57.287263 kubelet[2612]: I0430 01:20:57.287152 2612 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 01:20:57.287455 kubelet[2612]: I0430 01:20:57.287218 2612 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-5-cafd7e9e76","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 01:20:57.287566 kubelet[2612]: I0430 01:20:57.287492 2612 topology_manager.go:138] "Creating topology manager with none policy" 
Apr 30 01:20:57.287566 kubelet[2612]: I0430 01:20:57.287504 2612 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 01:20:57.287817 kubelet[2612]: I0430 01:20:57.287786 2612 state_mem.go:36] "Initialized new in-memory state store" Apr 30 01:20:57.290107 kubelet[2612]: I0430 01:20:57.289162 2612 kubelet.go:400] "Attempting to sync node with API server" Apr 30 01:20:57.290107 kubelet[2612]: I0430 01:20:57.289196 2612 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 01:20:57.290107 kubelet[2612]: I0430 01:20:57.289466 2612 kubelet.go:312] "Adding apiserver pod source" Apr 30 01:20:57.290107 kubelet[2612]: I0430 01:20:57.289558 2612 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 01:20:57.290965 kubelet[2612]: W0430 01:20:57.290906 2612 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://78.47.197.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-5-cafd7e9e76&limit=500&resourceVersion=0": dial tcp 78.47.197.16:6443: connect: connection refused Apr 30 01:20:57.291058 kubelet[2612]: E0430 01:20:57.290972 2612 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://78.47.197.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-5-cafd7e9e76&limit=500&resourceVersion=0": dial tcp 78.47.197.16:6443: connect: connection refused Apr 30 01:20:57.291104 kubelet[2612]: W0430 01:20:57.291055 2612 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://78.47.197.16:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.197.16:6443: connect: connection refused Apr 30 01:20:57.291104 kubelet[2612]: E0430 01:20:57.291082 2612 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://78.47.197.16:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.197.16:6443: connect: connection refused Apr 30 01:20:57.291601 kubelet[2612]: I0430 01:20:57.291572 2612 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 01:20:57.291976 kubelet[2612]: I0430 01:20:57.291951 2612 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 01:20:57.292099 kubelet[2612]: W0430 01:20:57.292080 2612 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 30 01:20:57.293277 kubelet[2612]: I0430 01:20:57.293249 2612 server.go:1264] "Started kubelet" Apr 30 01:20:57.297905 kubelet[2612]: I0430 01:20:57.297865 2612 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 01:20:57.302097 kubelet[2612]: E0430 01:20:57.301801 2612 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://78.47.197.16:6443/api/v1/namespaces/default/events\": dial tcp 78.47.197.16:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-5-cafd7e9e76.183af3f9394cbe45 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-5-cafd7e9e76,UID:ci-4081-3-3-5-cafd7e9e76,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-5-cafd7e9e76,},FirstTimestamp:2025-04-30 01:20:57.293225541 +0000 UTC m=+0.731421617,LastTimestamp:2025-04-30 01:20:57.293225541 +0000 UTC m=+0.731421617,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-5-cafd7e9e76,}" Apr 30 01:20:57.307078 kubelet[2612]: I0430 01:20:57.305133 2612 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 01:20:57.307078 
kubelet[2612]: I0430 01:20:57.306116 2612 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 01:20:57.308093 kubelet[2612]: I0430 01:20:57.308064 2612 server.go:455] "Adding debug handlers to kubelet server" Apr 30 01:20:57.309150 kubelet[2612]: I0430 01:20:57.309120 2612 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 01:20:57.309238 kubelet[2612]: I0430 01:20:57.309198 2612 reconciler.go:26] "Reconciler: start to sync state" Apr 30 01:20:57.309472 kubelet[2612]: I0430 01:20:57.309421 2612 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 01:20:57.309851 kubelet[2612]: I0430 01:20:57.309819 2612 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 01:20:57.310354 kubelet[2612]: E0430 01:20:57.310314 2612 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.197.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-5-cafd7e9e76?timeout=10s\": dial tcp 78.47.197.16:6443: connect: connection refused" interval="200ms" Apr 30 01:20:57.311515 kubelet[2612]: I0430 01:20:57.311485 2612 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 01:20:57.313950 kubelet[2612]: W0430 01:20:57.313877 2612 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://78.47.197.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.197.16:6443: connect: connection refused Apr 30 01:20:57.314079 kubelet[2612]: E0430 01:20:57.314066 2612 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://78.47.197.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.197.16:6443: connect: connection refused Apr 30 01:20:57.314404 kubelet[2612]: I0430 01:20:57.314387 2612 factory.go:221] Registration of the containerd container factory successfully Apr 30 01:20:57.314498 kubelet[2612]: I0430 01:20:57.314487 2612 factory.go:221] Registration of the systemd container factory successfully Apr 30 01:20:57.324403 kubelet[2612]: I0430 01:20:57.324332 2612 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 01:20:57.327077 kubelet[2612]: I0430 01:20:57.326065 2612 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 01:20:57.327077 kubelet[2612]: I0430 01:20:57.326244 2612 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 01:20:57.327077 kubelet[2612]: I0430 01:20:57.326269 2612 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 01:20:57.327077 kubelet[2612]: E0430 01:20:57.326327 2612 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 01:20:57.331063 kubelet[2612]: E0430 01:20:57.330952 2612 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 01:20:57.339759 kubelet[2612]: W0430 01:20:57.339544 2612 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://78.47.197.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.197.16:6443: connect: connection refused Apr 30 01:20:57.339759 kubelet[2612]: E0430 01:20:57.339625 2612 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://78.47.197.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.197.16:6443: connect: connection refused Apr 30 01:20:57.366601 kubelet[2612]: I0430 01:20:57.366568 2612 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 01:20:57.366601 kubelet[2612]: I0430 01:20:57.366589 2612 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 01:20:57.366601 kubelet[2612]: I0430 01:20:57.366610 2612 state_mem.go:36] "Initialized new in-memory state store" Apr 30 01:20:57.369957 kubelet[2612]: I0430 01:20:57.369570 2612 policy_none.go:49] "None policy: Start" Apr 30 01:20:57.370326 kubelet[2612]: I0430 01:20:57.370304 2612 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 01:20:57.370326 kubelet[2612]: I0430 01:20:57.370337 2612 state_mem.go:35] "Initializing new in-memory state store" Apr 30 01:20:57.376520 kubelet[2612]: I0430 01:20:57.376477 2612 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 01:20:57.376889 kubelet[2612]: I0430 01:20:57.376707 2612 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 01:20:57.376889 kubelet[2612]: I0430 01:20:57.376879 2612 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 01:20:57.380576 kubelet[2612]: E0430 
01:20:57.380512 2612 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-5-cafd7e9e76\" not found" Apr 30 01:20:57.407825 kubelet[2612]: I0430 01:20:57.407793 2612 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:20:57.408576 kubelet[2612]: E0430 01:20:57.408521 2612 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.197.16:6443/api/v1/nodes\": dial tcp 78.47.197.16:6443: connect: connection refused" node="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:20:57.427909 kubelet[2612]: I0430 01:20:57.427261 2612 topology_manager.go:215] "Topology Admit Handler" podUID="4d07f70aa0952ca4fa7f61fec4f71b3a" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-3-5-cafd7e9e76" Apr 30 01:20:57.430359 kubelet[2612]: I0430 01:20:57.430053 2612 topology_manager.go:215] "Topology Admit Handler" podUID="a6dd880b40e09859de204b214094ef97" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-3-5-cafd7e9e76" Apr 30 01:20:57.432515 kubelet[2612]: I0430 01:20:57.432481 2612 topology_manager.go:215] "Topology Admit Handler" podUID="8f061fd996f3b73baf935e0530ba4fe6" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-3-5-cafd7e9e76" Apr 30 01:20:57.511764 kubelet[2612]: E0430 01:20:57.511599 2612 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.197.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-5-cafd7e9e76?timeout=10s\": dial tcp 78.47.197.16:6443: connect: connection refused" interval="400ms" Apr 30 01:20:57.610952 kubelet[2612]: I0430 01:20:57.610622 2612 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4d07f70aa0952ca4fa7f61fec4f71b3a-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-5-cafd7e9e76\" (UID: 
\"4d07f70aa0952ca4fa7f61fec4f71b3a\") " pod="kube-system/kube-apiserver-ci-4081-3-3-5-cafd7e9e76" Apr 30 01:20:57.610952 kubelet[2612]: I0430 01:20:57.610669 2612 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4d07f70aa0952ca4fa7f61fec4f71b3a-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-5-cafd7e9e76\" (UID: \"4d07f70aa0952ca4fa7f61fec4f71b3a\") " pod="kube-system/kube-apiserver-ci-4081-3-3-5-cafd7e9e76" Apr 30 01:20:57.610952 kubelet[2612]: I0430 01:20:57.610692 2612 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a6dd880b40e09859de204b214094ef97-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-5-cafd7e9e76\" (UID: \"a6dd880b40e09859de204b214094ef97\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-5-cafd7e9e76" Apr 30 01:20:57.610952 kubelet[2612]: I0430 01:20:57.610710 2612 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8f061fd996f3b73baf935e0530ba4fe6-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-5-cafd7e9e76\" (UID: \"8f061fd996f3b73baf935e0530ba4fe6\") " pod="kube-system/kube-scheduler-ci-4081-3-3-5-cafd7e9e76" Apr 30 01:20:57.610952 kubelet[2612]: I0430 01:20:57.610733 2612 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4d07f70aa0952ca4fa7f61fec4f71b3a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-5-cafd7e9e76\" (UID: \"4d07f70aa0952ca4fa7f61fec4f71b3a\") " pod="kube-system/kube-apiserver-ci-4081-3-3-5-cafd7e9e76" Apr 30 01:20:57.611424 kubelet[2612]: I0430 01:20:57.610751 2612 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/a6dd880b40e09859de204b214094ef97-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-5-cafd7e9e76\" (UID: \"a6dd880b40e09859de204b214094ef97\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-5-cafd7e9e76" Apr 30 01:20:57.611424 kubelet[2612]: I0430 01:20:57.610769 2612 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a6dd880b40e09859de204b214094ef97-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-5-cafd7e9e76\" (UID: \"a6dd880b40e09859de204b214094ef97\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-5-cafd7e9e76" Apr 30 01:20:57.611424 kubelet[2612]: I0430 01:20:57.610787 2612 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a6dd880b40e09859de204b214094ef97-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-5-cafd7e9e76\" (UID: \"a6dd880b40e09859de204b214094ef97\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-5-cafd7e9e76" Apr 30 01:20:57.611424 kubelet[2612]: I0430 01:20:57.610808 2612 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a6dd880b40e09859de204b214094ef97-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-5-cafd7e9e76\" (UID: \"a6dd880b40e09859de204b214094ef97\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-5-cafd7e9e76" Apr 30 01:20:57.613300 kubelet[2612]: I0430 01:20:57.612525 2612 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:20:57.613300 kubelet[2612]: E0430 01:20:57.612900 2612 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.197.16:6443/api/v1/nodes\": dial tcp 78.47.197.16:6443: connect: connection refused" 
node="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:20:57.740284 containerd[1596]: time="2025-04-30T01:20:57.740181954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-5-cafd7e9e76,Uid:4d07f70aa0952ca4fa7f61fec4f71b3a,Namespace:kube-system,Attempt:0,}" Apr 30 01:20:57.744653 containerd[1596]: time="2025-04-30T01:20:57.744183236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-5-cafd7e9e76,Uid:8f061fd996f3b73baf935e0530ba4fe6,Namespace:kube-system,Attempt:0,}" Apr 30 01:20:57.748030 containerd[1596]: time="2025-04-30T01:20:57.747934591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-5-cafd7e9e76,Uid:a6dd880b40e09859de204b214094ef97,Namespace:kube-system,Attempt:0,}" Apr 30 01:20:57.913178 kubelet[2612]: E0430 01:20:57.912918 2612 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.197.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-5-cafd7e9e76?timeout=10s\": dial tcp 78.47.197.16:6443: connect: connection refused" interval="800ms" Apr 30 01:20:58.016362 kubelet[2612]: I0430 01:20:58.015916 2612 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:20:58.016768 kubelet[2612]: E0430 01:20:58.016724 2612 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.197.16:6443/api/v1/nodes\": dial tcp 78.47.197.16:6443: connect: connection refused" node="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:20:58.168931 kubelet[2612]: W0430 01:20:58.168758 2612 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://78.47.197.16:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.197.16:6443: connect: connection refused Apr 30 01:20:58.168931 kubelet[2612]: E0430 01:20:58.168830 2612 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Service: failed to list *v1.Service: Get "https://78.47.197.16:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.197.16:6443: connect: connection refused Apr 30 01:20:58.297573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2789666021.mount: Deactivated successfully. Apr 30 01:20:58.306291 containerd[1596]: time="2025-04-30T01:20:58.306208355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 01:20:58.307585 containerd[1596]: time="2025-04-30T01:20:58.307515955Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 01:20:58.309346 containerd[1596]: time="2025-04-30T01:20:58.309263008Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 01:20:58.310983 containerd[1596]: time="2025-04-30T01:20:58.310919658Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 01:20:58.312356 containerd[1596]: time="2025-04-30T01:20:58.312005131Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 01:20:58.312356 containerd[1596]: time="2025-04-30T01:20:58.312212298Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Apr 30 01:20:58.313522 containerd[1596]: time="2025-04-30T01:20:58.313476816Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 01:20:58.315897 containerd[1596]: 
time="2025-04-30T01:20:58.315828447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 01:20:58.320040 containerd[1596]: time="2025-04-30T01:20:58.318396926Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 574.076885ms" Apr 30 01:20:58.320210 containerd[1596]: time="2025-04-30T01:20:58.319999214Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 579.673456ms" Apr 30 01:20:58.321123 containerd[1596]: time="2025-04-30T01:20:58.321087167Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 572.969291ms" Apr 30 01:20:58.352855 kubelet[2612]: W0430 01:20:58.352777 2612 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://78.47.197.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.197.16:6443: connect: connection refused Apr 30 01:20:58.353106 kubelet[2612]: E0430 01:20:58.353069 2612 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: 
failed to list *v1.CSIDriver: Get "https://78.47.197.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.197.16:6443: connect: connection refused Apr 30 01:20:58.420931 kubelet[2612]: W0430 01:20:58.420734 2612 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://78.47.197.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.197.16:6443: connect: connection refused Apr 30 01:20:58.420931 kubelet[2612]: E0430 01:20:58.420814 2612 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://78.47.197.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.197.16:6443: connect: connection refused Apr 30 01:20:58.454114 containerd[1596]: time="2025-04-30T01:20:58.453654115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:20:58.454114 containerd[1596]: time="2025-04-30T01:20:58.453784959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:20:58.454114 containerd[1596]: time="2025-04-30T01:20:58.453805000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:20:58.454768 containerd[1596]: time="2025-04-30T01:20:58.454513741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:20:58.454768 containerd[1596]: time="2025-04-30T01:20:58.454574743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:20:58.454768 containerd[1596]: time="2025-04-30T01:20:58.454593544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:20:58.454768 containerd[1596]: time="2025-04-30T01:20:58.454699747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:20:58.455386 containerd[1596]: time="2025-04-30T01:20:58.455135560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:20:58.455386 containerd[1596]: time="2025-04-30T01:20:58.455190482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:20:58.455386 containerd[1596]: time="2025-04-30T01:20:58.455240203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:20:58.455386 containerd[1596]: time="2025-04-30T01:20:58.455331326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:20:58.456265 containerd[1596]: time="2025-04-30T01:20:58.454007446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:20:58.534463 containerd[1596]: time="2025-04-30T01:20:58.534416929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-5-cafd7e9e76,Uid:4d07f70aa0952ca4fa7f61fec4f71b3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb811e82977c5ac1dfd87a7b779a06334f5b3bb90da2b47217bb9a559a06741b\"" Apr 30 01:20:58.543431 containerd[1596]: time="2025-04-30T01:20:58.543218117Z" level=info msg="CreateContainer within sandbox \"cb811e82977c5ac1dfd87a7b779a06334f5b3bb90da2b47217bb9a559a06741b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 01:20:58.546120 containerd[1596]: time="2025-04-30T01:20:58.545773354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-5-cafd7e9e76,Uid:8f061fd996f3b73baf935e0530ba4fe6,Namespace:kube-system,Attempt:0,} returns sandbox id \"48a9ce14a9a0c0ed1202278e9a22bddefb92807358bd05ed654ebb7f7711baad\"" Apr 30 01:20:58.556389 containerd[1596]: time="2025-04-30T01:20:58.556102388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-5-cafd7e9e76,Uid:a6dd880b40e09859de204b214094ef97,Namespace:kube-system,Attempt:0,} returns sandbox id \"034cd43b037ccf7bb04cfd272368f2088e00cf7f518b8f7f2391dbb872857945\"" Apr 30 01:20:58.560282 containerd[1596]: time="2025-04-30T01:20:58.559794820Z" level=info msg="CreateContainer within sandbox \"48a9ce14a9a0c0ed1202278e9a22bddefb92807358bd05ed654ebb7f7711baad\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 01:20:58.562758 containerd[1596]: time="2025-04-30T01:20:58.562713269Z" level=info msg="CreateContainer within sandbox \"034cd43b037ccf7bb04cfd272368f2088e00cf7f518b8f7f2391dbb872857945\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 01:20:58.581499 containerd[1596]: time="2025-04-30T01:20:58.581449398Z" level=info msg="CreateContainer within sandbox 
\"cb811e82977c5ac1dfd87a7b779a06334f5b3bb90da2b47217bb9a559a06741b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8ea1e10c9205bb1730c448041715d5c8eb6b7a6728ecbb5e64e302cc937faceb\"" Apr 30 01:20:58.583114 containerd[1596]: time="2025-04-30T01:20:58.582958764Z" level=info msg="StartContainer for \"8ea1e10c9205bb1730c448041715d5c8eb6b7a6728ecbb5e64e302cc937faceb\"" Apr 30 01:20:58.588980 containerd[1596]: time="2025-04-30T01:20:58.588852423Z" level=info msg="CreateContainer within sandbox \"48a9ce14a9a0c0ed1202278e9a22bddefb92807358bd05ed654ebb7f7711baad\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"837738ff957ff8201da680dfebc4f5435e2c365d91f43c4e953505df322783fd\"" Apr 30 01:20:58.589669 containerd[1596]: time="2025-04-30T01:20:58.589567765Z" level=info msg="StartContainer for \"837738ff957ff8201da680dfebc4f5435e2c365d91f43c4e953505df322783fd\"" Apr 30 01:20:58.591867 containerd[1596]: time="2025-04-30T01:20:58.591820593Z" level=info msg="CreateContainer within sandbox \"034cd43b037ccf7bb04cfd272368f2088e00cf7f518b8f7f2391dbb872857945\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4f88d9cf20ffdd6334426d34104c315811132aa495f8d6033074fd8da124879d\"" Apr 30 01:20:58.594773 containerd[1596]: time="2025-04-30T01:20:58.593703251Z" level=info msg="StartContainer for \"4f88d9cf20ffdd6334426d34104c315811132aa495f8d6033074fd8da124879d\"" Apr 30 01:20:58.627143 kubelet[2612]: W0430 01:20:58.627070 2612 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://78.47.197.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-5-cafd7e9e76&limit=500&resourceVersion=0": dial tcp 78.47.197.16:6443: connect: connection refused Apr 30 01:20:58.628288 kubelet[2612]: E0430 01:20:58.627490 2612 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://78.47.197.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-5-cafd7e9e76&limit=500&resourceVersion=0": dial tcp 78.47.197.16:6443: connect: connection refused Apr 30 01:20:58.691901 containerd[1596]: time="2025-04-30T01:20:58.691775511Z" level=info msg="StartContainer for \"8ea1e10c9205bb1730c448041715d5c8eb6b7a6728ecbb5e64e302cc937faceb\" returns successfully" Apr 30 01:20:58.706096 containerd[1596]: time="2025-04-30T01:20:58.703925480Z" level=info msg="StartContainer for \"4f88d9cf20ffdd6334426d34104c315811132aa495f8d6033074fd8da124879d\" returns successfully" Apr 30 01:20:58.713647 kubelet[2612]: E0430 01:20:58.713579 2612 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.197.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-5-cafd7e9e76?timeout=10s\": dial tcp 78.47.197.16:6443: connect: connection refused" interval="1.6s" Apr 30 01:20:58.714061 containerd[1596]: time="2025-04-30T01:20:58.713982185Z" level=info msg="StartContainer for \"837738ff957ff8201da680dfebc4f5435e2c365d91f43c4e953505df322783fd\" returns successfully" Apr 30 01:20:58.822205 kubelet[2612]: I0430 01:20:58.822127 2612 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:01.036990 kubelet[2612]: E0430 01:21:01.036736 2612 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-3-5-cafd7e9e76\" not found" node="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:01.191641 kubelet[2612]: I0430 01:21:01.191475 2612 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:01.296101 kubelet[2612]: I0430 01:21:01.295955 2612 apiserver.go:52] "Watching apiserver" Apr 30 01:21:01.311049 kubelet[2612]: I0430 01:21:01.310139 2612 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 01:21:03.506174 systemd[1]: Reloading requested from 
client PID 2882 ('systemctl') (unit session-7.scope)... Apr 30 01:21:03.506190 systemd[1]: Reloading... Apr 30 01:21:03.640048 zram_generator::config[2931]: No configuration found. Apr 30 01:21:03.745190 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 01:21:03.824782 systemd[1]: Reloading finished in 318 ms. Apr 30 01:21:03.862851 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:21:03.863581 kubelet[2612]: E0430 01:21:03.862872 2612 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4081-3-3-5-cafd7e9e76.183af3f9394cbe45 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-5-cafd7e9e76,UID:ci-4081-3-3-5-cafd7e9e76,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-5-cafd7e9e76,},FirstTimestamp:2025-04-30 01:20:57.293225541 +0000 UTC m=+0.731421617,LastTimestamp:2025-04-30 01:20:57.293225541 +0000 UTC m=+0.731421617,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-5-cafd7e9e76,}" Apr 30 01:21:03.880665 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 01:21:03.882522 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 01:21:03.889525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 01:21:04.038282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 01:21:04.052042 (kubelet)[2977]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 01:21:04.105447 kubelet[2977]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 01:21:04.105447 kubelet[2977]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 01:21:04.105447 kubelet[2977]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 01:21:04.105824 kubelet[2977]: I0430 01:21:04.105426 2977 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 01:21:04.111863 kubelet[2977]: I0430 01:21:04.111826 2977 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 01:21:04.111863 kubelet[2977]: I0430 01:21:04.111855 2977 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 01:21:04.112119 kubelet[2977]: I0430 01:21:04.112101 2977 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 01:21:04.113863 kubelet[2977]: I0430 01:21:04.113802 2977 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 01:21:04.115593 kubelet[2977]: I0430 01:21:04.115549 2977 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 01:21:04.125803 kubelet[2977]: I0430 01:21:04.124241 2977 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 01:21:04.125803 kubelet[2977]: I0430 01:21:04.124823 2977 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 01:21:04.125803 kubelet[2977]: I0430 01:21:04.124863 2977 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-5-cafd7e9e76","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 01:21:04.125803 kubelet[2977]: I0430 01:21:04.125218 2977 topology_manager.go:138] "Creating topology manager with none policy" 
Apr 30 01:21:04.126191 kubelet[2977]: I0430 01:21:04.125231 2977 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 01:21:04.126191 kubelet[2977]: I0430 01:21:04.125304 2977 state_mem.go:36] "Initialized new in-memory state store" Apr 30 01:21:04.126191 kubelet[2977]: I0430 01:21:04.125430 2977 kubelet.go:400] "Attempting to sync node with API server" Apr 30 01:21:04.126191 kubelet[2977]: I0430 01:21:04.125444 2977 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 01:21:04.126191 kubelet[2977]: I0430 01:21:04.125481 2977 kubelet.go:312] "Adding apiserver pod source" Apr 30 01:21:04.126191 kubelet[2977]: I0430 01:21:04.125500 2977 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 01:21:04.134629 kubelet[2977]: I0430 01:21:04.134355 2977 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 01:21:04.136278 kubelet[2977]: I0430 01:21:04.136246 2977 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 01:21:04.136709 kubelet[2977]: I0430 01:21:04.136689 2977 server.go:1264] "Started kubelet" Apr 30 01:21:04.139103 kubelet[2977]: I0430 01:21:04.139072 2977 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 01:21:04.148078 kubelet[2977]: I0430 01:21:04.147490 2977 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 01:21:04.151216 kubelet[2977]: I0430 01:21:04.151175 2977 server.go:455] "Adding debug handlers to kubelet server" Apr 30 01:21:04.153499 kubelet[2977]: I0430 01:21:04.153439 2977 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 01:21:04.155406 kubelet[2977]: I0430 01:21:04.154505 2977 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 01:21:04.158194 kubelet[2977]: I0430 01:21:04.158167 2977 
volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 01:21:04.164190 kubelet[2977]: I0430 01:21:04.164161 2977 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 01:21:04.164484 kubelet[2977]: I0430 01:21:04.164471 2977 reconciler.go:26] "Reconciler: start to sync state" Apr 30 01:21:04.168291 kubelet[2977]: I0430 01:21:04.168255 2977 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 01:21:04.171247 kubelet[2977]: I0430 01:21:04.171217 2977 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 01:21:04.171402 kubelet[2977]: I0430 01:21:04.171391 2977 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 01:21:04.171465 kubelet[2977]: I0430 01:21:04.171458 2977 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 01:21:04.171853 kubelet[2977]: E0430 01:21:04.171540 2977 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 01:21:04.179626 kubelet[2977]: I0430 01:21:04.179598 2977 factory.go:221] Registration of the systemd container factory successfully Apr 30 01:21:04.180713 kubelet[2977]: I0430 01:21:04.179961 2977 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 01:21:04.181562 kubelet[2977]: E0430 01:21:04.181452 2977 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 01:21:04.186453 kubelet[2977]: I0430 01:21:04.186410 2977 factory.go:221] Registration of the containerd container factory successfully Apr 30 01:21:04.258514 kubelet[2977]: I0430 01:21:04.258479 2977 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 01:21:04.258514 kubelet[2977]: I0430 01:21:04.258499 2977 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 01:21:04.258514 kubelet[2977]: I0430 01:21:04.258522 2977 state_mem.go:36] "Initialized new in-memory state store" Apr 30 01:21:04.258792 kubelet[2977]: I0430 01:21:04.258678 2977 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 01:21:04.258792 kubelet[2977]: I0430 01:21:04.258688 2977 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 01:21:04.258792 kubelet[2977]: I0430 01:21:04.258705 2977 policy_none.go:49] "None policy: Start" Apr 30 01:21:04.259877 kubelet[2977]: I0430 01:21:04.259778 2977 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 01:21:04.259877 kubelet[2977]: I0430 01:21:04.259809 2977 state_mem.go:35] "Initializing new in-memory state store" Apr 30 01:21:04.260281 kubelet[2977]: I0430 01:21:04.260141 2977 state_mem.go:75] "Updated machine memory state" Apr 30 01:21:04.264091 kubelet[2977]: I0430 01:21:04.263041 2977 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:04.266469 kubelet[2977]: I0430 01:21:04.265227 2977 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 01:21:04.266469 kubelet[2977]: I0430 01:21:04.265425 2977 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 01:21:04.266469 kubelet[2977]: I0430 01:21:04.265529 2977 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 01:21:04.271722 kubelet[2977]: I0430 
01:21:04.271664 2977 topology_manager.go:215] "Topology Admit Handler" podUID="a6dd880b40e09859de204b214094ef97" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:04.271823 kubelet[2977]: I0430 01:21:04.271787 2977 topology_manager.go:215] "Topology Admit Handler" podUID="8f061fd996f3b73baf935e0530ba4fe6" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:04.271876 kubelet[2977]: I0430 01:21:04.271827 2977 topology_manager.go:215] "Topology Admit Handler" podUID="4d07f70aa0952ca4fa7f61fec4f71b3a" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:04.296829 kubelet[2977]: I0430 01:21:04.296447 2977 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:04.296829 kubelet[2977]: I0430 01:21:04.296551 2977 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:04.365900 kubelet[2977]: I0430 01:21:04.365741 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4d07f70aa0952ca4fa7f61fec4f71b3a-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-5-cafd7e9e76\" (UID: \"4d07f70aa0952ca4fa7f61fec4f71b3a\") " pod="kube-system/kube-apiserver-ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:04.365900 kubelet[2977]: I0430 01:21:04.365789 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a6dd880b40e09859de204b214094ef97-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-5-cafd7e9e76\" (UID: \"a6dd880b40e09859de204b214094ef97\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:04.365900 kubelet[2977]: I0430 01:21:04.365813 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/8f061fd996f3b73baf935e0530ba4fe6-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-5-cafd7e9e76\" (UID: \"8f061fd996f3b73baf935e0530ba4fe6\") " pod="kube-system/kube-scheduler-ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:04.365900 kubelet[2977]: I0430 01:21:04.365830 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4d07f70aa0952ca4fa7f61fec4f71b3a-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-5-cafd7e9e76\" (UID: \"4d07f70aa0952ca4fa7f61fec4f71b3a\") " pod="kube-system/kube-apiserver-ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:04.365900 kubelet[2977]: I0430 01:21:04.365851 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4d07f70aa0952ca4fa7f61fec4f71b3a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-5-cafd7e9e76\" (UID: \"4d07f70aa0952ca4fa7f61fec4f71b3a\") " pod="kube-system/kube-apiserver-ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:04.366825 kubelet[2977]: I0430 01:21:04.365865 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a6dd880b40e09859de204b214094ef97-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-5-cafd7e9e76\" (UID: \"a6dd880b40e09859de204b214094ef97\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:04.366825 kubelet[2977]: I0430 01:21:04.366579 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a6dd880b40e09859de204b214094ef97-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-5-cafd7e9e76\" (UID: \"a6dd880b40e09859de204b214094ef97\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:04.366825 kubelet[2977]: I0430 
01:21:04.366713 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a6dd880b40e09859de204b214094ef97-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-5-cafd7e9e76\" (UID: \"a6dd880b40e09859de204b214094ef97\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:04.366825 kubelet[2977]: I0430 01:21:04.366737 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a6dd880b40e09859de204b214094ef97-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-5-cafd7e9e76\" (UID: \"a6dd880b40e09859de204b214094ef97\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:05.136087 kubelet[2977]: I0430 01:21:05.135969 2977 apiserver.go:52] "Watching apiserver" Apr 30 01:21:05.165228 kubelet[2977]: I0430 01:21:05.165183 2977 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 01:21:05.198430 kubelet[2977]: I0430 01:21:05.198350 2977 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-5-cafd7e9e76" podStartSLOduration=1.198329607 podStartE2EDuration="1.198329607s" podCreationTimestamp="2025-04-30 01:21:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 01:21:05.196524274 +0000 UTC m=+1.140216669" watchObservedRunningTime="2025-04-30 01:21:05.198329607 +0000 UTC m=+1.142022042" Apr 30 01:21:05.217435 kubelet[2977]: I0430 01:21:05.217308 2977 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-5-cafd7e9e76" podStartSLOduration=1.217285124 podStartE2EDuration="1.217285124s" podCreationTimestamp="2025-04-30 01:21:04 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 01:21:05.216976515 +0000 UTC m=+1.160668990" watchObservedRunningTime="2025-04-30 01:21:05.217285124 +0000 UTC m=+1.160977559" Apr 30 01:21:05.248576 kubelet[2977]: I0430 01:21:05.248509 2977 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-5-cafd7e9e76" podStartSLOduration=1.248489441 podStartE2EDuration="1.248489441s" podCreationTimestamp="2025-04-30 01:21:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 01:21:05.233674246 +0000 UTC m=+1.177366681" watchObservedRunningTime="2025-04-30 01:21:05.248489441 +0000 UTC m=+1.192181876" Apr 30 01:21:09.548732 sudo[2037]: pam_unix(sudo:session): session closed for user root Apr 30 01:21:09.709176 sshd[2033]: pam_unix(sshd:session): session closed for user core Apr 30 01:21:09.715810 systemd[1]: sshd@6-78.47.197.16:22-139.178.68.195:44178.service: Deactivated successfully. Apr 30 01:21:09.719379 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 01:21:09.719831 systemd-logind[1565]: Session 7 logged out. Waiting for processes to exit. Apr 30 01:21:09.722832 systemd-logind[1565]: Removed session 7. Apr 30 01:21:18.688429 kubelet[2977]: I0430 01:21:18.688164 2977 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 01:21:18.690727 containerd[1596]: time="2025-04-30T01:21:18.690262138Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 30 01:21:18.691716 kubelet[2977]: I0430 01:21:18.690651 2977 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 01:21:19.339767 kubelet[2977]: I0430 01:21:19.339724 2977 topology_manager.go:215] "Topology Admit Handler" podUID="201e5452-04d7-4340-a822-3d932e4f64b5" podNamespace="kube-system" podName="kube-proxy-hdzbs" Apr 30 01:21:19.374888 kubelet[2977]: I0430 01:21:19.374743 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/201e5452-04d7-4340-a822-3d932e4f64b5-lib-modules\") pod \"kube-proxy-hdzbs\" (UID: \"201e5452-04d7-4340-a822-3d932e4f64b5\") " pod="kube-system/kube-proxy-hdzbs" Apr 30 01:21:19.374888 kubelet[2977]: I0430 01:21:19.374789 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5mzm\" (UniqueName: \"kubernetes.io/projected/201e5452-04d7-4340-a822-3d932e4f64b5-kube-api-access-p5mzm\") pod \"kube-proxy-hdzbs\" (UID: \"201e5452-04d7-4340-a822-3d932e4f64b5\") " pod="kube-system/kube-proxy-hdzbs" Apr 30 01:21:19.374888 kubelet[2977]: I0430 01:21:19.374812 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/201e5452-04d7-4340-a822-3d932e4f64b5-kube-proxy\") pod \"kube-proxy-hdzbs\" (UID: \"201e5452-04d7-4340-a822-3d932e4f64b5\") " pod="kube-system/kube-proxy-hdzbs" Apr 30 01:21:19.374888 kubelet[2977]: I0430 01:21:19.374830 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/201e5452-04d7-4340-a822-3d932e4f64b5-xtables-lock\") pod \"kube-proxy-hdzbs\" (UID: \"201e5452-04d7-4340-a822-3d932e4f64b5\") " pod="kube-system/kube-proxy-hdzbs" Apr 30 01:21:19.620781 kubelet[2977]: I0430 01:21:19.620638 2977 topology_manager.go:215] "Topology 
Admit Handler" podUID="bf3e5f97-90fe-4f42-aadf-674debb87694" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-4pwdr" Apr 30 01:21:19.652427 containerd[1596]: time="2025-04-30T01:21:19.652171683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hdzbs,Uid:201e5452-04d7-4340-a822-3d932e4f64b5,Namespace:kube-system,Attempt:0,}" Apr 30 01:21:19.677427 kubelet[2977]: I0430 01:21:19.677374 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzbth\" (UniqueName: \"kubernetes.io/projected/bf3e5f97-90fe-4f42-aadf-674debb87694-kube-api-access-dzbth\") pod \"tigera-operator-797db67f8-4pwdr\" (UID: \"bf3e5f97-90fe-4f42-aadf-674debb87694\") " pod="tigera-operator/tigera-operator-797db67f8-4pwdr" Apr 30 01:21:19.677835 kubelet[2977]: I0430 01:21:19.677683 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bf3e5f97-90fe-4f42-aadf-674debb87694-var-lib-calico\") pod \"tigera-operator-797db67f8-4pwdr\" (UID: \"bf3e5f97-90fe-4f42-aadf-674debb87694\") " pod="tigera-operator/tigera-operator-797db67f8-4pwdr" Apr 30 01:21:19.682648 containerd[1596]: time="2025-04-30T01:21:19.682105519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:21:19.683404 containerd[1596]: time="2025-04-30T01:21:19.682664175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:21:19.683404 containerd[1596]: time="2025-04-30T01:21:19.682688176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:19.683404 containerd[1596]: time="2025-04-30T01:21:19.682795699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:19.720103 containerd[1596]: time="2025-04-30T01:21:19.720064180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hdzbs,Uid:201e5452-04d7-4340-a822-3d932e4f64b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"70492d0f6992d368c4c5cae6d9a8a603322f0f52930df759d796f5a8c92e17bd\"" Apr 30 01:21:19.724855 containerd[1596]: time="2025-04-30T01:21:19.724811033Z" level=info msg="CreateContainer within sandbox \"70492d0f6992d368c4c5cae6d9a8a603322f0f52930df759d796f5a8c92e17bd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 01:21:19.742929 containerd[1596]: time="2025-04-30T01:21:19.742795575Z" level=info msg="CreateContainer within sandbox \"70492d0f6992d368c4c5cae6d9a8a603322f0f52930df759d796f5a8c92e17bd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6acb3c976f0e01e969cf35a9261f62b71fd7ab49ec92c546dade42915eaefd33\"" Apr 30 01:21:19.744771 containerd[1596]: time="2025-04-30T01:21:19.743618278Z" level=info msg="StartContainer for \"6acb3c976f0e01e969cf35a9261f62b71fd7ab49ec92c546dade42915eaefd33\"" Apr 30 01:21:19.806420 containerd[1596]: time="2025-04-30T01:21:19.806285109Z" level=info msg="StartContainer for \"6acb3c976f0e01e969cf35a9261f62b71fd7ab49ec92c546dade42915eaefd33\" returns successfully" Apr 30 01:21:19.926397 containerd[1596]: time="2025-04-30T01:21:19.926288623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-4pwdr,Uid:bf3e5f97-90fe-4f42-aadf-674debb87694,Namespace:tigera-operator,Attempt:0,}" Apr 30 01:21:19.954110 containerd[1596]: time="2025-04-30T01:21:19.953586305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:21:19.954110 containerd[1596]: time="2025-04-30T01:21:19.953654707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:21:19.954110 containerd[1596]: time="2025-04-30T01:21:19.953669868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:19.954110 containerd[1596]: time="2025-04-30T01:21:19.953780071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:20.012700 containerd[1596]: time="2025-04-30T01:21:20.012534712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-4pwdr,Uid:bf3e5f97-90fe-4f42-aadf-674debb87694,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"aafa1b936a773294918eee6366a0e47c956acc47b348f8bf7521f9440924a133\"" Apr 30 01:21:20.016366 containerd[1596]: time="2025-04-30T01:21:20.016323257Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" Apr 30 01:21:20.266514 kubelet[2977]: I0430 01:21:20.265866 2977 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hdzbs" podStartSLOduration=1.26584125 podStartE2EDuration="1.26584125s" podCreationTimestamp="2025-04-30 01:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 01:21:20.265228352 +0000 UTC m=+16.208920787" watchObservedRunningTime="2025-04-30 01:21:20.26584125 +0000 UTC m=+16.209533685" Apr 30 01:21:20.495636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount231808982.mount: Deactivated successfully. Apr 30 01:21:22.058400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount332220835.mount: Deactivated successfully. 
Apr 30 01:21:24.091446 containerd[1596]: time="2025-04-30T01:21:24.091396595Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:24.093990 containerd[1596]: time="2025-04-30T01:21:24.093883064Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" Apr 30 01:21:24.096052 containerd[1596]: time="2025-04-30T01:21:24.095301263Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:24.100988 containerd[1596]: time="2025-04-30T01:21:24.100927938Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:24.102107 containerd[1596]: time="2025-04-30T01:21:24.102053569Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 4.085471945s" Apr 30 01:21:24.102283 containerd[1596]: time="2025-04-30T01:21:24.102101411Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" Apr 30 01:21:24.108825 containerd[1596]: time="2025-04-30T01:21:24.108768834Z" level=info msg="CreateContainer within sandbox \"aafa1b936a773294918eee6366a0e47c956acc47b348f8bf7521f9440924a133\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 30 01:21:24.129279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4030321559.mount: Deactivated successfully. 
Apr 30 01:21:24.138690 containerd[1596]: time="2025-04-30T01:21:24.138477813Z" level=info msg="CreateContainer within sandbox \"aafa1b936a773294918eee6366a0e47c956acc47b348f8bf7521f9440924a133\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4ff1553d0be0c0ab4595f9a3576ce1ce1ab40a5ec1f1917d3903ce1148e389d9\"" Apr 30 01:21:24.140745 containerd[1596]: time="2025-04-30T01:21:24.139519882Z" level=info msg="StartContainer for \"4ff1553d0be0c0ab4595f9a3576ce1ce1ab40a5ec1f1917d3903ce1148e389d9\"" Apr 30 01:21:24.202676 containerd[1596]: time="2025-04-30T01:21:24.202594061Z" level=info msg="StartContainer for \"4ff1553d0be0c0ab4595f9a3576ce1ce1ab40a5ec1f1917d3903ce1148e389d9\" returns successfully" Apr 30 01:21:27.988550 kubelet[2977]: I0430 01:21:27.985623 2977 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-4pwdr" podStartSLOduration=4.894596042 podStartE2EDuration="8.98560398s" podCreationTimestamp="2025-04-30 01:21:19 +0000 UTC" firstStartedPulling="2025-04-30 01:21:20.01428244 +0000 UTC m=+15.957974875" lastFinishedPulling="2025-04-30 01:21:24.105290418 +0000 UTC m=+20.048982813" observedRunningTime="2025-04-30 01:21:24.278715239 +0000 UTC m=+20.222407674" watchObservedRunningTime="2025-04-30 01:21:27.98560398 +0000 UTC m=+23.929296415" Apr 30 01:21:27.988550 kubelet[2977]: I0430 01:21:27.985765 2977 topology_manager.go:215] "Topology Admit Handler" podUID="653e45b6-cb0b-48f6-876a-ab616b35eef8" podNamespace="calico-system" podName="calico-typha-587c4b5ff4-w7f4s" Apr 30 01:21:28.038130 kubelet[2977]: I0430 01:21:28.037941 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/653e45b6-cb0b-48f6-876a-ab616b35eef8-typha-certs\") pod \"calico-typha-587c4b5ff4-w7f4s\" (UID: \"653e45b6-cb0b-48f6-876a-ab616b35eef8\") " pod="calico-system/calico-typha-587c4b5ff4-w7f4s" Apr 30 
01:21:28.038765 kubelet[2977]: I0430 01:21:28.038568 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79xm2\" (UniqueName: \"kubernetes.io/projected/653e45b6-cb0b-48f6-876a-ab616b35eef8-kube-api-access-79xm2\") pod \"calico-typha-587c4b5ff4-w7f4s\" (UID: \"653e45b6-cb0b-48f6-876a-ab616b35eef8\") " pod="calico-system/calico-typha-587c4b5ff4-w7f4s" Apr 30 01:21:28.039164 kubelet[2977]: I0430 01:21:28.039134 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/653e45b6-cb0b-48f6-876a-ab616b35eef8-tigera-ca-bundle\") pod \"calico-typha-587c4b5ff4-w7f4s\" (UID: \"653e45b6-cb0b-48f6-876a-ab616b35eef8\") " pod="calico-system/calico-typha-587c4b5ff4-w7f4s" Apr 30 01:21:28.108541 kubelet[2977]: I0430 01:21:28.107753 2977 topology_manager.go:215] "Topology Admit Handler" podUID="7c75bc90-0608-4911-85f7-776952142f88" podNamespace="calico-system" podName="calico-node-4w8qb" Apr 30 01:21:28.141306 kubelet[2977]: I0430 01:21:28.141263 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c75bc90-0608-4911-85f7-776952142f88-lib-modules\") pod \"calico-node-4w8qb\" (UID: \"7c75bc90-0608-4911-85f7-776952142f88\") " pod="calico-system/calico-node-4w8qb" Apr 30 01:21:28.143109 kubelet[2977]: I0430 01:21:28.141500 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7c75bc90-0608-4911-85f7-776952142f88-cni-log-dir\") pod \"calico-node-4w8qb\" (UID: \"7c75bc90-0608-4911-85f7-776952142f88\") " pod="calico-system/calico-node-4w8qb" Apr 30 01:21:28.143109 kubelet[2977]: I0430 01:21:28.141531 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/7c75bc90-0608-4911-85f7-776952142f88-tigera-ca-bundle\") pod \"calico-node-4w8qb\" (UID: \"7c75bc90-0608-4911-85f7-776952142f88\") " pod="calico-system/calico-node-4w8qb" Apr 30 01:21:28.143109 kubelet[2977]: I0430 01:21:28.141560 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7c75bc90-0608-4911-85f7-776952142f88-cni-bin-dir\") pod \"calico-node-4w8qb\" (UID: \"7c75bc90-0608-4911-85f7-776952142f88\") " pod="calico-system/calico-node-4w8qb" Apr 30 01:21:28.143109 kubelet[2977]: I0430 01:21:28.141585 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c75bc90-0608-4911-85f7-776952142f88-xtables-lock\") pod \"calico-node-4w8qb\" (UID: \"7c75bc90-0608-4911-85f7-776952142f88\") " pod="calico-system/calico-node-4w8qb" Apr 30 01:21:28.143109 kubelet[2977]: I0430 01:21:28.141601 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7c75bc90-0608-4911-85f7-776952142f88-policysync\") pod \"calico-node-4w8qb\" (UID: \"7c75bc90-0608-4911-85f7-776952142f88\") " pod="calico-system/calico-node-4w8qb" Apr 30 01:21:28.143350 kubelet[2977]: I0430 01:21:28.141615 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7c75bc90-0608-4911-85f7-776952142f88-var-run-calico\") pod \"calico-node-4w8qb\" (UID: \"7c75bc90-0608-4911-85f7-776952142f88\") " pod="calico-system/calico-node-4w8qb" Apr 30 01:21:28.143350 kubelet[2977]: I0430 01:21:28.141631 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/7c75bc90-0608-4911-85f7-776952142f88-cni-net-dir\") pod \"calico-node-4w8qb\" (UID: \"7c75bc90-0608-4911-85f7-776952142f88\") " pod="calico-system/calico-node-4w8qb" Apr 30 01:21:28.143350 kubelet[2977]: I0430 01:21:28.141657 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7c75bc90-0608-4911-85f7-776952142f88-node-certs\") pod \"calico-node-4w8qb\" (UID: \"7c75bc90-0608-4911-85f7-776952142f88\") " pod="calico-system/calico-node-4w8qb" Apr 30 01:21:28.143350 kubelet[2977]: I0430 01:21:28.141686 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7c75bc90-0608-4911-85f7-776952142f88-flexvol-driver-host\") pod \"calico-node-4w8qb\" (UID: \"7c75bc90-0608-4911-85f7-776952142f88\") " pod="calico-system/calico-node-4w8qb" Apr 30 01:21:28.143350 kubelet[2977]: I0430 01:21:28.141705 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7c75bc90-0608-4911-85f7-776952142f88-var-lib-calico\") pod \"calico-node-4w8qb\" (UID: \"7c75bc90-0608-4911-85f7-776952142f88\") " pod="calico-system/calico-node-4w8qb" Apr 30 01:21:28.143466 kubelet[2977]: I0430 01:21:28.141721 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmfhm\" (UniqueName: \"kubernetes.io/projected/7c75bc90-0608-4911-85f7-776952142f88-kube-api-access-kmfhm\") pod \"calico-node-4w8qb\" (UID: \"7c75bc90-0608-4911-85f7-776952142f88\") " pod="calico-system/calico-node-4w8qb" Apr 30 01:21:28.251252 kubelet[2977]: I0430 01:21:28.251121 2977 topology_manager.go:215] "Topology Admit Handler" podUID="9fa4d4e7-7c7d-4b64-b56b-683ffe29c791" podNamespace="calico-system" podName="csi-node-driver-rqtjj" Apr 30 01:21:28.253265 
kubelet[2977]: E0430 01:21:28.252753 2977 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rqtjj" podUID="9fa4d4e7-7c7d-4b64-b56b-683ffe29c791" Apr 30 01:21:28.266473 kubelet[2977]: E0430 01:21:28.266386 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.266473 kubelet[2977]: W0430 01:21:28.266415 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.266473 kubelet[2977]: E0430 01:21:28.266435 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.285475 kubelet[2977]: E0430 01:21:28.285447 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.286885 kubelet[2977]: W0430 01:21:28.286734 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.286885 kubelet[2977]: E0430 01:21:28.286844 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:28.320972 containerd[1596]: time="2025-04-30T01:21:28.319939789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-587c4b5ff4-w7f4s,Uid:653e45b6-cb0b-48f6-876a-ab616b35eef8,Namespace:calico-system,Attempt:0,}" Apr 30 01:21:28.325838 kubelet[2977]: E0430 01:21:28.325273 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.325838 kubelet[2977]: W0430 01:21:28.325299 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.325838 kubelet[2977]: E0430 01:21:28.325519 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.326877 kubelet[2977]: E0430 01:21:28.326704 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.326877 kubelet[2977]: W0430 01:21:28.326723 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.326877 kubelet[2977]: E0430 01:21:28.326763 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:28.327995 kubelet[2977]: E0430 01:21:28.327975 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.328503 kubelet[2977]: W0430 01:21:28.328383 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.328503 kubelet[2977]: E0430 01:21:28.328417 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.328951 kubelet[2977]: E0430 01:21:28.328935 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.329225 kubelet[2977]: W0430 01:21:28.329161 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.329225 kubelet[2977]: E0430 01:21:28.329190 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:28.329977 kubelet[2977]: E0430 01:21:28.329909 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.329977 kubelet[2977]: W0430 01:21:28.329925 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.329977 kubelet[2977]: E0430 01:21:28.329942 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.330481 kubelet[2977]: E0430 01:21:28.330383 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.330481 kubelet[2977]: W0430 01:21:28.330412 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.330481 kubelet[2977]: E0430 01:21:28.330440 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:28.330785 kubelet[2977]: E0430 01:21:28.330735 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.330785 kubelet[2977]: W0430 01:21:28.330746 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.330785 kubelet[2977]: E0430 01:21:28.330757 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.331282 kubelet[2977]: E0430 01:21:28.331198 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.331282 kubelet[2977]: W0430 01:21:28.331209 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.331282 kubelet[2977]: E0430 01:21:28.331222 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:28.331827 kubelet[2977]: E0430 01:21:28.331477 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.331827 kubelet[2977]: W0430 01:21:28.331486 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.331827 kubelet[2977]: E0430 01:21:28.331496 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.331827 kubelet[2977]: E0430 01:21:28.331643 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.331827 kubelet[2977]: W0430 01:21:28.331651 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.331827 kubelet[2977]: E0430 01:21:28.331659 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:28.332371 kubelet[2977]: E0430 01:21:28.332194 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.332371 kubelet[2977]: W0430 01:21:28.332205 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.332371 kubelet[2977]: E0430 01:21:28.332242 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.333379 kubelet[2977]: E0430 01:21:28.332770 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.333379 kubelet[2977]: W0430 01:21:28.332782 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.333379 kubelet[2977]: E0430 01:21:28.332794 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:28.333710 kubelet[2977]: E0430 01:21:28.333612 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.333710 kubelet[2977]: W0430 01:21:28.333625 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.333710 kubelet[2977]: E0430 01:21:28.333648 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.334592 kubelet[2977]: E0430 01:21:28.334368 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.334592 kubelet[2977]: W0430 01:21:28.334390 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.334814 kubelet[2977]: E0430 01:21:28.334407 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:28.335517 kubelet[2977]: E0430 01:21:28.335334 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.335517 kubelet[2977]: W0430 01:21:28.335431 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.335517 kubelet[2977]: E0430 01:21:28.335446 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.341476 kubelet[2977]: E0430 01:21:28.341142 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.341476 kubelet[2977]: W0430 01:21:28.341163 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.341476 kubelet[2977]: E0430 01:21:28.341187 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:28.341707 kubelet[2977]: E0430 01:21:28.341640 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.341707 kubelet[2977]: W0430 01:21:28.341654 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.341707 kubelet[2977]: E0430 01:21:28.341666 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.342599 kubelet[2977]: E0430 01:21:28.341965 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.342599 kubelet[2977]: W0430 01:21:28.341976 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.342599 kubelet[2977]: E0430 01:21:28.341986 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:28.344956 kubelet[2977]: E0430 01:21:28.343577 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.344956 kubelet[2977]: W0430 01:21:28.343592 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.344956 kubelet[2977]: E0430 01:21:28.343619 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.344956 kubelet[2977]: E0430 01:21:28.344835 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.344956 kubelet[2977]: W0430 01:21:28.344849 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.344956 kubelet[2977]: E0430 01:21:28.344863 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:28.349720 kubelet[2977]: E0430 01:21:28.349439 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.349720 kubelet[2977]: W0430 01:21:28.349455 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.349720 kubelet[2977]: E0430 01:21:28.349469 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.349720 kubelet[2977]: I0430 01:21:28.349508 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9fa4d4e7-7c7d-4b64-b56b-683ffe29c791-varrun\") pod \"csi-node-driver-rqtjj\" (UID: \"9fa4d4e7-7c7d-4b64-b56b-683ffe29c791\") " pod="calico-system/csi-node-driver-rqtjj" Apr 30 01:21:28.350170 kubelet[2977]: E0430 01:21:28.349937 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.350170 kubelet[2977]: W0430 01:21:28.349951 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.350170 kubelet[2977]: E0430 01:21:28.349984 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:28.350439 kubelet[2977]: I0430 01:21:28.350008 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9fa4d4e7-7c7d-4b64-b56b-683ffe29c791-kubelet-dir\") pod \"csi-node-driver-rqtjj\" (UID: \"9fa4d4e7-7c7d-4b64-b56b-683ffe29c791\") " pod="calico-system/csi-node-driver-rqtjj" Apr 30 01:21:28.350613 kubelet[2977]: E0430 01:21:28.350573 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.350613 kubelet[2977]: W0430 01:21:28.350585 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.350774 kubelet[2977]: E0430 01:21:28.350599 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.351027 kubelet[2977]: E0430 01:21:28.350957 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.351027 kubelet[2977]: W0430 01:21:28.350968 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.351027 kubelet[2977]: E0430 01:21:28.350987 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:28.352087 kubelet[2977]: E0430 01:21:28.351421 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.352087 kubelet[2977]: W0430 01:21:28.351441 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.352087 kubelet[2977]: E0430 01:21:28.351456 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.352087 kubelet[2977]: I0430 01:21:28.351477 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9fa4d4e7-7c7d-4b64-b56b-683ffe29c791-socket-dir\") pod \"csi-node-driver-rqtjj\" (UID: \"9fa4d4e7-7c7d-4b64-b56b-683ffe29c791\") " pod="calico-system/csi-node-driver-rqtjj" Apr 30 01:21:28.352506 kubelet[2977]: E0430 01:21:28.352312 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.352506 kubelet[2977]: W0430 01:21:28.352340 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.352506 kubelet[2977]: E0430 01:21:28.352435 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:28.352506 kubelet[2977]: I0430 01:21:28.352460 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg6g5\" (UniqueName: \"kubernetes.io/projected/9fa4d4e7-7c7d-4b64-b56b-683ffe29c791-kube-api-access-fg6g5\") pod \"csi-node-driver-rqtjj\" (UID: \"9fa4d4e7-7c7d-4b64-b56b-683ffe29c791\") " pod="calico-system/csi-node-driver-rqtjj" Apr 30 01:21:28.353049 kubelet[2977]: E0430 01:21:28.352940 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.353049 kubelet[2977]: W0430 01:21:28.352953 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.353424 kubelet[2977]: E0430 01:21:28.353121 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.353722 kubelet[2977]: E0430 01:21:28.353658 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.353722 kubelet[2977]: W0430 01:21:28.353670 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.354049 kubelet[2977]: E0430 01:21:28.353828 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:28.354271 kubelet[2977]: E0430 01:21:28.354248 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.354431 kubelet[2977]: W0430 01:21:28.354336 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.354431 kubelet[2977]: E0430 01:21:28.354371 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.354431 kubelet[2977]: I0430 01:21:28.354396 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9fa4d4e7-7c7d-4b64-b56b-683ffe29c791-registration-dir\") pod \"csi-node-driver-rqtjj\" (UID: \"9fa4d4e7-7c7d-4b64-b56b-683ffe29c791\") " pod="calico-system/csi-node-driver-rqtjj" Apr 30 01:21:28.355209 kubelet[2977]: E0430 01:21:28.354957 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.355209 kubelet[2977]: W0430 01:21:28.354971 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.355432 kubelet[2977]: E0430 01:21:28.355316 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:28.355673 kubelet[2977]: E0430 01:21:28.355528 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.355673 kubelet[2977]: W0430 01:21:28.355538 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.355977 kubelet[2977]: E0430 01:21:28.355548 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.356551 kubelet[2977]: E0430 01:21:28.356343 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.356551 kubelet[2977]: W0430 01:21:28.356366 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.356551 kubelet[2977]: E0430 01:21:28.356495 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:28.357679 kubelet[2977]: E0430 01:21:28.357271 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.357679 kubelet[2977]: W0430 01:21:28.357284 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.357679 kubelet[2977]: E0430 01:21:28.357296 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.358528 kubelet[2977]: E0430 01:21:28.358447 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.358528 kubelet[2977]: W0430 01:21:28.358463 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.358528 kubelet[2977]: E0430 01:21:28.358477 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:28.359497 kubelet[2977]: E0430 01:21:28.359342 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.359497 kubelet[2977]: W0430 01:21:28.359366 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.359497 kubelet[2977]: E0430 01:21:28.359385 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.373877 containerd[1596]: time="2025-04-30T01:21:28.373135842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:21:28.373877 containerd[1596]: time="2025-04-30T01:21:28.373211524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:21:28.373877 containerd[1596]: time="2025-04-30T01:21:28.373290886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:28.373877 containerd[1596]: time="2025-04-30T01:21:28.373397289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:28.423029 containerd[1596]: time="2025-04-30T01:21:28.422457229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4w8qb,Uid:7c75bc90-0608-4911-85f7-776952142f88,Namespace:calico-system,Attempt:0,}" Apr 30 01:21:28.461263 kubelet[2977]: E0430 01:21:28.461231 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.462169 kubelet[2977]: W0430 01:21:28.462120 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.462354 kubelet[2977]: E0430 01:21:28.462341 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.463275 kubelet[2977]: E0430 01:21:28.463213 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.463765 kubelet[2977]: W0430 01:21:28.463436 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.463765 kubelet[2977]: E0430 01:21:28.463585 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:28.465607 kubelet[2977]: E0430 01:21:28.464997 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.465607 kubelet[2977]: W0430 01:21:28.465294 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.465607 kubelet[2977]: E0430 01:21:28.465340 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.466992 kubelet[2977]: E0430 01:21:28.466687 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.466992 kubelet[2977]: W0430 01:21:28.466704 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.467865 kubelet[2977]: E0430 01:21:28.467489 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:28.468688 kubelet[2977]: E0430 01:21:28.468231 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.468688 kubelet[2977]: W0430 01:21:28.468251 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.469498 kubelet[2977]: E0430 01:21:28.469269 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.471186 kubelet[2977]: E0430 01:21:28.470416 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.471186 kubelet[2977]: W0430 01:21:28.470435 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.472025 kubelet[2977]: E0430 01:21:28.471499 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:28.472025 kubelet[2977]: E0430 01:21:28.471946 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.472025 kubelet[2977]: W0430 01:21:28.471960 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.472793 kubelet[2977]: E0430 01:21:28.472627 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.473659 containerd[1596]: time="2025-04-30T01:21:28.473419820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-587c4b5ff4-w7f4s,Uid:653e45b6-cb0b-48f6-876a-ab616b35eef8,Namespace:calico-system,Attempt:0,} returns sandbox id \"fc2bd24f4b7c667fee0221e44d6a3b1ad745ee825376e267335ea58420b29f4c\"" Apr 30 01:21:28.474210 kubelet[2977]: E0430 01:21:28.473939 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.474210 kubelet[2977]: W0430 01:21:28.473951 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.474784 kubelet[2977]: E0430 01:21:28.474530 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.474784 kubelet[2977]: W0430 01:21:28.474545 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.476128 kubelet[2977]: E0430 
01:21:28.475519 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.477211 kubelet[2977]: E0430 01:21:28.476809 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.477211 kubelet[2977]: W0430 01:21:28.476827 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.477211 kubelet[2977]: E0430 01:21:28.477155 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.477211 kubelet[2977]: E0430 01:21:28.477179 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:28.478270 kubelet[2977]: E0430 01:21:28.477434 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.478270 kubelet[2977]: W0430 01:21:28.478005 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.478860 kubelet[2977]: E0430 01:21:28.478461 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.478860 kubelet[2977]: W0430 01:21:28.478659 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.480117 kubelet[2977]: E0430 01:21:28.479651 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.480117 kubelet[2977]: W0430 01:21:28.479667 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.480462 kubelet[2977]: E0430 01:21:28.480446 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.480723 kubelet[2977]: W0430 01:21:28.480549 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.482561 kubelet[2977]: E0430 01:21:28.481956 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON 
input Apr 30 01:21:28.482561 kubelet[2977]: W0430 01:21:28.481970 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.482561 kubelet[2977]: E0430 01:21:28.481986 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.482561 kubelet[2977]: E0430 01:21:28.482464 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.482561 kubelet[2977]: E0430 01:21:28.482470 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.482561 kubelet[2977]: W0430 01:21:28.482475 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.482561 kubelet[2977]: E0430 01:21:28.482494 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.482561 kubelet[2977]: E0430 01:21:28.482501 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.482561 kubelet[2977]: E0430 01:21:28.482529 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:28.482561 kubelet[2977]: E0430 01:21:28.482485 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.485087 kubelet[2977]: E0430 01:21:28.484173 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.485087 kubelet[2977]: W0430 01:21:28.484199 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.485087 kubelet[2977]: E0430 01:21:28.484223 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.485217 containerd[1596]: time="2025-04-30T01:21:28.484739009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" Apr 30 01:21:28.486441 kubelet[2977]: E0430 01:21:28.486421 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.486756 kubelet[2977]: W0430 01:21:28.486740 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.487532 kubelet[2977]: E0430 01:21:28.487511 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.487678 kubelet[2977]: W0430 01:21:28.487660 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 
01:21:28.488422 kubelet[2977]: E0430 01:21:28.488404 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.488772 kubelet[2977]: W0430 01:21:28.488572 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.491313 kubelet[2977]: E0430 01:21:28.491082 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.491313 kubelet[2977]: E0430 01:21:28.491192 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.491313 kubelet[2977]: W0430 01:21:28.491213 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.491313 kubelet[2977]: E0430 01:21:28.491229 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.491313 kubelet[2977]: E0430 01:21:28.491250 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.491313 kubelet[2977]: E0430 01:21:28.491274 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:28.492480 kubelet[2977]: E0430 01:21:28.492223 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.492480 kubelet[2977]: W0430 01:21:28.492350 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.492480 kubelet[2977]: E0430 01:21:28.492377 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.493550 containerd[1596]: time="2025-04-30T01:21:28.493213680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:21:28.493550 containerd[1596]: time="2025-04-30T01:21:28.493279762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:21:28.493550 containerd[1596]: time="2025-04-30T01:21:28.493296483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:28.493984 kubelet[2977]: E0430 01:21:28.493472 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.493984 kubelet[2977]: W0430 01:21:28.493487 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.493984 kubelet[2977]: E0430 01:21:28.493513 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:28.493984 kubelet[2977]: E0430 01:21:28.493969 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.493984 kubelet[2977]: W0430 01:21:28.493985 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.494257 kubelet[2977]: E0430 01:21:28.494026 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.494486 kubelet[2977]: E0430 01:21:28.494360 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.494486 kubelet[2977]: W0430 01:21:28.494387 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.494486 kubelet[2977]: E0430 01:21:28.494401 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.496311 containerd[1596]: time="2025-04-30T01:21:28.496230923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:28.515522 kubelet[2977]: E0430 01:21:28.514901 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:28.515522 kubelet[2977]: W0430 01:21:28.515089 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:28.515522 kubelet[2977]: E0430 01:21:28.515119 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:28.558347 containerd[1596]: time="2025-04-30T01:21:28.558301538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4w8qb,Uid:7c75bc90-0608-4911-85f7-776952142f88,Namespace:calico-system,Attempt:0,} returns sandbox id \"259deebea586cd829b3bb76455a93981d6d4a0a7debb212a614b92dd06f2306f\"" Apr 30 01:21:30.174243 kubelet[2977]: E0430 01:21:30.174203 2977 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rqtjj" podUID="9fa4d4e7-7c7d-4b64-b56b-683ffe29c791" Apr 30 01:21:30.273253 containerd[1596]: time="2025-04-30T01:21:30.273204747Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:30.274347 containerd[1596]: time="2025-04-30T01:21:30.274180894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" Apr 30 01:21:30.275444 containerd[1596]: time="2025-04-30T01:21:30.275217522Z" level=info msg="ImageCreate event 
name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:30.277587 containerd[1596]: time="2025-04-30T01:21:30.277537105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:30.278424 containerd[1596]: time="2025-04-30T01:21:30.278383608Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 1.793594357s" Apr 30 01:21:30.278624 containerd[1596]: time="2025-04-30T01:21:30.278527252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" Apr 30 01:21:30.279736 containerd[1596]: time="2025-04-30T01:21:30.279699124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" Apr 30 01:21:30.303255 containerd[1596]: time="2025-04-30T01:21:30.302114453Z" level=info msg="CreateContainer within sandbox \"fc2bd24f4b7c667fee0221e44d6a3b1ad745ee825376e267335ea58420b29f4c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 30 01:21:30.323302 containerd[1596]: time="2025-04-30T01:21:30.323240707Z" level=info msg="CreateContainer within sandbox \"fc2bd24f4b7c667fee0221e44d6a3b1ad745ee825376e267335ea58420b29f4c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"40005b5883127d7f88722b37faef2feb32deecbc8e1978b0da4e1020acece2c8\"" Apr 30 01:21:30.324743 containerd[1596]: time="2025-04-30T01:21:30.323995168Z" level=info msg="StartContainer for 
\"40005b5883127d7f88722b37faef2feb32deecbc8e1978b0da4e1020acece2c8\"" Apr 30 01:21:30.400886 containerd[1596]: time="2025-04-30T01:21:30.399416778Z" level=info msg="StartContainer for \"40005b5883127d7f88722b37faef2feb32deecbc8e1978b0da4e1020acece2c8\" returns successfully" Apr 30 01:21:31.332386 kubelet[2977]: I0430 01:21:31.332320 2977 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-587c4b5ff4-w7f4s" podStartSLOduration=2.532640036 podStartE2EDuration="4.332299999s" podCreationTimestamp="2025-04-30 01:21:27 +0000 UTC" firstStartedPulling="2025-04-30 01:21:28.479902997 +0000 UTC m=+24.423595432" lastFinishedPulling="2025-04-30 01:21:30.27956296 +0000 UTC m=+26.223255395" observedRunningTime="2025-04-30 01:21:31.30800586 +0000 UTC m=+27.251698295" watchObservedRunningTime="2025-04-30 01:21:31.332299999 +0000 UTC m=+27.275992434" Apr 30 01:21:31.367611 kubelet[2977]: E0430 01:21:31.367574 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.367611 kubelet[2977]: W0430 01:21:31.367600 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.367611 kubelet[2977]: E0430 01:21:31.367622 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:31.367971 kubelet[2977]: E0430 01:21:31.367959 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.367971 kubelet[2977]: W0430 01:21:31.367972 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.368188 kubelet[2977]: E0430 01:21:31.367983 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:31.368332 kubelet[2977]: E0430 01:21:31.368315 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.368369 kubelet[2977]: W0430 01:21:31.368335 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.368369 kubelet[2977]: E0430 01:21:31.368350 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:31.368580 kubelet[2977]: E0430 01:21:31.368568 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.368580 kubelet[2977]: W0430 01:21:31.368580 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.368839 kubelet[2977]: E0430 01:21:31.368590 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:31.369037 kubelet[2977]: E0430 01:21:31.368968 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.369037 kubelet[2977]: W0430 01:21:31.368985 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.369037 kubelet[2977]: E0430 01:21:31.368999 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:31.369717 kubelet[2977]: E0430 01:21:31.369643 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.369717 kubelet[2977]: W0430 01:21:31.369670 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.369717 kubelet[2977]: E0430 01:21:31.369688 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:31.370642 kubelet[2977]: E0430 01:21:31.370440 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.370642 kubelet[2977]: W0430 01:21:31.370640 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.370868 kubelet[2977]: E0430 01:21:31.370656 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:31.371283 kubelet[2977]: E0430 01:21:31.371265 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.371283 kubelet[2977]: W0430 01:21:31.371278 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.371283 kubelet[2977]: E0430 01:21:31.371291 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:31.371777 kubelet[2977]: E0430 01:21:31.371759 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.371777 kubelet[2977]: W0430 01:21:31.371772 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.371777 kubelet[2977]: E0430 01:21:31.371782 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:31.372309 kubelet[2977]: E0430 01:21:31.372286 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.372309 kubelet[2977]: W0430 01:21:31.372298 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.372309 kubelet[2977]: E0430 01:21:31.372308 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:31.372754 kubelet[2977]: E0430 01:21:31.372740 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.372754 kubelet[2977]: W0430 01:21:31.372754 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.372817 kubelet[2977]: E0430 01:21:31.372765 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:31.372963 kubelet[2977]: E0430 01:21:31.372953 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.372963 kubelet[2977]: W0430 01:21:31.372962 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.373055 kubelet[2977]: E0430 01:21:31.372970 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:31.373217 kubelet[2977]: E0430 01:21:31.373205 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.373217 kubelet[2977]: W0430 01:21:31.373217 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.373368 kubelet[2977]: E0430 01:21:31.373225 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:31.373524 kubelet[2977]: E0430 01:21:31.373433 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.373524 kubelet[2977]: W0430 01:21:31.373442 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.373524 kubelet[2977]: E0430 01:21:31.373450 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:31.373756 kubelet[2977]: E0430 01:21:31.373747 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.373756 kubelet[2977]: W0430 01:21:31.373755 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.373816 kubelet[2977]: E0430 01:21:31.373762 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:31.398830 kubelet[2977]: E0430 01:21:31.398660 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.398830 kubelet[2977]: W0430 01:21:31.398682 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.398830 kubelet[2977]: E0430 01:21:31.398701 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:31.399247 kubelet[2977]: E0430 01:21:31.399227 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.399497 kubelet[2977]: W0430 01:21:31.399338 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.399497 kubelet[2977]: E0430 01:21:31.399368 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:31.399655 kubelet[2977]: E0430 01:21:31.399643 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.399719 kubelet[2977]: W0430 01:21:31.399708 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.399917 kubelet[2977]: E0430 01:21:31.399781 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:31.400096 kubelet[2977]: E0430 01:21:31.400083 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.400181 kubelet[2977]: W0430 01:21:31.400169 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.400270 kubelet[2977]: E0430 01:21:31.400244 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:31.400526 kubelet[2977]: E0430 01:21:31.400512 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.400676 kubelet[2977]: W0430 01:21:31.400582 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.400676 kubelet[2977]: E0430 01:21:31.400607 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:31.400821 kubelet[2977]: E0430 01:21:31.400809 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.400872 kubelet[2977]: W0430 01:21:31.400862 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.400939 kubelet[2977]: E0430 01:21:31.400922 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:31.401379 kubelet[2977]: E0430 01:21:31.401271 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.401379 kubelet[2977]: W0430 01:21:31.401284 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.401379 kubelet[2977]: E0430 01:21:31.401303 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:31.401792 kubelet[2977]: E0430 01:21:31.401645 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.401792 kubelet[2977]: W0430 01:21:31.401657 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.401792 kubelet[2977]: E0430 01:21:31.401673 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:31.401961 kubelet[2977]: E0430 01:21:31.401950 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.402042 kubelet[2977]: W0430 01:21:31.402007 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.402167 kubelet[2977]: E0430 01:21:31.402137 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:31.402412 kubelet[2977]: E0430 01:21:31.402370 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.402634 kubelet[2977]: W0430 01:21:31.402540 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.402634 kubelet[2977]: E0430 01:21:31.402576 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:31.402850 kubelet[2977]: E0430 01:21:31.402754 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.402850 kubelet[2977]: W0430 01:21:31.402764 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.402850 kubelet[2977]: E0430 01:21:31.402781 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:31.403002 kubelet[2977]: E0430 01:21:31.402989 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.403102 kubelet[2977]: W0430 01:21:31.403090 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.403180 kubelet[2977]: E0430 01:21:31.403168 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:31.403428 kubelet[2977]: E0430 01:21:31.403410 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.403428 kubelet[2977]: W0430 01:21:31.403427 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.403616 kubelet[2977]: E0430 01:21:31.403448 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:31.403709 kubelet[2977]: E0430 01:21:31.403698 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.403744 kubelet[2977]: W0430 01:21:31.403710 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.403744 kubelet[2977]: E0430 01:21:31.403725 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:31.404247 kubelet[2977]: E0430 01:21:31.404227 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.404247 kubelet[2977]: W0430 01:21:31.404244 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.404410 kubelet[2977]: E0430 01:21:31.404349 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:31.404410 kubelet[2977]: E0430 01:21:31.404410 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.404527 kubelet[2977]: W0430 01:21:31.404420 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.404527 kubelet[2977]: E0430 01:21:31.404432 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:31.404747 kubelet[2977]: E0430 01:21:31.404735 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.404747 kubelet[2977]: W0430 01:21:31.404747 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.404838 kubelet[2977]: E0430 01:21:31.404759 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 01:21:31.405235 kubelet[2977]: E0430 01:21:31.405217 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 01:21:31.405235 kubelet[2977]: W0430 01:21:31.405234 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 01:21:31.405329 kubelet[2977]: E0430 01:21:31.405251 2977 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 01:21:31.759302 containerd[1596]: time="2025-04-30T01:21:31.757383650Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:31.760400 containerd[1596]: time="2025-04-30T01:21:31.759771355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" Apr 30 01:21:31.760400 containerd[1596]: time="2025-04-30T01:21:31.759891478Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:31.765425 containerd[1596]: time="2025-04-30T01:21:31.765326546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:31.767824 containerd[1596]: time="2025-04-30T01:21:31.767768492Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.488029688s" Apr 30 01:21:31.767824 containerd[1596]: time="2025-04-30T01:21:31.767830574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" Apr 30 01:21:31.776002 containerd[1596]: time="2025-04-30T01:21:31.775874792Z" level=info msg="CreateContainer within sandbox \"259deebea586cd829b3bb76455a93981d6d4a0a7debb212a614b92dd06f2306f\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 01:21:31.807427 containerd[1596]: time="2025-04-30T01:21:31.807338885Z" level=info msg="CreateContainer within sandbox \"259deebea586cd829b3bb76455a93981d6d4a0a7debb212a614b92dd06f2306f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"515f1cbcc86deb35cd444ecf370b8eb8427bd1b3d6d001c4458e500297377c85\"" Apr 30 01:21:31.809708 containerd[1596]: time="2025-04-30T01:21:31.809648588Z" level=info msg="StartContainer for \"515f1cbcc86deb35cd444ecf370b8eb8427bd1b3d6d001c4458e500297377c85\"" Apr 30 01:21:31.889422 containerd[1596]: time="2025-04-30T01:21:31.889379031Z" level=info msg="StartContainer for \"515f1cbcc86deb35cd444ecf370b8eb8427bd1b3d6d001c4458e500297377c85\" returns successfully" Apr 30 01:21:31.942554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-515f1cbcc86deb35cd444ecf370b8eb8427bd1b3d6d001c4458e500297377c85-rootfs.mount: Deactivated successfully. Apr 30 01:21:32.038702 containerd[1596]: time="2025-04-30T01:21:32.037979940Z" level=info msg="shim disconnected" id=515f1cbcc86deb35cd444ecf370b8eb8427bd1b3d6d001c4458e500297377c85 namespace=k8s.io Apr 30 01:21:32.038702 containerd[1596]: time="2025-04-30T01:21:32.038123504Z" level=warning msg="cleaning up after shim disconnected" id=515f1cbcc86deb35cd444ecf370b8eb8427bd1b3d6d001c4458e500297377c85 namespace=k8s.io Apr 30 01:21:32.038702 containerd[1596]: time="2025-04-30T01:21:32.038144144Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 01:21:32.172275 kubelet[2977]: E0430 01:21:32.172218 2977 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rqtjj" podUID="9fa4d4e7-7c7d-4b64-b56b-683ffe29c791" Apr 30 01:21:32.298573 containerd[1596]: time="2025-04-30T01:21:32.298463112Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 01:21:34.174132 kubelet[2977]: E0430 01:21:34.173296 2977 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rqtjj" podUID="9fa4d4e7-7c7d-4b64-b56b-683ffe29c791" Apr 30 01:21:34.787785 containerd[1596]: time="2025-04-30T01:21:34.787198672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:34.789261 containerd[1596]: time="2025-04-30T01:21:34.789213606Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" Apr 30 01:21:34.789690 containerd[1596]: time="2025-04-30T01:21:34.789649058Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:34.796413 containerd[1596]: time="2025-04-30T01:21:34.794262863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:34.796413 containerd[1596]: time="2025-04-30T01:21:34.795850145Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 2.496468569s" Apr 30 01:21:34.796413 containerd[1596]: time="2025-04-30T01:21:34.795903587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference 
\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" Apr 30 01:21:34.801981 containerd[1596]: time="2025-04-30T01:21:34.801915069Z" level=info msg="CreateContainer within sandbox \"259deebea586cd829b3bb76455a93981d6d4a0a7debb212a614b92dd06f2306f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 01:21:34.828228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2031355978.mount: Deactivated successfully. Apr 30 01:21:34.834006 containerd[1596]: time="2025-04-30T01:21:34.833957613Z" level=info msg="CreateContainer within sandbox \"259deebea586cd829b3bb76455a93981d6d4a0a7debb212a614b92dd06f2306f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fb2ddb4a92e2194e9126eb922a65e5cddc2f43586a00705843d96061aae8286c\"" Apr 30 01:21:34.835611 containerd[1596]: time="2025-04-30T01:21:34.835225527Z" level=info msg="StartContainer for \"fb2ddb4a92e2194e9126eb922a65e5cddc2f43586a00705843d96061aae8286c\"" Apr 30 01:21:34.905747 containerd[1596]: time="2025-04-30T01:21:34.905701428Z" level=info msg="StartContainer for \"fb2ddb4a92e2194e9126eb922a65e5cddc2f43586a00705843d96061aae8286c\" returns successfully" Apr 30 01:21:35.425963 containerd[1596]: time="2025-04-30T01:21:35.425918955Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 01:21:35.435812 kubelet[2977]: I0430 01:21:35.435464 2977 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 01:21:35.461964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb2ddb4a92e2194e9126eb922a65e5cddc2f43586a00705843d96061aae8286c-rootfs.mount: Deactivated successfully. 
Apr 30 01:21:35.474951 kubelet[2977]: I0430 01:21:35.474863 2977 topology_manager.go:215] "Topology Admit Handler" podUID="37fed3e1-0881-414f-8dab-a8370cfdc3cb" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7t5nj" Apr 30 01:21:35.485969 kubelet[2977]: I0430 01:21:35.484500 2977 topology_manager.go:215] "Topology Admit Handler" podUID="a72ecee6-7912-44aa-bb57-e01abdb75987" podNamespace="calico-apiserver" podName="calico-apiserver-6d4cb765cd-j4ngc" Apr 30 01:21:35.485969 kubelet[2977]: I0430 01:21:35.484677 2977 topology_manager.go:215] "Topology Admit Handler" podUID="07f2d83c-63d4-40ec-8d8b-ab44f45ca400" podNamespace="kube-system" podName="coredns-7db6d8ff4d-stlxr" Apr 30 01:21:35.485969 kubelet[2977]: I0430 01:21:35.484803 2977 topology_manager.go:215] "Topology Admit Handler" podUID="67dae70a-bf89-45d5-a866-8ee292cf8980" podNamespace="calico-system" podName="calico-kube-controllers-68b69776db-wk29h" Apr 30 01:21:35.489141 kubelet[2977]: I0430 01:21:35.489105 2977 topology_manager.go:215] "Topology Admit Handler" podUID="ce510606-0f1b-4bd3-b5b8-10bbf3c549c3" podNamespace="calico-apiserver" podName="calico-apiserver-6d4cb765cd-t2ldw" Apr 30 01:21:35.531169 kubelet[2977]: I0430 01:21:35.531133 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ce510606-0f1b-4bd3-b5b8-10bbf3c549c3-calico-apiserver-certs\") pod \"calico-apiserver-6d4cb765cd-t2ldw\" (UID: \"ce510606-0f1b-4bd3-b5b8-10bbf3c549c3\") " pod="calico-apiserver/calico-apiserver-6d4cb765cd-t2ldw" Apr 30 01:21:35.531408 kubelet[2977]: I0430 01:21:35.531388 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67dae70a-bf89-45d5-a866-8ee292cf8980-tigera-ca-bundle\") pod \"calico-kube-controllers-68b69776db-wk29h\" (UID: \"67dae70a-bf89-45d5-a866-8ee292cf8980\") " 
pod="calico-system/calico-kube-controllers-68b69776db-wk29h" Apr 30 01:21:35.531512 kubelet[2977]: I0430 01:21:35.531496 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whn4s\" (UniqueName: \"kubernetes.io/projected/37fed3e1-0881-414f-8dab-a8370cfdc3cb-kube-api-access-whn4s\") pod \"coredns-7db6d8ff4d-7t5nj\" (UID: \"37fed3e1-0881-414f-8dab-a8370cfdc3cb\") " pod="kube-system/coredns-7db6d8ff4d-7t5nj" Apr 30 01:21:35.531604 kubelet[2977]: I0430 01:21:35.531588 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt2zn\" (UniqueName: \"kubernetes.io/projected/a72ecee6-7912-44aa-bb57-e01abdb75987-kube-api-access-mt2zn\") pod \"calico-apiserver-6d4cb765cd-j4ngc\" (UID: \"a72ecee6-7912-44aa-bb57-e01abdb75987\") " pod="calico-apiserver/calico-apiserver-6d4cb765cd-j4ngc" Apr 30 01:21:35.531763 kubelet[2977]: I0430 01:21:35.531723 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbkvz\" (UniqueName: \"kubernetes.io/projected/07f2d83c-63d4-40ec-8d8b-ab44f45ca400-kube-api-access-rbkvz\") pod \"coredns-7db6d8ff4d-stlxr\" (UID: \"07f2d83c-63d4-40ec-8d8b-ab44f45ca400\") " pod="kube-system/coredns-7db6d8ff4d-stlxr" Apr 30 01:21:35.531985 kubelet[2977]: I0430 01:21:35.531963 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8vpg\" (UniqueName: \"kubernetes.io/projected/67dae70a-bf89-45d5-a866-8ee292cf8980-kube-api-access-b8vpg\") pod \"calico-kube-controllers-68b69776db-wk29h\" (UID: \"67dae70a-bf89-45d5-a866-8ee292cf8980\") " pod="calico-system/calico-kube-controllers-68b69776db-wk29h" Apr 30 01:21:35.532236 kubelet[2977]: I0430 01:21:35.532214 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/37fed3e1-0881-414f-8dab-a8370cfdc3cb-config-volume\") pod \"coredns-7db6d8ff4d-7t5nj\" (UID: \"37fed3e1-0881-414f-8dab-a8370cfdc3cb\") " pod="kube-system/coredns-7db6d8ff4d-7t5nj" Apr 30 01:21:35.532443 kubelet[2977]: I0430 01:21:35.532426 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khvnt\" (UniqueName: \"kubernetes.io/projected/ce510606-0f1b-4bd3-b5b8-10bbf3c549c3-kube-api-access-khvnt\") pod \"calico-apiserver-6d4cb765cd-t2ldw\" (UID: \"ce510606-0f1b-4bd3-b5b8-10bbf3c549c3\") " pod="calico-apiserver/calico-apiserver-6d4cb765cd-t2ldw" Apr 30 01:21:35.532658 kubelet[2977]: I0430 01:21:35.532593 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a72ecee6-7912-44aa-bb57-e01abdb75987-calico-apiserver-certs\") pod \"calico-apiserver-6d4cb765cd-j4ngc\" (UID: \"a72ecee6-7912-44aa-bb57-e01abdb75987\") " pod="calico-apiserver/calico-apiserver-6d4cb765cd-j4ngc" Apr 30 01:21:35.532658 kubelet[2977]: I0430 01:21:35.532618 2977 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07f2d83c-63d4-40ec-8d8b-ab44f45ca400-config-volume\") pod \"coredns-7db6d8ff4d-stlxr\" (UID: \"07f2d83c-63d4-40ec-8d8b-ab44f45ca400\") " pod="kube-system/coredns-7db6d8ff4d-stlxr" Apr 30 01:21:35.582912 containerd[1596]: time="2025-04-30T01:21:35.582707416Z" level=info msg="shim disconnected" id=fb2ddb4a92e2194e9126eb922a65e5cddc2f43586a00705843d96061aae8286c namespace=k8s.io Apr 30 01:21:35.582912 containerd[1596]: time="2025-04-30T01:21:35.582800538Z" level=warning msg="cleaning up after shim disconnected" id=fb2ddb4a92e2194e9126eb922a65e5cddc2f43586a00705843d96061aae8286c namespace=k8s.io Apr 30 01:21:35.582912 containerd[1596]: time="2025-04-30T01:21:35.582810499Z" level=info msg="cleaning up 
dead shim" namespace=k8s.io Apr 30 01:21:35.599510 containerd[1596]: time="2025-04-30T01:21:35.599155779Z" level=warning msg="cleanup warnings time=\"2025-04-30T01:21:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 01:21:35.786210 containerd[1596]: time="2025-04-30T01:21:35.785993608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7t5nj,Uid:37fed3e1-0881-414f-8dab-a8370cfdc3cb,Namespace:kube-system,Attempt:0,}" Apr 30 01:21:35.789637 containerd[1596]: time="2025-04-30T01:21:35.789171133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68b69776db-wk29h,Uid:67dae70a-bf89-45d5-a866-8ee292cf8980,Namespace:calico-system,Attempt:0,}" Apr 30 01:21:35.793701 containerd[1596]: time="2025-04-30T01:21:35.793304205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4cb765cd-t2ldw,Uid:ce510606-0f1b-4bd3-b5b8-10bbf3c549c3,Namespace:calico-apiserver,Attempt:0,}" Apr 30 01:21:35.796953 containerd[1596]: time="2025-04-30T01:21:35.796907982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4cb765cd-j4ngc,Uid:a72ecee6-7912-44aa-bb57-e01abdb75987,Namespace:calico-apiserver,Attempt:0,}" Apr 30 01:21:35.800353 containerd[1596]: time="2025-04-30T01:21:35.800167189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-stlxr,Uid:07f2d83c-63d4-40ec-8d8b-ab44f45ca400,Namespace:kube-system,Attempt:0,}" Apr 30 01:21:36.014039 containerd[1596]: time="2025-04-30T01:21:36.013770578Z" level=error msg="Failed to destroy network for sandbox \"df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.015040 
containerd[1596]: time="2025-04-30T01:21:36.014448517Z" level=error msg="encountered an error cleaning up failed sandbox \"df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.015040 containerd[1596]: time="2025-04-30T01:21:36.014546319Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4cb765cd-j4ngc,Uid:a72ecee6-7912-44aa-bb57-e01abdb75987,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.015457 kubelet[2977]: E0430 01:21:36.015406 2977 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.015559 kubelet[2977]: E0430 01:21:36.015481 2977 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d4cb765cd-j4ngc" Apr 30 01:21:36.015559 kubelet[2977]: E0430 01:21:36.015499 
2977 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d4cb765cd-j4ngc" Apr 30 01:21:36.015559 kubelet[2977]: E0430 01:21:36.015543 2977 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d4cb765cd-j4ngc_calico-apiserver(a72ecee6-7912-44aa-bb57-e01abdb75987)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d4cb765cd-j4ngc_calico-apiserver(a72ecee6-7912-44aa-bb57-e01abdb75987)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d4cb765cd-j4ngc" podUID="a72ecee6-7912-44aa-bb57-e01abdb75987" Apr 30 01:21:36.019228 containerd[1596]: time="2025-04-30T01:21:36.019178604Z" level=error msg="Failed to destroy network for sandbox \"bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.020500 containerd[1596]: time="2025-04-30T01:21:36.019655176Z" level=error msg="encountered an error cleaning up failed sandbox \"bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.020639 containerd[1596]: time="2025-04-30T01:21:36.020521840Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4cb765cd-t2ldw,Uid:ce510606-0f1b-4bd3-b5b8-10bbf3c549c3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.020804 kubelet[2977]: E0430 01:21:36.020720 2977 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.020860 kubelet[2977]: E0430 01:21:36.020824 2977 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d4cb765cd-t2ldw" Apr 30 01:21:36.020860 kubelet[2977]: E0430 01:21:36.020855 2977 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d4cb765cd-t2ldw" Apr 30 01:21:36.021138 kubelet[2977]: E0430 01:21:36.020896 2977 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d4cb765cd-t2ldw_calico-apiserver(ce510606-0f1b-4bd3-b5b8-10bbf3c549c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d4cb765cd-t2ldw_calico-apiserver(ce510606-0f1b-4bd3-b5b8-10bbf3c549c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d4cb765cd-t2ldw" podUID="ce510606-0f1b-4bd3-b5b8-10bbf3c549c3" Apr 30 01:21:36.023452 containerd[1596]: time="2025-04-30T01:21:36.023326435Z" level=error msg="Failed to destroy network for sandbox \"9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.024468 containerd[1596]: time="2025-04-30T01:21:36.024402384Z" level=error msg="encountered an error cleaning up failed sandbox \"9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.024559 containerd[1596]: time="2025-04-30T01:21:36.024483106Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-68b69776db-wk29h,Uid:67dae70a-bf89-45d5-a866-8ee292cf8980,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.024803 containerd[1596]: time="2025-04-30T01:21:36.024602349Z" level=error msg="Failed to destroy network for sandbox \"960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.024852 kubelet[2977]: E0430 01:21:36.024821 2977 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.025055 kubelet[2977]: E0430 01:21:36.024872 2977 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68b69776db-wk29h" Apr 30 01:21:36.025055 kubelet[2977]: E0430 01:21:36.024897 2977 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68b69776db-wk29h" Apr 30 01:21:36.025664 kubelet[2977]: E0430 01:21:36.025221 2977 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68b69776db-wk29h_calico-system(67dae70a-bf89-45d5-a866-8ee292cf8980)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68b69776db-wk29h_calico-system(67dae70a-bf89-45d5-a866-8ee292cf8980)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68b69776db-wk29h" podUID="67dae70a-bf89-45d5-a866-8ee292cf8980" Apr 30 01:21:36.025766 containerd[1596]: time="2025-04-30T01:21:36.025552895Z" level=error msg="encountered an error cleaning up failed sandbox \"960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.025766 containerd[1596]: time="2025-04-30T01:21:36.025607856Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7t5nj,Uid:37fed3e1-0881-414f-8dab-a8370cfdc3cb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.026112 kubelet[2977]: E0430 01:21:36.026036 2977 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.026268 kubelet[2977]: E0430 01:21:36.026224 2977 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7t5nj" Apr 30 01:21:36.026484 kubelet[2977]: E0430 01:21:36.026323 2977 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7t5nj" Apr 30 01:21:36.026839 kubelet[2977]: E0430 01:21:36.026399 2977 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7t5nj_kube-system(37fed3e1-0881-414f-8dab-a8370cfdc3cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7t5nj_kube-system(37fed3e1-0881-414f-8dab-a8370cfdc3cb)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7t5nj" podUID="37fed3e1-0881-414f-8dab-a8370cfdc3cb" Apr 30 01:21:36.038878 containerd[1596]: time="2025-04-30T01:21:36.038403600Z" level=error msg="Failed to destroy network for sandbox \"2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.040158 containerd[1596]: time="2025-04-30T01:21:36.040003323Z" level=error msg="encountered an error cleaning up failed sandbox \"2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.040158 containerd[1596]: time="2025-04-30T01:21:36.040101446Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-stlxr,Uid:07f2d83c-63d4-40ec-8d8b-ab44f45ca400,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.040832 kubelet[2977]: E0430 01:21:36.040754 2977 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.040988 kubelet[2977]: E0430 01:21:36.040864 2977 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-stlxr" Apr 30 01:21:36.040988 kubelet[2977]: E0430 01:21:36.040894 2977 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-stlxr" Apr 30 01:21:36.041165 kubelet[2977]: E0430 01:21:36.040987 2977 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-stlxr_kube-system(07f2d83c-63d4-40ec-8d8b-ab44f45ca400)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-stlxr_kube-system(07f2d83c-63d4-40ec-8d8b-ab44f45ca400)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-stlxr" 
podUID="07f2d83c-63d4-40ec-8d8b-ab44f45ca400" Apr 30 01:21:36.177769 containerd[1596]: time="2025-04-30T01:21:36.177510978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rqtjj,Uid:9fa4d4e7-7c7d-4b64-b56b-683ffe29c791,Namespace:calico-system,Attempt:0,}" Apr 30 01:21:36.243507 containerd[1596]: time="2025-04-30T01:21:36.243377868Z" level=error msg="Failed to destroy network for sandbox \"48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.243987 containerd[1596]: time="2025-04-30T01:21:36.243871801Z" level=error msg="encountered an error cleaning up failed sandbox \"48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.243987 containerd[1596]: time="2025-04-30T01:21:36.243942083Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rqtjj,Uid:9fa4d4e7-7c7d-4b64-b56b-683ffe29c791,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.244421 kubelet[2977]: E0430 01:21:36.244355 2977 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.244500 kubelet[2977]: E0430 01:21:36.244433 2977 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rqtjj" Apr 30 01:21:36.244500 kubelet[2977]: E0430 01:21:36.244456 2977 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rqtjj" Apr 30 01:21:36.244561 kubelet[2977]: E0430 01:21:36.244500 2977 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rqtjj_calico-system(9fa4d4e7-7c7d-4b64-b56b-683ffe29c791)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rqtjj_calico-system(9fa4d4e7-7c7d-4b64-b56b-683ffe29c791)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rqtjj" podUID="9fa4d4e7-7c7d-4b64-b56b-683ffe29c791" Apr 30 01:21:36.319420 kubelet[2977]: I0430 01:21:36.318617 2977 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" Apr 30 01:21:36.320768 containerd[1596]: time="2025-04-30T01:21:36.319820402Z" level=info msg="StopPodSandbox for \"2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0\"" Apr 30 01:21:36.320768 containerd[1596]: time="2025-04-30T01:21:36.320594182Z" level=info msg="Ensure that sandbox 2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0 in task-service has been cleanup successfully" Apr 30 01:21:36.321894 kubelet[2977]: I0430 01:21:36.321602 2977 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" Apr 30 01:21:36.323694 containerd[1596]: time="2025-04-30T01:21:36.323634704Z" level=info msg="StopPodSandbox for \"df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6\"" Apr 30 01:21:36.323853 containerd[1596]: time="2025-04-30T01:21:36.323833429Z" level=info msg="Ensure that sandbox df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6 in task-service has been cleanup successfully" Apr 30 01:21:36.327314 kubelet[2977]: I0430 01:21:36.327281 2977 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" Apr 30 01:21:36.329403 containerd[1596]: time="2025-04-30T01:21:36.329369498Z" level=info msg="StopPodSandbox for \"bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410\"" Apr 30 01:21:36.330987 containerd[1596]: time="2025-04-30T01:21:36.330663253Z" level=info msg="Ensure that sandbox bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410 in task-service has been cleanup successfully" Apr 30 01:21:36.333274 kubelet[2977]: I0430 01:21:36.332986 2977 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" Apr 30 01:21:36.335537 containerd[1596]: time="2025-04-30T01:21:36.334945048Z" level=info msg="StopPodSandbox for \"9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99\"" Apr 30 01:21:36.336948 containerd[1596]: time="2025-04-30T01:21:36.336669374Z" level=info msg="Ensure that sandbox 9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99 in task-service has been cleanup successfully" Apr 30 01:21:36.349336 containerd[1596]: time="2025-04-30T01:21:36.349303754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 01:21:36.350985 kubelet[2977]: I0430 01:21:36.350846 2977 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" Apr 30 01:21:36.361460 containerd[1596]: time="2025-04-30T01:21:36.360976987Z" level=info msg="StopPodSandbox for \"960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328\"" Apr 30 01:21:36.369723 containerd[1596]: time="2025-04-30T01:21:36.369673541Z" level=info msg="Ensure that sandbox 960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328 in task-service has been cleanup successfully" Apr 30 01:21:36.389434 kubelet[2977]: I0430 01:21:36.388302 2977 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" Apr 30 01:21:36.395987 containerd[1596]: time="2025-04-30T01:21:36.395928807Z" level=info msg="StopPodSandbox for \"48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab\"" Apr 30 01:21:36.406164 containerd[1596]: time="2025-04-30T01:21:36.405664268Z" level=info msg="Ensure that sandbox 48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab in task-service has been cleanup successfully" Apr 30 01:21:36.472066 containerd[1596]: time="2025-04-30T01:21:36.471916568Z" level=error msg="StopPodSandbox 
for \"960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328\" failed" error="failed to destroy network for sandbox \"960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.474053 kubelet[2977]: E0430 01:21:36.473380 2977 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" Apr 30 01:21:36.477196 kubelet[2977]: E0430 01:21:36.475665 2977 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328"} Apr 30 01:21:36.477196 kubelet[2977]: E0430 01:21:36.477116 2977 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"37fed3e1-0881-414f-8dab-a8370cfdc3cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 01:21:36.477196 kubelet[2977]: E0430 01:21:36.477149 2977 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"37fed3e1-0881-414f-8dab-a8370cfdc3cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7t5nj" podUID="37fed3e1-0881-414f-8dab-a8370cfdc3cb" Apr 30 01:21:36.512820 containerd[1596]: time="2025-04-30T01:21:36.512529140Z" level=error msg="StopPodSandbox for \"2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0\" failed" error="failed to destroy network for sandbox \"2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.514918 kubelet[2977]: E0430 01:21:36.513073 2977 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" Apr 30 01:21:36.514918 kubelet[2977]: E0430 01:21:36.513153 2977 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0"} Apr 30 01:21:36.514918 kubelet[2977]: E0430 01:21:36.513186 2977 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"07f2d83c-63d4-40ec-8d8b-ab44f45ca400\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 01:21:36.514918 kubelet[2977]: E0430 01:21:36.513216 2977 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"07f2d83c-63d4-40ec-8d8b-ab44f45ca400\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-stlxr" podUID="07f2d83c-63d4-40ec-8d8b-ab44f45ca400" Apr 30 01:21:36.520819 containerd[1596]: time="2025-04-30T01:21:36.520667478Z" level=error msg="StopPodSandbox for \"df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6\" failed" error="failed to destroy network for sandbox \"df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.523164 kubelet[2977]: E0430 01:21:36.522183 2977 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" Apr 30 01:21:36.523164 kubelet[2977]: E0430 01:21:36.522238 2977 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6"} Apr 30 01:21:36.523164 kubelet[2977]: E0430 01:21:36.522288 2977 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a72ecee6-7912-44aa-bb57-e01abdb75987\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 01:21:36.523164 kubelet[2977]: E0430 01:21:36.522310 2977 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a72ecee6-7912-44aa-bb57-e01abdb75987\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d4cb765cd-j4ngc" podUID="a72ecee6-7912-44aa-bb57-e01abdb75987" Apr 30 01:21:36.536297 containerd[1596]: time="2025-04-30T01:21:36.536248537Z" level=error msg="StopPodSandbox for \"48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab\" failed" error="failed to destroy network for sandbox \"48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.536708 kubelet[2977]: E0430 01:21:36.536575 2977 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
destroy network for sandbox \"48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" Apr 30 01:21:36.536708 kubelet[2977]: E0430 01:21:36.536633 2977 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab"} Apr 30 01:21:36.536708 kubelet[2977]: E0430 01:21:36.536666 2977 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9fa4d4e7-7c7d-4b64-b56b-683ffe29c791\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 01:21:36.537709 kubelet[2977]: E0430 01:21:36.536888 2977 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9fa4d4e7-7c7d-4b64-b56b-683ffe29c791\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rqtjj" podUID="9fa4d4e7-7c7d-4b64-b56b-683ffe29c791" Apr 30 01:21:36.545149 containerd[1596]: time="2025-04-30T01:21:36.544224111Z" level=error msg="StopPodSandbox for \"9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99\" failed" error="failed to 
destroy network for sandbox \"9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.545375 kubelet[2977]: E0430 01:21:36.544952 2977 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" Apr 30 01:21:36.545375 kubelet[2977]: E0430 01:21:36.545004 2977 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99"} Apr 30 01:21:36.545375 kubelet[2977]: E0430 01:21:36.545242 2977 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"67dae70a-bf89-45d5-a866-8ee292cf8980\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 01:21:36.545375 kubelet[2977]: E0430 01:21:36.545272 2977 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"67dae70a-bf89-45d5-a866-8ee292cf8980\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68b69776db-wk29h" podUID="67dae70a-bf89-45d5-a866-8ee292cf8980" Apr 30 01:21:36.546139 containerd[1596]: time="2025-04-30T01:21:36.545863315Z" level=error msg="StopPodSandbox for \"bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410\" failed" error="failed to destroy network for sandbox \"bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 01:21:36.547486 kubelet[2977]: E0430 01:21:36.546495 2977 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" Apr 30 01:21:36.547486 kubelet[2977]: E0430 01:21:36.547297 2977 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410"} Apr 30 01:21:36.547486 kubelet[2977]: E0430 01:21:36.547386 2977 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ce510606-0f1b-4bd3-b5b8-10bbf3c549c3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 01:21:36.547486 kubelet[2977]: E0430 01:21:36.547429 2977 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ce510606-0f1b-4bd3-b5b8-10bbf3c549c3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d4cb765cd-t2ldw" podUID="ce510606-0f1b-4bd3-b5b8-10bbf3c549c3" Apr 30 01:21:36.821224 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6-shm.mount: Deactivated successfully. Apr 30 01:21:36.821978 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410-shm.mount: Deactivated successfully. Apr 30 01:21:36.822240 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99-shm.mount: Deactivated successfully. Apr 30 01:21:36.822410 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328-shm.mount: Deactivated successfully. Apr 30 01:21:40.973808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3055579583.mount: Deactivated successfully. 
Apr 30 01:21:41.011309 containerd[1596]: time="2025-04-30T01:21:41.010360135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:41.011309 containerd[1596]: time="2025-04-30T01:21:41.011257119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" Apr 30 01:21:41.012430 containerd[1596]: time="2025-04-30T01:21:41.012379029Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:41.016304 containerd[1596]: time="2025-04-30T01:21:41.016201251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:41.017396 containerd[1596]: time="2025-04-30T01:21:41.017355642Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 4.667228065s" Apr 30 01:21:41.017396 containerd[1596]: time="2025-04-30T01:21:41.017395763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" Apr 30 01:21:41.031741 containerd[1596]: time="2025-04-30T01:21:41.031345974Z" level=info msg="CreateContainer within sandbox \"259deebea586cd829b3bb76455a93981d6d4a0a7debb212a614b92dd06f2306f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 01:21:41.054269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1739848697.mount: 
Deactivated successfully. Apr 30 01:21:41.058036 containerd[1596]: time="2025-04-30T01:21:41.057878641Z" level=info msg="CreateContainer within sandbox \"259deebea586cd829b3bb76455a93981d6d4a0a7debb212a614b92dd06f2306f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a4b11a4251d964d65ab625154c902999828410f797b7a7668547e4df7f0569eb\"" Apr 30 01:21:41.059626 containerd[1596]: time="2025-04-30T01:21:41.059583607Z" level=info msg="StartContainer for \"a4b11a4251d964d65ab625154c902999828410f797b7a7668547e4df7f0569eb\"" Apr 30 01:21:41.132067 containerd[1596]: time="2025-04-30T01:21:41.131545165Z" level=info msg="StartContainer for \"a4b11a4251d964d65ab625154c902999828410f797b7a7668547e4df7f0569eb\" returns successfully" Apr 30 01:21:41.253274 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 01:21:41.253409 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Apr 30 01:21:41.443994 kubelet[2977]: I0430 01:21:41.443856 2977 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4w8qb" podStartSLOduration=0.983084259 podStartE2EDuration="13.441843474s" podCreationTimestamp="2025-04-30 01:21:28 +0000 UTC" firstStartedPulling="2025-04-30 01:21:28.559672775 +0000 UTC m=+24.503365210" lastFinishedPulling="2025-04-30 01:21:41.01843199 +0000 UTC m=+36.962124425" observedRunningTime="2025-04-30 01:21:41.438381941 +0000 UTC m=+37.382074376" watchObservedRunningTime="2025-04-30 01:21:41.441843474 +0000 UTC m=+37.385535909" Apr 30 01:21:42.997073 kernel: bpftool[4258]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 01:21:43.210129 systemd-networkd[1245]: vxlan.calico: Link UP Apr 30 01:21:43.210154 systemd-networkd[1245]: vxlan.calico: Gained carrier Apr 30 01:21:45.211426 systemd-networkd[1245]: vxlan.calico: Gained IPv6LL Apr 30 01:21:48.176044 containerd[1596]: time="2025-04-30T01:21:48.175817322Z" level=info 
msg="StopPodSandbox for \"df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6\"" Apr 30 01:21:48.317268 containerd[1596]: 2025-04-30 01:21:48.252 [INFO][4368] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" Apr 30 01:21:48.317268 containerd[1596]: 2025-04-30 01:21:48.253 [INFO][4368] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" iface="eth0" netns="/var/run/netns/cni-41dffccc-bed0-af98-ddcc-89d6db445fe9" Apr 30 01:21:48.317268 containerd[1596]: 2025-04-30 01:21:48.253 [INFO][4368] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" iface="eth0" netns="/var/run/netns/cni-41dffccc-bed0-af98-ddcc-89d6db445fe9" Apr 30 01:21:48.317268 containerd[1596]: 2025-04-30 01:21:48.254 [INFO][4368] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" iface="eth0" netns="/var/run/netns/cni-41dffccc-bed0-af98-ddcc-89d6db445fe9" Apr 30 01:21:48.317268 containerd[1596]: 2025-04-30 01:21:48.254 [INFO][4368] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" Apr 30 01:21:48.317268 containerd[1596]: 2025-04-30 01:21:48.254 [INFO][4368] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" Apr 30 01:21:48.317268 containerd[1596]: 2025-04-30 01:21:48.296 [INFO][4375] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" HandleID="k8s-pod-network.df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--j4ngc-eth0" Apr 30 01:21:48.317268 containerd[1596]: 2025-04-30 01:21:48.296 [INFO][4375] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:21:48.317268 containerd[1596]: 2025-04-30 01:21:48.296 [INFO][4375] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:21:48.317268 containerd[1596]: 2025-04-30 01:21:48.310 [WARNING][4375] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" HandleID="k8s-pod-network.df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--j4ngc-eth0" Apr 30 01:21:48.317268 containerd[1596]: 2025-04-30 01:21:48.310 [INFO][4375] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" HandleID="k8s-pod-network.df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--j4ngc-eth0" Apr 30 01:21:48.317268 containerd[1596]: 2025-04-30 01:21:48.312 [INFO][4375] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:21:48.317268 containerd[1596]: 2025-04-30 01:21:48.315 [INFO][4368] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" Apr 30 01:21:48.319374 containerd[1596]: time="2025-04-30T01:21:48.318476807Z" level=info msg="TearDown network for sandbox \"df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6\" successfully" Apr 30 01:21:48.319374 containerd[1596]: time="2025-04-30T01:21:48.318512888Z" level=info msg="StopPodSandbox for \"df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6\" returns successfully" Apr 30 01:21:48.320878 systemd[1]: run-netns-cni\x2d41dffccc\x2dbed0\x2daf98\x2dddcc\x2d89d6db445fe9.mount: Deactivated successfully. 
Apr 30 01:21:48.338479 containerd[1596]: time="2025-04-30T01:21:48.338381973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4cb765cd-j4ngc,Uid:a72ecee6-7912-44aa-bb57-e01abdb75987,Namespace:calico-apiserver,Attempt:1,}" Apr 30 01:21:48.531119 systemd-networkd[1245]: cali00aa0070d57: Link UP Apr 30 01:21:48.531376 systemd-networkd[1245]: cali00aa0070d57: Gained carrier Apr 30 01:21:48.555605 containerd[1596]: 2025-04-30 01:21:48.410 [INFO][4382] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--j4ngc-eth0 calico-apiserver-6d4cb765cd- calico-apiserver a72ecee6-7912-44aa-bb57-e01abdb75987 741 0 2025-04-30 01:21:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d4cb765cd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-5-cafd7e9e76 calico-apiserver-6d4cb765cd-j4ngc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali00aa0070d57 [] []}} ContainerID="77b00cfc654908d83994648009dbd071a828425eee2e58052ca2cebac33ef8f8" Namespace="calico-apiserver" Pod="calico-apiserver-6d4cb765cd-j4ngc" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--j4ngc-" Apr 30 01:21:48.555605 containerd[1596]: 2025-04-30 01:21:48.410 [INFO][4382] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="77b00cfc654908d83994648009dbd071a828425eee2e58052ca2cebac33ef8f8" Namespace="calico-apiserver" Pod="calico-apiserver-6d4cb765cd-j4ngc" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--j4ngc-eth0" Apr 30 01:21:48.555605 containerd[1596]: 2025-04-30 01:21:48.445 [INFO][4394] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="77b00cfc654908d83994648009dbd071a828425eee2e58052ca2cebac33ef8f8" HandleID="k8s-pod-network.77b00cfc654908d83994648009dbd071a828425eee2e58052ca2cebac33ef8f8" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--j4ngc-eth0" Apr 30 01:21:48.555605 containerd[1596]: 2025-04-30 01:21:48.461 [INFO][4394] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="77b00cfc654908d83994648009dbd071a828425eee2e58052ca2cebac33ef8f8" HandleID="k8s-pod-network.77b00cfc654908d83994648009dbd071a828425eee2e58052ca2cebac33ef8f8" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--j4ngc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004d2ae0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-5-cafd7e9e76", "pod":"calico-apiserver-6d4cb765cd-j4ngc", "timestamp":"2025-04-30 01:21:48.445550801 +0000 UTC"}, Hostname:"ci-4081-3-3-5-cafd7e9e76", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 01:21:48.555605 containerd[1596]: 2025-04-30 01:21:48.461 [INFO][4394] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:21:48.555605 containerd[1596]: 2025-04-30 01:21:48.462 [INFO][4394] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 01:21:48.555605 containerd[1596]: 2025-04-30 01:21:48.462 [INFO][4394] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-5-cafd7e9e76' Apr 30 01:21:48.555605 containerd[1596]: 2025-04-30 01:21:48.465 [INFO][4394] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.77b00cfc654908d83994648009dbd071a828425eee2e58052ca2cebac33ef8f8" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:48.555605 containerd[1596]: 2025-04-30 01:21:48.472 [INFO][4394] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:48.555605 containerd[1596]: 2025-04-30 01:21:48.479 [INFO][4394] ipam/ipam.go 489: Trying affinity for 192.168.112.128/26 host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:48.555605 containerd[1596]: 2025-04-30 01:21:48.483 [INFO][4394] ipam/ipam.go 155: Attempting to load block cidr=192.168.112.128/26 host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:48.555605 containerd[1596]: 2025-04-30 01:21:48.489 [INFO][4394] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:48.555605 containerd[1596]: 2025-04-30 01:21:48.489 [INFO][4394] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.77b00cfc654908d83994648009dbd071a828425eee2e58052ca2cebac33ef8f8" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:48.555605 containerd[1596]: 2025-04-30 01:21:48.492 [INFO][4394] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.77b00cfc654908d83994648009dbd071a828425eee2e58052ca2cebac33ef8f8 Apr 30 01:21:48.555605 containerd[1596]: 2025-04-30 01:21:48.499 [INFO][4394] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.77b00cfc654908d83994648009dbd071a828425eee2e58052ca2cebac33ef8f8" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:48.555605 containerd[1596]: 2025-04-30 01:21:48.512 [INFO][4394] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.112.129/26] block=192.168.112.128/26 handle="k8s-pod-network.77b00cfc654908d83994648009dbd071a828425eee2e58052ca2cebac33ef8f8" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:48.555605 containerd[1596]: 2025-04-30 01:21:48.512 [INFO][4394] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.112.129/26] handle="k8s-pod-network.77b00cfc654908d83994648009dbd071a828425eee2e58052ca2cebac33ef8f8" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:48.555605 containerd[1596]: 2025-04-30 01:21:48.512 [INFO][4394] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:21:48.555605 containerd[1596]: 2025-04-30 01:21:48.512 [INFO][4394] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.112.129/26] IPv6=[] ContainerID="77b00cfc654908d83994648009dbd071a828425eee2e58052ca2cebac33ef8f8" HandleID="k8s-pod-network.77b00cfc654908d83994648009dbd071a828425eee2e58052ca2cebac33ef8f8" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--j4ngc-eth0" Apr 30 01:21:48.556327 containerd[1596]: 2025-04-30 01:21:48.524 [INFO][4382] cni-plugin/k8s.go 386: Populated endpoint ContainerID="77b00cfc654908d83994648009dbd071a828425eee2e58052ca2cebac33ef8f8" Namespace="calico-apiserver" Pod="calico-apiserver-6d4cb765cd-j4ngc" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--j4ngc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--j4ngc-eth0", GenerateName:"calico-apiserver-6d4cb765cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"a72ecee6-7912-44aa-bb57-e01abdb75987", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4cb765cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-5-cafd7e9e76", ContainerID:"", Pod:"calico-apiserver-6d4cb765cd-j4ngc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali00aa0070d57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:21:48.556327 containerd[1596]: 2025-04-30 01:21:48.525 [INFO][4382] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.112.129/32] ContainerID="77b00cfc654908d83994648009dbd071a828425eee2e58052ca2cebac33ef8f8" Namespace="calico-apiserver" Pod="calico-apiserver-6d4cb765cd-j4ngc" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--j4ngc-eth0" Apr 30 01:21:48.556327 containerd[1596]: 2025-04-30 01:21:48.525 [INFO][4382] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali00aa0070d57 ContainerID="77b00cfc654908d83994648009dbd071a828425eee2e58052ca2cebac33ef8f8" Namespace="calico-apiserver" Pod="calico-apiserver-6d4cb765cd-j4ngc" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--j4ngc-eth0" Apr 30 01:21:48.556327 containerd[1596]: 2025-04-30 01:21:48.533 [INFO][4382] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="77b00cfc654908d83994648009dbd071a828425eee2e58052ca2cebac33ef8f8" Namespace="calico-apiserver" Pod="calico-apiserver-6d4cb765cd-j4ngc" 
WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--j4ngc-eth0" Apr 30 01:21:48.556327 containerd[1596]: 2025-04-30 01:21:48.534 [INFO][4382] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="77b00cfc654908d83994648009dbd071a828425eee2e58052ca2cebac33ef8f8" Namespace="calico-apiserver" Pod="calico-apiserver-6d4cb765cd-j4ngc" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--j4ngc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--j4ngc-eth0", GenerateName:"calico-apiserver-6d4cb765cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"a72ecee6-7912-44aa-bb57-e01abdb75987", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4cb765cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-5-cafd7e9e76", ContainerID:"77b00cfc654908d83994648009dbd071a828425eee2e58052ca2cebac33ef8f8", Pod:"calico-apiserver-6d4cb765cd-j4ngc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali00aa0070d57", MAC:"ea:97:0a:da:81:a2", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:21:48.556327 containerd[1596]: 2025-04-30 01:21:48.548 [INFO][4382] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="77b00cfc654908d83994648009dbd071a828425eee2e58052ca2cebac33ef8f8" Namespace="calico-apiserver" Pod="calico-apiserver-6d4cb765cd-j4ngc" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--j4ngc-eth0" Apr 30 01:21:48.593618 containerd[1596]: time="2025-04-30T01:21:48.593300541Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:21:48.593618 containerd[1596]: time="2025-04-30T01:21:48.593364823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:21:48.593618 containerd[1596]: time="2025-04-30T01:21:48.593381023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:48.593618 containerd[1596]: time="2025-04-30T01:21:48.593521387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:48.652478 containerd[1596]: time="2025-04-30T01:21:48.652436662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4cb765cd-j4ngc,Uid:a72ecee6-7912-44aa-bb57-e01abdb75987,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"77b00cfc654908d83994648009dbd071a828425eee2e58052ca2cebac33ef8f8\"" Apr 30 01:21:48.656208 containerd[1596]: time="2025-04-30T01:21:48.656169200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 01:21:49.173695 containerd[1596]: time="2025-04-30T01:21:49.173646093Z" level=info msg="StopPodSandbox for \"48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab\"" Apr 30 01:21:49.174305 containerd[1596]: time="2025-04-30T01:21:49.174067104Z" level=info msg="StopPodSandbox for \"9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99\"" Apr 30 01:21:49.176904 containerd[1596]: time="2025-04-30T01:21:49.176843937Z" level=info msg="StopPodSandbox for \"960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328\"" Apr 30 01:21:49.177674 containerd[1596]: time="2025-04-30T01:21:49.177178146Z" level=info msg="StopPodSandbox for \"bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410\"" Apr 30 01:21:49.381116 containerd[1596]: 2025-04-30 01:21:49.285 [INFO][4514] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" Apr 30 01:21:49.381116 containerd[1596]: 2025-04-30 01:21:49.285 [INFO][4514] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" iface="eth0" netns="/var/run/netns/cni-b886e057-eeac-3f6b-4f7e-167f362a1d0f" Apr 30 01:21:49.381116 containerd[1596]: 2025-04-30 01:21:49.287 [INFO][4514] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" iface="eth0" netns="/var/run/netns/cni-b886e057-eeac-3f6b-4f7e-167f362a1d0f" Apr 30 01:21:49.381116 containerd[1596]: 2025-04-30 01:21:49.290 [INFO][4514] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" iface="eth0" netns="/var/run/netns/cni-b886e057-eeac-3f6b-4f7e-167f362a1d0f" Apr 30 01:21:49.381116 containerd[1596]: 2025-04-30 01:21:49.290 [INFO][4514] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" Apr 30 01:21:49.381116 containerd[1596]: 2025-04-30 01:21:49.290 [INFO][4514] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" Apr 30 01:21:49.381116 containerd[1596]: 2025-04-30 01:21:49.347 [INFO][4545] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" HandleID="k8s-pod-network.48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-csi--node--driver--rqtjj-eth0" Apr 30 01:21:49.381116 containerd[1596]: 2025-04-30 01:21:49.347 [INFO][4545] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:21:49.381116 containerd[1596]: 2025-04-30 01:21:49.350 [INFO][4545] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:21:49.381116 containerd[1596]: 2025-04-30 01:21:49.369 [WARNING][4545] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" HandleID="k8s-pod-network.48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-csi--node--driver--rqtjj-eth0" Apr 30 01:21:49.381116 containerd[1596]: 2025-04-30 01:21:49.369 [INFO][4545] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" HandleID="k8s-pod-network.48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-csi--node--driver--rqtjj-eth0" Apr 30 01:21:49.381116 containerd[1596]: 2025-04-30 01:21:49.375 [INFO][4545] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:21:49.381116 containerd[1596]: 2025-04-30 01:21:49.377 [INFO][4514] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" Apr 30 01:21:49.382604 containerd[1596]: time="2025-04-30T01:21:49.382057627Z" level=info msg="TearDown network for sandbox \"48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab\" successfully" Apr 30 01:21:49.382604 containerd[1596]: time="2025-04-30T01:21:49.382086987Z" level=info msg="StopPodSandbox for \"48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab\" returns successfully" Apr 30 01:21:49.386053 containerd[1596]: time="2025-04-30T01:21:49.385682882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rqtjj,Uid:9fa4d4e7-7c7d-4b64-b56b-683ffe29c791,Namespace:calico-system,Attempt:1,}" Apr 30 01:21:49.386394 systemd[1]: run-netns-cni\x2db886e057\x2deeac\x2d3f6b\x2d4f7e\x2d167f362a1d0f.mount: Deactivated successfully. 
Apr 30 01:21:49.407175 containerd[1596]: 2025-04-30 01:21:49.286 [INFO][4530] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" Apr 30 01:21:49.407175 containerd[1596]: 2025-04-30 01:21:49.288 [INFO][4530] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" iface="eth0" netns="/var/run/netns/cni-b208e967-f3a4-7f19-c704-0e91b71bc945" Apr 30 01:21:49.407175 containerd[1596]: 2025-04-30 01:21:49.288 [INFO][4530] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" iface="eth0" netns="/var/run/netns/cni-b208e967-f3a4-7f19-c704-0e91b71bc945" Apr 30 01:21:49.407175 containerd[1596]: 2025-04-30 01:21:49.290 [INFO][4530] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" iface="eth0" netns="/var/run/netns/cni-b208e967-f3a4-7f19-c704-0e91b71bc945" Apr 30 01:21:49.407175 containerd[1596]: 2025-04-30 01:21:49.290 [INFO][4530] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" Apr 30 01:21:49.407175 containerd[1596]: 2025-04-30 01:21:49.290 [INFO][4530] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" Apr 30 01:21:49.407175 containerd[1596]: 2025-04-30 01:21:49.365 [INFO][4544] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" HandleID="k8s-pod-network.bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--t2ldw-eth0" Apr 30 01:21:49.407175 containerd[1596]: 2025-04-30 01:21:49.365 
[INFO][4544] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:21:49.407175 containerd[1596]: 2025-04-30 01:21:49.375 [INFO][4544] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:21:49.407175 containerd[1596]: 2025-04-30 01:21:49.391 [WARNING][4544] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" HandleID="k8s-pod-network.bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--t2ldw-eth0" Apr 30 01:21:49.407175 containerd[1596]: 2025-04-30 01:21:49.391 [INFO][4544] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" HandleID="k8s-pod-network.bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--t2ldw-eth0" Apr 30 01:21:49.407175 containerd[1596]: 2025-04-30 01:21:49.395 [INFO][4544] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:21:49.407175 containerd[1596]: 2025-04-30 01:21:49.400 [INFO][4530] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" Apr 30 01:21:49.407175 containerd[1596]: time="2025-04-30T01:21:49.405524605Z" level=info msg="TearDown network for sandbox \"bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410\" successfully" Apr 30 01:21:49.407175 containerd[1596]: time="2025-04-30T01:21:49.405558326Z" level=info msg="StopPodSandbox for \"bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410\" returns successfully" Apr 30 01:21:49.410304 containerd[1596]: time="2025-04-30T01:21:49.410268570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4cb765cd-t2ldw,Uid:ce510606-0f1b-4bd3-b5b8-10bbf3c549c3,Namespace:calico-apiserver,Attempt:1,}" Apr 30 01:21:49.410304 systemd[1]: run-netns-cni\x2db208e967\x2df3a4\x2d7f19\x2dc704\x2d0e91b71bc945.mount: Deactivated successfully. Apr 30 01:21:49.430911 containerd[1596]: 2025-04-30 01:21:49.317 [INFO][4510] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" Apr 30 01:21:49.430911 containerd[1596]: 2025-04-30 01:21:49.319 [INFO][4510] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" iface="eth0" netns="/var/run/netns/cni-94d255d4-8903-7cd4-1df1-83c692a0daf2" Apr 30 01:21:49.430911 containerd[1596]: 2025-04-30 01:21:49.319 [INFO][4510] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" iface="eth0" netns="/var/run/netns/cni-94d255d4-8903-7cd4-1df1-83c692a0daf2" Apr 30 01:21:49.430911 containerd[1596]: 2025-04-30 01:21:49.320 [INFO][4510] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" iface="eth0" netns="/var/run/netns/cni-94d255d4-8903-7cd4-1df1-83c692a0daf2" Apr 30 01:21:49.430911 containerd[1596]: 2025-04-30 01:21:49.320 [INFO][4510] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" Apr 30 01:21:49.430911 containerd[1596]: 2025-04-30 01:21:49.320 [INFO][4510] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" Apr 30 01:21:49.430911 containerd[1596]: 2025-04-30 01:21:49.370 [INFO][4555] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" HandleID="k8s-pod-network.9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--kube--controllers--68b69776db--wk29h-eth0" Apr 30 01:21:49.430911 containerd[1596]: 2025-04-30 01:21:49.371 [INFO][4555] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:21:49.430911 containerd[1596]: 2025-04-30 01:21:49.395 [INFO][4555] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:21:49.430911 containerd[1596]: 2025-04-30 01:21:49.415 [WARNING][4555] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" HandleID="k8s-pod-network.9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--kube--controllers--68b69776db--wk29h-eth0" Apr 30 01:21:49.430911 containerd[1596]: 2025-04-30 01:21:49.415 [INFO][4555] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" HandleID="k8s-pod-network.9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--kube--controllers--68b69776db--wk29h-eth0" Apr 30 01:21:49.430911 containerd[1596]: 2025-04-30 01:21:49.419 [INFO][4555] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:21:49.430911 containerd[1596]: 2025-04-30 01:21:49.427 [INFO][4510] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" Apr 30 01:21:49.434247 containerd[1596]: time="2025-04-30T01:21:49.434095959Z" level=info msg="TearDown network for sandbox \"9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99\" successfully" Apr 30 01:21:49.434247 containerd[1596]: time="2025-04-30T01:21:49.434148520Z" level=info msg="StopPodSandbox for \"9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99\" returns successfully" Apr 30 01:21:49.439283 systemd[1]: run-netns-cni\x2d94d255d4\x2d8903\x2d7cd4\x2d1df1\x2d83c692a0daf2.mount: Deactivated successfully. 
Apr 30 01:21:49.440957 containerd[1596]: time="2025-04-30T01:21:49.440923539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68b69776db-wk29h,Uid:67dae70a-bf89-45d5-a866-8ee292cf8980,Namespace:calico-system,Attempt:1,}" Apr 30 01:21:49.468344 containerd[1596]: 2025-04-30 01:21:49.317 [INFO][4521] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" Apr 30 01:21:49.468344 containerd[1596]: 2025-04-30 01:21:49.317 [INFO][4521] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" iface="eth0" netns="/var/run/netns/cni-ea908d6c-844f-23a2-488d-61704eb8b2b3" Apr 30 01:21:49.468344 containerd[1596]: 2025-04-30 01:21:49.318 [INFO][4521] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" iface="eth0" netns="/var/run/netns/cni-ea908d6c-844f-23a2-488d-61704eb8b2b3" Apr 30 01:21:49.468344 containerd[1596]: 2025-04-30 01:21:49.319 [INFO][4521] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" iface="eth0" netns="/var/run/netns/cni-ea908d6c-844f-23a2-488d-61704eb8b2b3" Apr 30 01:21:49.468344 containerd[1596]: 2025-04-30 01:21:49.319 [INFO][4521] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" Apr 30 01:21:49.468344 containerd[1596]: 2025-04-30 01:21:49.319 [INFO][4521] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" Apr 30 01:21:49.468344 containerd[1596]: 2025-04-30 01:21:49.374 [INFO][4553] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" HandleID="k8s-pod-network.960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--7t5nj-eth0" Apr 30 01:21:49.468344 containerd[1596]: 2025-04-30 01:21:49.374 [INFO][4553] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:21:49.468344 containerd[1596]: 2025-04-30 01:21:49.424 [INFO][4553] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:21:49.468344 containerd[1596]: 2025-04-30 01:21:49.449 [WARNING][4553] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" HandleID="k8s-pod-network.960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--7t5nj-eth0" Apr 30 01:21:49.468344 containerd[1596]: 2025-04-30 01:21:49.449 [INFO][4553] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" HandleID="k8s-pod-network.960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--7t5nj-eth0" Apr 30 01:21:49.468344 containerd[1596]: 2025-04-30 01:21:49.454 [INFO][4553] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:21:49.468344 containerd[1596]: 2025-04-30 01:21:49.463 [INFO][4521] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" Apr 30 01:21:49.468984 containerd[1596]: time="2025-04-30T01:21:49.468523626Z" level=info msg="TearDown network for sandbox \"960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328\" successfully" Apr 30 01:21:49.468984 containerd[1596]: time="2025-04-30T01:21:49.468549747Z" level=info msg="StopPodSandbox for \"960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328\" returns successfully" Apr 30 01:21:49.469513 containerd[1596]: time="2025-04-30T01:21:49.469224165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7t5nj,Uid:37fed3e1-0881-414f-8dab-a8370cfdc3cb,Namespace:kube-system,Attempt:1,}" Apr 30 01:21:49.692506 systemd-networkd[1245]: cali00aa0070d57: Gained IPv6LL Apr 30 01:21:49.751424 systemd-networkd[1245]: cali260860ac8f1: Link UP Apr 30 01:21:49.752742 systemd-networkd[1245]: cali260860ac8f1: Gained carrier Apr 30 01:21:49.797212 containerd[1596]: 2025-04-30 01:21:49.503 [INFO][4571] cni-plugin/plugin.go 340: Calico CNI found existing 
endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--5--cafd7e9e76-k8s-csi--node--driver--rqtjj-eth0 csi-node-driver- calico-system 9fa4d4e7-7c7d-4b64-b56b-683ffe29c791 753 0 2025-04-30 01:21:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-3-5-cafd7e9e76 csi-node-driver-rqtjj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali260860ac8f1 [] []}} ContainerID="04da888daa6363c236b4519c051dd2d51bf0f4cda8a2e258b4fa20f3ddec8f3a" Namespace="calico-system" Pod="csi-node-driver-rqtjj" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-csi--node--driver--rqtjj-" Apr 30 01:21:49.797212 containerd[1596]: 2025-04-30 01:21:49.503 [INFO][4571] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="04da888daa6363c236b4519c051dd2d51bf0f4cda8a2e258b4fa20f3ddec8f3a" Namespace="calico-system" Pod="csi-node-driver-rqtjj" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-csi--node--driver--rqtjj-eth0" Apr 30 01:21:49.797212 containerd[1596]: 2025-04-30 01:21:49.633 [INFO][4622] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04da888daa6363c236b4519c051dd2d51bf0f4cda8a2e258b4fa20f3ddec8f3a" HandleID="k8s-pod-network.04da888daa6363c236b4519c051dd2d51bf0f4cda8a2e258b4fa20f3ddec8f3a" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-csi--node--driver--rqtjj-eth0" Apr 30 01:21:49.797212 containerd[1596]: 2025-04-30 01:21:49.669 [INFO][4622] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="04da888daa6363c236b4519c051dd2d51bf0f4cda8a2e258b4fa20f3ddec8f3a" HandleID="k8s-pod-network.04da888daa6363c236b4519c051dd2d51bf0f4cda8a2e258b4fa20f3ddec8f3a" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-csi--node--driver--rqtjj-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031a760), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-5-cafd7e9e76", "pod":"csi-node-driver-rqtjj", "timestamp":"2025-04-30 01:21:49.633486575 +0000 UTC"}, Hostname:"ci-4081-3-3-5-cafd7e9e76", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 01:21:49.797212 containerd[1596]: 2025-04-30 01:21:49.671 [INFO][4622] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:21:49.797212 containerd[1596]: 2025-04-30 01:21:49.673 [INFO][4622] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:21:49.797212 containerd[1596]: 2025-04-30 01:21:49.673 [INFO][4622] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-5-cafd7e9e76' Apr 30 01:21:49.797212 containerd[1596]: 2025-04-30 01:21:49.679 [INFO][4622] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.04da888daa6363c236b4519c051dd2d51bf0f4cda8a2e258b4fa20f3ddec8f3a" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:49.797212 containerd[1596]: 2025-04-30 01:21:49.689 [INFO][4622] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:49.797212 containerd[1596]: 2025-04-30 01:21:49.707 [INFO][4622] ipam/ipam.go 489: Trying affinity for 192.168.112.128/26 host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:49.797212 containerd[1596]: 2025-04-30 01:21:49.712 [INFO][4622] ipam/ipam.go 155: Attempting to load block cidr=192.168.112.128/26 host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:49.797212 containerd[1596]: 2025-04-30 01:21:49.716 [INFO][4622] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:49.797212 containerd[1596]: 2025-04-30 01:21:49.716 [INFO][4622] ipam/ipam.go 1180: 
Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.04da888daa6363c236b4519c051dd2d51bf0f4cda8a2e258b4fa20f3ddec8f3a" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:49.797212 containerd[1596]: 2025-04-30 01:21:49.719 [INFO][4622] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.04da888daa6363c236b4519c051dd2d51bf0f4cda8a2e258b4fa20f3ddec8f3a Apr 30 01:21:49.797212 containerd[1596]: 2025-04-30 01:21:49.727 [INFO][4622] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.04da888daa6363c236b4519c051dd2d51bf0f4cda8a2e258b4fa20f3ddec8f3a" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:49.797212 containerd[1596]: 2025-04-30 01:21:49.737 [INFO][4622] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.112.130/26] block=192.168.112.128/26 handle="k8s-pod-network.04da888daa6363c236b4519c051dd2d51bf0f4cda8a2e258b4fa20f3ddec8f3a" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:49.797212 containerd[1596]: 2025-04-30 01:21:49.737 [INFO][4622] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.112.130/26] handle="k8s-pod-network.04da888daa6363c236b4519c051dd2d51bf0f4cda8a2e258b4fa20f3ddec8f3a" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:49.797212 containerd[1596]: 2025-04-30 01:21:49.737 [INFO][4622] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 01:21:49.797212 containerd[1596]: 2025-04-30 01:21:49.737 [INFO][4622] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.112.130/26] IPv6=[] ContainerID="04da888daa6363c236b4519c051dd2d51bf0f4cda8a2e258b4fa20f3ddec8f3a" HandleID="k8s-pod-network.04da888daa6363c236b4519c051dd2d51bf0f4cda8a2e258b4fa20f3ddec8f3a" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-csi--node--driver--rqtjj-eth0" Apr 30 01:21:49.797816 containerd[1596]: 2025-04-30 01:21:49.746 [INFO][4571] cni-plugin/k8s.go 386: Populated endpoint ContainerID="04da888daa6363c236b4519c051dd2d51bf0f4cda8a2e258b4fa20f3ddec8f3a" Namespace="calico-system" Pod="csi-node-driver-rqtjj" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-csi--node--driver--rqtjj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--5--cafd7e9e76-k8s-csi--node--driver--rqtjj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9fa4d4e7-7c7d-4b64-b56b-683ffe29c791", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-5-cafd7e9e76", ContainerID:"", Pod:"csi-node-driver-rqtjj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.112.130/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali260860ac8f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:21:49.797816 containerd[1596]: 2025-04-30 01:21:49.746 [INFO][4571] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.112.130/32] ContainerID="04da888daa6363c236b4519c051dd2d51bf0f4cda8a2e258b4fa20f3ddec8f3a" Namespace="calico-system" Pod="csi-node-driver-rqtjj" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-csi--node--driver--rqtjj-eth0" Apr 30 01:21:49.797816 containerd[1596]: 2025-04-30 01:21:49.746 [INFO][4571] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali260860ac8f1 ContainerID="04da888daa6363c236b4519c051dd2d51bf0f4cda8a2e258b4fa20f3ddec8f3a" Namespace="calico-system" Pod="csi-node-driver-rqtjj" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-csi--node--driver--rqtjj-eth0" Apr 30 01:21:49.797816 containerd[1596]: 2025-04-30 01:21:49.752 [INFO][4571] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="04da888daa6363c236b4519c051dd2d51bf0f4cda8a2e258b4fa20f3ddec8f3a" Namespace="calico-system" Pod="csi-node-driver-rqtjj" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-csi--node--driver--rqtjj-eth0" Apr 30 01:21:49.797816 containerd[1596]: 2025-04-30 01:21:49.755 [INFO][4571] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="04da888daa6363c236b4519c051dd2d51bf0f4cda8a2e258b4fa20f3ddec8f3a" Namespace="calico-system" Pod="csi-node-driver-rqtjj" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-csi--node--driver--rqtjj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--5--cafd7e9e76-k8s-csi--node--driver--rqtjj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"9fa4d4e7-7c7d-4b64-b56b-683ffe29c791", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-5-cafd7e9e76", ContainerID:"04da888daa6363c236b4519c051dd2d51bf0f4cda8a2e258b4fa20f3ddec8f3a", Pod:"csi-node-driver-rqtjj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.112.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali260860ac8f1", MAC:"1a:3b:35:02:ef:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:21:49.797816 containerd[1596]: 2025-04-30 01:21:49.785 [INFO][4571] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="04da888daa6363c236b4519c051dd2d51bf0f4cda8a2e258b4fa20f3ddec8f3a" Namespace="calico-system" Pod="csi-node-driver-rqtjj" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-csi--node--driver--rqtjj-eth0" Apr 30 01:21:49.826467 containerd[1596]: time="2025-04-30T01:21:49.826243096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:21:49.826690 containerd[1596]: time="2025-04-30T01:21:49.826498543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:21:49.826690 containerd[1596]: time="2025-04-30T01:21:49.826527904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:49.826690 containerd[1596]: time="2025-04-30T01:21:49.826641507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:49.865834 systemd-networkd[1245]: cali9906271b199: Link UP Apr 30 01:21:49.865968 systemd-networkd[1245]: cali9906271b199: Gained carrier Apr 30 01:21:49.891878 containerd[1596]: 2025-04-30 01:21:49.622 [INFO][4583] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--t2ldw-eth0 calico-apiserver-6d4cb765cd- calico-apiserver ce510606-0f1b-4bd3-b5b8-10bbf3c549c3 752 0 2025-04-30 01:21:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d4cb765cd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-5-cafd7e9e76 calico-apiserver-6d4cb765cd-t2ldw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9906271b199 [] []}} ContainerID="3ced2ecbf6420e41a52bdb34038ec244682dc688f69ba3f3acafae2cbd05c6d9" Namespace="calico-apiserver" Pod="calico-apiserver-6d4cb765cd-t2ldw" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--t2ldw-" Apr 30 01:21:49.891878 containerd[1596]: 2025-04-30 01:21:49.623 [INFO][4583] cni-plugin/k8s.go 77: Extracted 
identifiers for CmdAddK8s ContainerID="3ced2ecbf6420e41a52bdb34038ec244682dc688f69ba3f3acafae2cbd05c6d9" Namespace="calico-apiserver" Pod="calico-apiserver-6d4cb765cd-t2ldw" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--t2ldw-eth0" Apr 30 01:21:49.891878 containerd[1596]: 2025-04-30 01:21:49.747 [INFO][4633] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ced2ecbf6420e41a52bdb34038ec244682dc688f69ba3f3acafae2cbd05c6d9" HandleID="k8s-pod-network.3ced2ecbf6420e41a52bdb34038ec244682dc688f69ba3f3acafae2cbd05c6d9" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--t2ldw-eth0" Apr 30 01:21:49.891878 containerd[1596]: 2025-04-30 01:21:49.794 [INFO][4633] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3ced2ecbf6420e41a52bdb34038ec244682dc688f69ba3f3acafae2cbd05c6d9" HandleID="k8s-pod-network.3ced2ecbf6420e41a52bdb34038ec244682dc688f69ba3f3acafae2cbd05c6d9" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--t2ldw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2ed0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-5-cafd7e9e76", "pod":"calico-apiserver-6d4cb765cd-t2ldw", "timestamp":"2025-04-30 01:21:49.747212373 +0000 UTC"}, Hostname:"ci-4081-3-3-5-cafd7e9e76", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 01:21:49.891878 containerd[1596]: 2025-04-30 01:21:49.794 [INFO][4633] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:21:49.891878 containerd[1596]: 2025-04-30 01:21:49.794 [INFO][4633] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 01:21:49.891878 containerd[1596]: 2025-04-30 01:21:49.794 [INFO][4633] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-5-cafd7e9e76' Apr 30 01:21:49.891878 containerd[1596]: 2025-04-30 01:21:49.800 [INFO][4633] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3ced2ecbf6420e41a52bdb34038ec244682dc688f69ba3f3acafae2cbd05c6d9" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:49.891878 containerd[1596]: 2025-04-30 01:21:49.812 [INFO][4633] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:49.891878 containerd[1596]: 2025-04-30 01:21:49.822 [INFO][4633] ipam/ipam.go 489: Trying affinity for 192.168.112.128/26 host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:49.891878 containerd[1596]: 2025-04-30 01:21:49.824 [INFO][4633] ipam/ipam.go 155: Attempting to load block cidr=192.168.112.128/26 host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:49.891878 containerd[1596]: 2025-04-30 01:21:49.828 [INFO][4633] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:49.891878 containerd[1596]: 2025-04-30 01:21:49.829 [INFO][4633] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.3ced2ecbf6420e41a52bdb34038ec244682dc688f69ba3f3acafae2cbd05c6d9" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:49.891878 containerd[1596]: 2025-04-30 01:21:49.832 [INFO][4633] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3ced2ecbf6420e41a52bdb34038ec244682dc688f69ba3f3acafae2cbd05c6d9 Apr 30 01:21:49.891878 containerd[1596]: 2025-04-30 01:21:49.839 [INFO][4633] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.3ced2ecbf6420e41a52bdb34038ec244682dc688f69ba3f3acafae2cbd05c6d9" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:49.891878 containerd[1596]: 2025-04-30 01:21:49.854 [INFO][4633] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.112.131/26] block=192.168.112.128/26 handle="k8s-pod-network.3ced2ecbf6420e41a52bdb34038ec244682dc688f69ba3f3acafae2cbd05c6d9" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:49.891878 containerd[1596]: 2025-04-30 01:21:49.854 [INFO][4633] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.112.131/26] handle="k8s-pod-network.3ced2ecbf6420e41a52bdb34038ec244682dc688f69ba3f3acafae2cbd05c6d9" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:49.891878 containerd[1596]: 2025-04-30 01:21:49.855 [INFO][4633] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:21:49.891878 containerd[1596]: 2025-04-30 01:21:49.855 [INFO][4633] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.112.131/26] IPv6=[] ContainerID="3ced2ecbf6420e41a52bdb34038ec244682dc688f69ba3f3acafae2cbd05c6d9" HandleID="k8s-pod-network.3ced2ecbf6420e41a52bdb34038ec244682dc688f69ba3f3acafae2cbd05c6d9" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--t2ldw-eth0" Apr 30 01:21:49.893125 containerd[1596]: 2025-04-30 01:21:49.860 [INFO][4583] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3ced2ecbf6420e41a52bdb34038ec244682dc688f69ba3f3acafae2cbd05c6d9" Namespace="calico-apiserver" Pod="calico-apiserver-6d4cb765cd-t2ldw" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--t2ldw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--t2ldw-eth0", GenerateName:"calico-apiserver-6d4cb765cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce510606-0f1b-4bd3-b5b8-10bbf3c549c3", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4cb765cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-5-cafd7e9e76", ContainerID:"", Pod:"calico-apiserver-6d4cb765cd-t2ldw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9906271b199", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:21:49.893125 containerd[1596]: 2025-04-30 01:21:49.861 [INFO][4583] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.112.131/32] ContainerID="3ced2ecbf6420e41a52bdb34038ec244682dc688f69ba3f3acafae2cbd05c6d9" Namespace="calico-apiserver" Pod="calico-apiserver-6d4cb765cd-t2ldw" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--t2ldw-eth0" Apr 30 01:21:49.893125 containerd[1596]: 2025-04-30 01:21:49.861 [INFO][4583] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9906271b199 ContainerID="3ced2ecbf6420e41a52bdb34038ec244682dc688f69ba3f3acafae2cbd05c6d9" Namespace="calico-apiserver" Pod="calico-apiserver-6d4cb765cd-t2ldw" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--t2ldw-eth0" Apr 30 01:21:49.893125 containerd[1596]: 2025-04-30 01:21:49.863 [INFO][4583] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ced2ecbf6420e41a52bdb34038ec244682dc688f69ba3f3acafae2cbd05c6d9" Namespace="calico-apiserver" Pod="calico-apiserver-6d4cb765cd-t2ldw" 
WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--t2ldw-eth0" Apr 30 01:21:49.893125 containerd[1596]: 2025-04-30 01:21:49.867 [INFO][4583] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3ced2ecbf6420e41a52bdb34038ec244682dc688f69ba3f3acafae2cbd05c6d9" Namespace="calico-apiserver" Pod="calico-apiserver-6d4cb765cd-t2ldw" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--t2ldw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--t2ldw-eth0", GenerateName:"calico-apiserver-6d4cb765cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce510606-0f1b-4bd3-b5b8-10bbf3c549c3", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4cb765cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-5-cafd7e9e76", ContainerID:"3ced2ecbf6420e41a52bdb34038ec244682dc688f69ba3f3acafae2cbd05c6d9", Pod:"calico-apiserver-6d4cb765cd-t2ldw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9906271b199", MAC:"3e:66:e9:57:79:6b", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:21:49.893125 containerd[1596]: 2025-04-30 01:21:49.884 [INFO][4583] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3ced2ecbf6420e41a52bdb34038ec244682dc688f69ba3f3acafae2cbd05c6d9" Namespace="calico-apiserver" Pod="calico-apiserver-6d4cb765cd-t2ldw" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--t2ldw-eth0" Apr 30 01:21:49.914169 containerd[1596]: time="2025-04-30T01:21:49.913775884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rqtjj,Uid:9fa4d4e7-7c7d-4b64-b56b-683ffe29c791,Namespace:calico-system,Attempt:1,} returns sandbox id \"04da888daa6363c236b4519c051dd2d51bf0f4cda8a2e258b4fa20f3ddec8f3a\"" Apr 30 01:21:49.940836 systemd-networkd[1245]: cali01849663c9d: Link UP Apr 30 01:21:49.941470 systemd-networkd[1245]: cali01849663c9d: Gained carrier Apr 30 01:21:49.975165 containerd[1596]: 2025-04-30 01:21:49.624 [INFO][4611] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--7t5nj-eth0 coredns-7db6d8ff4d- kube-system 37fed3e1-0881-414f-8dab-a8370cfdc3cb 754 0 2025-04-30 01:21:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-5-cafd7e9e76 coredns-7db6d8ff4d-7t5nj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali01849663c9d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4e11b5f5a3569032033a7aeed28fb0f0966ca32e8a8b63662ba2712f832a56be" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7t5nj" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--7t5nj-" Apr 30 01:21:49.975165 containerd[1596]: 2025-04-30 01:21:49.624 [INFO][4611] cni-plugin/k8s.go 77: Extracted 
identifiers for CmdAddK8s ContainerID="4e11b5f5a3569032033a7aeed28fb0f0966ca32e8a8b63662ba2712f832a56be" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7t5nj" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--7t5nj-eth0" Apr 30 01:21:49.975165 containerd[1596]: 2025-04-30 01:21:49.759 [INFO][4631] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e11b5f5a3569032033a7aeed28fb0f0966ca32e8a8b63662ba2712f832a56be" HandleID="k8s-pod-network.4e11b5f5a3569032033a7aeed28fb0f0966ca32e8a8b63662ba2712f832a56be" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--7t5nj-eth0" Apr 30 01:21:49.975165 containerd[1596]: 2025-04-30 01:21:49.799 [INFO][4631] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4e11b5f5a3569032033a7aeed28fb0f0966ca32e8a8b63662ba2712f832a56be" HandleID="k8s-pod-network.4e11b5f5a3569032033a7aeed28fb0f0966ca32e8a8b63662ba2712f832a56be" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--7t5nj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027cb40), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-5-cafd7e9e76", "pod":"coredns-7db6d8ff4d-7t5nj", "timestamp":"2025-04-30 01:21:49.758511191 +0000 UTC"}, Hostname:"ci-4081-3-3-5-cafd7e9e76", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 01:21:49.975165 containerd[1596]: 2025-04-30 01:21:49.800 [INFO][4631] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:21:49.975165 containerd[1596]: 2025-04-30 01:21:49.855 [INFO][4631] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 01:21:49.975165 containerd[1596]: 2025-04-30 01:21:49.855 [INFO][4631] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-5-cafd7e9e76' Apr 30 01:21:49.975165 containerd[1596]: 2025-04-30 01:21:49.858 [INFO][4631] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4e11b5f5a3569032033a7aeed28fb0f0966ca32e8a8b63662ba2712f832a56be" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:49.975165 containerd[1596]: 2025-04-30 01:21:49.868 [INFO][4631] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:49.975165 containerd[1596]: 2025-04-30 01:21:49.876 [INFO][4631] ipam/ipam.go 489: Trying affinity for 192.168.112.128/26 host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:49.975165 containerd[1596]: 2025-04-30 01:21:49.882 [INFO][4631] ipam/ipam.go 155: Attempting to load block cidr=192.168.112.128/26 host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:49.975165 containerd[1596]: 2025-04-30 01:21:49.890 [INFO][4631] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:49.975165 containerd[1596]: 2025-04-30 01:21:49.890 [INFO][4631] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.4e11b5f5a3569032033a7aeed28fb0f0966ca32e8a8b63662ba2712f832a56be" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:49.975165 containerd[1596]: 2025-04-30 01:21:49.895 [INFO][4631] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4e11b5f5a3569032033a7aeed28fb0f0966ca32e8a8b63662ba2712f832a56be Apr 30 01:21:49.975165 containerd[1596]: 2025-04-30 01:21:49.904 [INFO][4631] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.4e11b5f5a3569032033a7aeed28fb0f0966ca32e8a8b63662ba2712f832a56be" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:49.975165 containerd[1596]: 2025-04-30 01:21:49.918 [INFO][4631] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.112.132/26] block=192.168.112.128/26 handle="k8s-pod-network.4e11b5f5a3569032033a7aeed28fb0f0966ca32e8a8b63662ba2712f832a56be" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:49.975165 containerd[1596]: 2025-04-30 01:21:49.919 [INFO][4631] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.112.132/26] handle="k8s-pod-network.4e11b5f5a3569032033a7aeed28fb0f0966ca32e8a8b63662ba2712f832a56be" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:49.975165 containerd[1596]: 2025-04-30 01:21:49.919 [INFO][4631] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:21:49.975165 containerd[1596]: 2025-04-30 01:21:49.919 [INFO][4631] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.112.132/26] IPv6=[] ContainerID="4e11b5f5a3569032033a7aeed28fb0f0966ca32e8a8b63662ba2712f832a56be" HandleID="k8s-pod-network.4e11b5f5a3569032033a7aeed28fb0f0966ca32e8a8b63662ba2712f832a56be" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--7t5nj-eth0" Apr 30 01:21:49.975919 containerd[1596]: 2025-04-30 01:21:49.930 [INFO][4611] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4e11b5f5a3569032033a7aeed28fb0f0966ca32e8a8b63662ba2712f832a56be" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7t5nj" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--7t5nj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--7t5nj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"37fed3e1-0881-414f-8dab-a8370cfdc3cb", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-5-cafd7e9e76", ContainerID:"", Pod:"coredns-7db6d8ff4d-7t5nj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01849663c9d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:21:49.975919 containerd[1596]: 2025-04-30 01:21:49.931 [INFO][4611] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.112.132/32] ContainerID="4e11b5f5a3569032033a7aeed28fb0f0966ca32e8a8b63662ba2712f832a56be" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7t5nj" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--7t5nj-eth0" Apr 30 01:21:49.975919 containerd[1596]: 2025-04-30 01:21:49.931 [INFO][4611] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01849663c9d ContainerID="4e11b5f5a3569032033a7aeed28fb0f0966ca32e8a8b63662ba2712f832a56be" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7t5nj" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--7t5nj-eth0" Apr 30 01:21:49.975919 containerd[1596]: 2025-04-30 01:21:49.941 [INFO][4611] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="4e11b5f5a3569032033a7aeed28fb0f0966ca32e8a8b63662ba2712f832a56be" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7t5nj" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--7t5nj-eth0" Apr 30 01:21:49.975919 containerd[1596]: 2025-04-30 01:21:49.942 [INFO][4611] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4e11b5f5a3569032033a7aeed28fb0f0966ca32e8a8b63662ba2712f832a56be" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7t5nj" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--7t5nj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--7t5nj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"37fed3e1-0881-414f-8dab-a8370cfdc3cb", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-5-cafd7e9e76", ContainerID:"4e11b5f5a3569032033a7aeed28fb0f0966ca32e8a8b63662ba2712f832a56be", Pod:"coredns-7db6d8ff4d-7t5nj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01849663c9d", MAC:"fe:72:71:ad:6e:16", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:21:49.975919 containerd[1596]: 2025-04-30 01:21:49.963 [INFO][4611] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4e11b5f5a3569032033a7aeed28fb0f0966ca32e8a8b63662ba2712f832a56be" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7t5nj" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--7t5nj-eth0" Apr 30 01:21:50.018032 containerd[1596]: time="2025-04-30T01:21:50.002132933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:21:50.018032 containerd[1596]: time="2025-04-30T01:21:50.002252856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:21:50.018032 containerd[1596]: time="2025-04-30T01:21:50.002268376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:50.018032 containerd[1596]: time="2025-04-30T01:21:50.002419780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:50.032089 systemd-networkd[1245]: cali86a7d9ad277: Link UP Apr 30 01:21:50.033769 systemd-networkd[1245]: cali86a7d9ad277: Gained carrier Apr 30 01:21:50.068342 containerd[1596]: 2025-04-30 01:21:49.625 [INFO][4595] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--5--cafd7e9e76-k8s-calico--kube--controllers--68b69776db--wk29h-eth0 calico-kube-controllers-68b69776db- calico-system 67dae70a-bf89-45d5-a866-8ee292cf8980 755 0 2025-04-30 01:21:28 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68b69776db projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-3-5-cafd7e9e76 calico-kube-controllers-68b69776db-wk29h eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali86a7d9ad277 [] []}} ContainerID="ae616d4046d9e926d5fbac5fff808f48055f37b4394008c15ae4960632e9b788" Namespace="calico-system" Pod="calico-kube-controllers-68b69776db-wk29h" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-calico--kube--controllers--68b69776db--wk29h-" Apr 30 01:21:50.068342 containerd[1596]: 2025-04-30 01:21:49.627 [INFO][4595] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ae616d4046d9e926d5fbac5fff808f48055f37b4394008c15ae4960632e9b788" Namespace="calico-system" Pod="calico-kube-controllers-68b69776db-wk29h" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-calico--kube--controllers--68b69776db--wk29h-eth0" Apr 30 01:21:50.068342 containerd[1596]: 2025-04-30 01:21:49.785 [INFO][4641] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae616d4046d9e926d5fbac5fff808f48055f37b4394008c15ae4960632e9b788" 
HandleID="k8s-pod-network.ae616d4046d9e926d5fbac5fff808f48055f37b4394008c15ae4960632e9b788" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--kube--controllers--68b69776db--wk29h-eth0" Apr 30 01:21:50.068342 containerd[1596]: 2025-04-30 01:21:49.808 [INFO][4641] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ae616d4046d9e926d5fbac5fff808f48055f37b4394008c15ae4960632e9b788" HandleID="k8s-pod-network.ae616d4046d9e926d5fbac5fff808f48055f37b4394008c15ae4960632e9b788" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--kube--controllers--68b69776db--wk29h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a0e50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-5-cafd7e9e76", "pod":"calico-kube-controllers-68b69776db-wk29h", "timestamp":"2025-04-30 01:21:49.785122692 +0000 UTC"}, Hostname:"ci-4081-3-3-5-cafd7e9e76", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 01:21:50.068342 containerd[1596]: 2025-04-30 01:21:49.808 [INFO][4641] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:21:50.068342 containerd[1596]: 2025-04-30 01:21:49.919 [INFO][4641] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 01:21:50.068342 containerd[1596]: 2025-04-30 01:21:49.919 [INFO][4641] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-5-cafd7e9e76' Apr 30 01:21:50.068342 containerd[1596]: 2025-04-30 01:21:49.931 [INFO][4641] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ae616d4046d9e926d5fbac5fff808f48055f37b4394008c15ae4960632e9b788" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:50.068342 containerd[1596]: 2025-04-30 01:21:49.960 [INFO][4641] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:50.068342 containerd[1596]: 2025-04-30 01:21:49.970 [INFO][4641] ipam/ipam.go 489: Trying affinity for 192.168.112.128/26 host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:50.068342 containerd[1596]: 2025-04-30 01:21:49.976 [INFO][4641] ipam/ipam.go 155: Attempting to load block cidr=192.168.112.128/26 host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:50.068342 containerd[1596]: 2025-04-30 01:21:49.982 [INFO][4641] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:50.068342 containerd[1596]: 2025-04-30 01:21:49.984 [INFO][4641] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.ae616d4046d9e926d5fbac5fff808f48055f37b4394008c15ae4960632e9b788" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:50.068342 containerd[1596]: 2025-04-30 01:21:49.988 [INFO][4641] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ae616d4046d9e926d5fbac5fff808f48055f37b4394008c15ae4960632e9b788 Apr 30 01:21:50.068342 containerd[1596]: 2025-04-30 01:21:49.997 [INFO][4641] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.ae616d4046d9e926d5fbac5fff808f48055f37b4394008c15ae4960632e9b788" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:50.068342 containerd[1596]: 2025-04-30 01:21:50.010 [INFO][4641] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.112.133/26] block=192.168.112.128/26 handle="k8s-pod-network.ae616d4046d9e926d5fbac5fff808f48055f37b4394008c15ae4960632e9b788" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:50.068342 containerd[1596]: 2025-04-30 01:21:50.010 [INFO][4641] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.112.133/26] handle="k8s-pod-network.ae616d4046d9e926d5fbac5fff808f48055f37b4394008c15ae4960632e9b788" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:50.068342 containerd[1596]: 2025-04-30 01:21:50.010 [INFO][4641] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:21:50.068342 containerd[1596]: 2025-04-30 01:21:50.010 [INFO][4641] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.112.133/26] IPv6=[] ContainerID="ae616d4046d9e926d5fbac5fff808f48055f37b4394008c15ae4960632e9b788" HandleID="k8s-pod-network.ae616d4046d9e926d5fbac5fff808f48055f37b4394008c15ae4960632e9b788" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--kube--controllers--68b69776db--wk29h-eth0" Apr 30 01:21:50.068908 containerd[1596]: 2025-04-30 01:21:50.015 [INFO][4595] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ae616d4046d9e926d5fbac5fff808f48055f37b4394008c15ae4960632e9b788" Namespace="calico-system" Pod="calico-kube-controllers-68b69776db-wk29h" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-calico--kube--controllers--68b69776db--wk29h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--5--cafd7e9e76-k8s-calico--kube--controllers--68b69776db--wk29h-eth0", GenerateName:"calico-kube-controllers-68b69776db-", Namespace:"calico-system", SelfLink:"", UID:"67dae70a-bf89-45d5-a866-8ee292cf8980", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68b69776db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-5-cafd7e9e76", ContainerID:"", Pod:"calico-kube-controllers-68b69776db-wk29h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.112.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali86a7d9ad277", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:21:50.068908 containerd[1596]: 2025-04-30 01:21:50.016 [INFO][4595] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.112.133/32] ContainerID="ae616d4046d9e926d5fbac5fff808f48055f37b4394008c15ae4960632e9b788" Namespace="calico-system" Pod="calico-kube-controllers-68b69776db-wk29h" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-calico--kube--controllers--68b69776db--wk29h-eth0" Apr 30 01:21:50.068908 containerd[1596]: 2025-04-30 01:21:50.017 [INFO][4595] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86a7d9ad277 ContainerID="ae616d4046d9e926d5fbac5fff808f48055f37b4394008c15ae4960632e9b788" Namespace="calico-system" Pod="calico-kube-controllers-68b69776db-wk29h" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-calico--kube--controllers--68b69776db--wk29h-eth0" Apr 30 01:21:50.068908 containerd[1596]: 2025-04-30 01:21:50.035 [INFO][4595] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="ae616d4046d9e926d5fbac5fff808f48055f37b4394008c15ae4960632e9b788" Namespace="calico-system" Pod="calico-kube-controllers-68b69776db-wk29h" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-calico--kube--controllers--68b69776db--wk29h-eth0" Apr 30 01:21:50.068908 containerd[1596]: 2025-04-30 01:21:50.037 [INFO][4595] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ae616d4046d9e926d5fbac5fff808f48055f37b4394008c15ae4960632e9b788" Namespace="calico-system" Pod="calico-kube-controllers-68b69776db-wk29h" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-calico--kube--controllers--68b69776db--wk29h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--5--cafd7e9e76-k8s-calico--kube--controllers--68b69776db--wk29h-eth0", GenerateName:"calico-kube-controllers-68b69776db-", Namespace:"calico-system", SelfLink:"", UID:"67dae70a-bf89-45d5-a866-8ee292cf8980", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68b69776db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-5-cafd7e9e76", ContainerID:"ae616d4046d9e926d5fbac5fff808f48055f37b4394008c15ae4960632e9b788", Pod:"calico-kube-controllers-68b69776db-wk29h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.112.133/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali86a7d9ad277", MAC:"ba:66:a7:cb:dd:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:21:50.068908 containerd[1596]: 2025-04-30 01:21:50.063 [INFO][4595] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ae616d4046d9e926d5fbac5fff808f48055f37b4394008c15ae4960632e9b788" Namespace="calico-system" Pod="calico-kube-controllers-68b69776db-wk29h" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-calico--kube--controllers--68b69776db--wk29h-eth0" Apr 30 01:21:50.080373 containerd[1596]: time="2025-04-30T01:21:50.070830622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:21:50.080373 containerd[1596]: time="2025-04-30T01:21:50.070943104Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:21:50.080373 containerd[1596]: time="2025-04-30T01:21:50.070960465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:50.080373 containerd[1596]: time="2025-04-30T01:21:50.071998892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:50.160813 containerd[1596]: time="2025-04-30T01:21:50.160514183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:21:50.160813 containerd[1596]: time="2025-04-30T01:21:50.160586545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:21:50.160813 containerd[1596]: time="2025-04-30T01:21:50.160598585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:50.160813 containerd[1596]: time="2025-04-30T01:21:50.160686907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:50.183040 containerd[1596]: time="2025-04-30T01:21:50.182763089Z" level=info msg="StopPodSandbox for \"2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0\"" Apr 30 01:21:50.203691 containerd[1596]: time="2025-04-30T01:21:50.203279549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7t5nj,Uid:37fed3e1-0881-414f-8dab-a8370cfdc3cb,Namespace:kube-system,Attempt:1,} returns sandbox id \"4e11b5f5a3569032033a7aeed28fb0f0966ca32e8a8b63662ba2712f832a56be\"" Apr 30 01:21:50.213209 containerd[1596]: time="2025-04-30T01:21:50.213006245Z" level=info msg="CreateContainer within sandbox \"4e11b5f5a3569032033a7aeed28fb0f0966ca32e8a8b63662ba2712f832a56be\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 01:21:50.216801 containerd[1596]: time="2025-04-30T01:21:50.216763424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4cb765cd-t2ldw,Uid:ce510606-0f1b-4bd3-b5b8-10bbf3c549c3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3ced2ecbf6420e41a52bdb34038ec244682dc688f69ba3f3acafae2cbd05c6d9\"" Apr 30 01:21:50.259355 containerd[1596]: time="2025-04-30T01:21:50.259202661Z" level=info msg="CreateContainer within sandbox \"4e11b5f5a3569032033a7aeed28fb0f0966ca32e8a8b63662ba2712f832a56be\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a8ffd25a8780b2f80b278484d8fb7beeb534b22adb297dc9688250ec6bf2ff8c\"" Apr 30 01:21:50.262664 containerd[1596]: time="2025-04-30T01:21:50.262510669Z" level=info 
msg="StartContainer for \"a8ffd25a8780b2f80b278484d8fb7beeb534b22adb297dc9688250ec6bf2ff8c\"" Apr 30 01:21:50.387436 containerd[1596]: time="2025-04-30T01:21:50.387392477Z" level=info msg="StartContainer for \"a8ffd25a8780b2f80b278484d8fb7beeb534b22adb297dc9688250ec6bf2ff8c\" returns successfully" Apr 30 01:21:50.406739 systemd[1]: run-netns-cni\x2dea908d6c\x2d844f\x2d23a2\x2d488d\x2d61704eb8b2b3.mount: Deactivated successfully. Apr 30 01:21:50.414663 containerd[1596]: time="2025-04-30T01:21:50.411950643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68b69776db-wk29h,Uid:67dae70a-bf89-45d5-a866-8ee292cf8980,Namespace:calico-system,Attempt:1,} returns sandbox id \"ae616d4046d9e926d5fbac5fff808f48055f37b4394008c15ae4960632e9b788\"" Apr 30 01:21:50.448443 containerd[1596]: 2025-04-30 01:21:50.317 [INFO][4877] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" Apr 30 01:21:50.448443 containerd[1596]: 2025-04-30 01:21:50.317 [INFO][4877] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" iface="eth0" netns="/var/run/netns/cni-18afe78a-704f-245a-23ad-60923273b068" Apr 30 01:21:50.448443 containerd[1596]: 2025-04-30 01:21:50.317 [INFO][4877] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" iface="eth0" netns="/var/run/netns/cni-18afe78a-704f-245a-23ad-60923273b068" Apr 30 01:21:50.448443 containerd[1596]: 2025-04-30 01:21:50.318 [INFO][4877] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" iface="eth0" netns="/var/run/netns/cni-18afe78a-704f-245a-23ad-60923273b068" Apr 30 01:21:50.448443 containerd[1596]: 2025-04-30 01:21:50.319 [INFO][4877] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" Apr 30 01:21:50.448443 containerd[1596]: 2025-04-30 01:21:50.319 [INFO][4877] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" Apr 30 01:21:50.448443 containerd[1596]: 2025-04-30 01:21:50.398 [INFO][4908] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" HandleID="k8s-pod-network.2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--stlxr-eth0" Apr 30 01:21:50.448443 containerd[1596]: 2025-04-30 01:21:50.401 [INFO][4908] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:21:50.448443 containerd[1596]: 2025-04-30 01:21:50.405 [INFO][4908] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:21:50.448443 containerd[1596]: 2025-04-30 01:21:50.432 [WARNING][4908] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" HandleID="k8s-pod-network.2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--stlxr-eth0" Apr 30 01:21:50.448443 containerd[1596]: 2025-04-30 01:21:50.432 [INFO][4908] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" HandleID="k8s-pod-network.2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--stlxr-eth0" Apr 30 01:21:50.448443 containerd[1596]: 2025-04-30 01:21:50.437 [INFO][4908] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:21:50.448443 containerd[1596]: 2025-04-30 01:21:50.443 [INFO][4877] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" Apr 30 01:21:50.452408 containerd[1596]: time="2025-04-30T01:21:50.452363467Z" level=info msg="TearDown network for sandbox \"2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0\" successfully" Apr 30 01:21:50.452408 containerd[1596]: time="2025-04-30T01:21:50.452402668Z" level=info msg="StopPodSandbox for \"2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0\" returns successfully" Apr 30 01:21:50.454972 systemd[1]: run-netns-cni\x2d18afe78a\x2d704f\x2d245a\x2d23ad\x2d60923273b068.mount: Deactivated successfully. 
Apr 30 01:21:50.459092 containerd[1596]: time="2025-04-30T01:21:50.457724569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-stlxr,Uid:07f2d83c-63d4-40ec-8d8b-ab44f45ca400,Namespace:kube-system,Attempt:1,}" Apr 30 01:21:50.532214 kubelet[2977]: I0430 01:21:50.531647 2977 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-7t5nj" podStartSLOduration=31.531625474 podStartE2EDuration="31.531625474s" podCreationTimestamp="2025-04-30 01:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 01:21:50.505365263 +0000 UTC m=+46.449057698" watchObservedRunningTime="2025-04-30 01:21:50.531625474 +0000 UTC m=+46.475317909" Apr 30 01:21:50.722739 systemd-networkd[1245]: calif464bfa2a59: Link UP Apr 30 01:21:50.722970 systemd-networkd[1245]: calif464bfa2a59: Gained carrier Apr 30 01:21:50.747904 containerd[1596]: 2025-04-30 01:21:50.609 [INFO][4942] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--stlxr-eth0 coredns-7db6d8ff4d- kube-system 07f2d83c-63d4-40ec-8d8b-ab44f45ca400 774 0 2025-04-30 01:21:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-5-cafd7e9e76 coredns-7db6d8ff4d-stlxr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif464bfa2a59 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="07594503cbeae843f603ce353149774f0a991955c128b74aa06a851c56e2c561" Namespace="kube-system" Pod="coredns-7db6d8ff4d-stlxr" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--stlxr-" Apr 30 01:21:50.747904 containerd[1596]: 2025-04-30 01:21:50.610 [INFO][4942] cni-plugin/k8s.go 
77: Extracted identifiers for CmdAddK8s ContainerID="07594503cbeae843f603ce353149774f0a991955c128b74aa06a851c56e2c561" Namespace="kube-system" Pod="coredns-7db6d8ff4d-stlxr" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--stlxr-eth0" Apr 30 01:21:50.747904 containerd[1596]: 2025-04-30 01:21:50.655 [INFO][4955] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07594503cbeae843f603ce353149774f0a991955c128b74aa06a851c56e2c561" HandleID="k8s-pod-network.07594503cbeae843f603ce353149774f0a991955c128b74aa06a851c56e2c561" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--stlxr-eth0" Apr 30 01:21:50.747904 containerd[1596]: 2025-04-30 01:21:50.670 [INFO][4955] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="07594503cbeae843f603ce353149774f0a991955c128b74aa06a851c56e2c561" HandleID="k8s-pod-network.07594503cbeae843f603ce353149774f0a991955c128b74aa06a851c56e2c561" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--stlxr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000382aa0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-5-cafd7e9e76", "pod":"coredns-7db6d8ff4d-stlxr", "timestamp":"2025-04-30 01:21:50.655782983 +0000 UTC"}, Hostname:"ci-4081-3-3-5-cafd7e9e76", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 01:21:50.747904 containerd[1596]: 2025-04-30 01:21:50.671 [INFO][4955] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:21:50.747904 containerd[1596]: 2025-04-30 01:21:50.671 [INFO][4955] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 01:21:50.747904 containerd[1596]: 2025-04-30 01:21:50.671 [INFO][4955] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-5-cafd7e9e76' Apr 30 01:21:50.747904 containerd[1596]: 2025-04-30 01:21:50.673 [INFO][4955] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.07594503cbeae843f603ce353149774f0a991955c128b74aa06a851c56e2c561" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:50.747904 containerd[1596]: 2025-04-30 01:21:50.679 [INFO][4955] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:50.747904 containerd[1596]: 2025-04-30 01:21:50.686 [INFO][4955] ipam/ipam.go 489: Trying affinity for 192.168.112.128/26 host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:50.747904 containerd[1596]: 2025-04-30 01:21:50.689 [INFO][4955] ipam/ipam.go 155: Attempting to load block cidr=192.168.112.128/26 host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:50.747904 containerd[1596]: 2025-04-30 01:21:50.694 [INFO][4955] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:50.747904 containerd[1596]: 2025-04-30 01:21:50.694 [INFO][4955] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.07594503cbeae843f603ce353149774f0a991955c128b74aa06a851c56e2c561" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:50.747904 containerd[1596]: 2025-04-30 01:21:50.697 [INFO][4955] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.07594503cbeae843f603ce353149774f0a991955c128b74aa06a851c56e2c561 Apr 30 01:21:50.747904 containerd[1596]: 2025-04-30 01:21:50.704 [INFO][4955] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.07594503cbeae843f603ce353149774f0a991955c128b74aa06a851c56e2c561" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:50.747904 containerd[1596]: 2025-04-30 01:21:50.715 [INFO][4955] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.112.134/26] block=192.168.112.128/26 handle="k8s-pod-network.07594503cbeae843f603ce353149774f0a991955c128b74aa06a851c56e2c561" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:50.747904 containerd[1596]: 2025-04-30 01:21:50.715 [INFO][4955] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.112.134/26] handle="k8s-pod-network.07594503cbeae843f603ce353149774f0a991955c128b74aa06a851c56e2c561" host="ci-4081-3-3-5-cafd7e9e76" Apr 30 01:21:50.747904 containerd[1596]: 2025-04-30 01:21:50.715 [INFO][4955] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:21:50.747904 containerd[1596]: 2025-04-30 01:21:50.715 [INFO][4955] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.112.134/26] IPv6=[] ContainerID="07594503cbeae843f603ce353149774f0a991955c128b74aa06a851c56e2c561" HandleID="k8s-pod-network.07594503cbeae843f603ce353149774f0a991955c128b74aa06a851c56e2c561" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--stlxr-eth0" Apr 30 01:21:50.751592 containerd[1596]: 2025-04-30 01:21:50.718 [INFO][4942] cni-plugin/k8s.go 386: Populated endpoint ContainerID="07594503cbeae843f603ce353149774f0a991955c128b74aa06a851c56e2c561" Namespace="kube-system" Pod="coredns-7db6d8ff4d-stlxr" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--stlxr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--stlxr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"07f2d83c-63d4-40ec-8d8b-ab44f45ca400", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-5-cafd7e9e76", ContainerID:"", Pod:"coredns-7db6d8ff4d-stlxr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif464bfa2a59", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:21:50.751592 containerd[1596]: 2025-04-30 01:21:50.718 [INFO][4942] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.112.134/32] ContainerID="07594503cbeae843f603ce353149774f0a991955c128b74aa06a851c56e2c561" Namespace="kube-system" Pod="coredns-7db6d8ff4d-stlxr" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--stlxr-eth0" Apr 30 01:21:50.751592 containerd[1596]: 2025-04-30 01:21:50.718 [INFO][4942] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif464bfa2a59 ContainerID="07594503cbeae843f603ce353149774f0a991955c128b74aa06a851c56e2c561" Namespace="kube-system" Pod="coredns-7db6d8ff4d-stlxr" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--stlxr-eth0" Apr 30 01:21:50.751592 containerd[1596]: 2025-04-30 01:21:50.724 [INFO][4942] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="07594503cbeae843f603ce353149774f0a991955c128b74aa06a851c56e2c561" Namespace="kube-system" Pod="coredns-7db6d8ff4d-stlxr" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--stlxr-eth0" Apr 30 01:21:50.751592 containerd[1596]: 2025-04-30 01:21:50.726 [INFO][4942] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="07594503cbeae843f603ce353149774f0a991955c128b74aa06a851c56e2c561" Namespace="kube-system" Pod="coredns-7db6d8ff4d-stlxr" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--stlxr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--stlxr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"07f2d83c-63d4-40ec-8d8b-ab44f45ca400", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-5-cafd7e9e76", ContainerID:"07594503cbeae843f603ce353149774f0a991955c128b74aa06a851c56e2c561", Pod:"coredns-7db6d8ff4d-stlxr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif464bfa2a59", MAC:"fa:76:83:c3:1b:8b", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:21:50.751592 containerd[1596]: 2025-04-30 01:21:50.743 [INFO][4942] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="07594503cbeae843f603ce353149774f0a991955c128b74aa06a851c56e2c561" Namespace="kube-system" Pod="coredns-7db6d8ff4d-stlxr" WorkloadEndpoint="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--stlxr-eth0" Apr 30 01:21:50.786770 containerd[1596]: time="2025-04-30T01:21:50.786608708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 01:21:50.786770 containerd[1596]: time="2025-04-30T01:21:50.786676230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 01:21:50.787120 containerd[1596]: time="2025-04-30T01:21:50.786696070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:50.787120 containerd[1596]: time="2025-04-30T01:21:50.786812354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 01:21:50.848805 containerd[1596]: time="2025-04-30T01:21:50.848686143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-stlxr,Uid:07f2d83c-63d4-40ec-8d8b-ab44f45ca400,Namespace:kube-system,Attempt:1,} returns sandbox id \"07594503cbeae843f603ce353149774f0a991955c128b74aa06a851c56e2c561\"" Apr 30 01:21:50.856667 containerd[1596]: time="2025-04-30T01:21:50.856246342Z" level=info msg="CreateContainer within sandbox \"07594503cbeae843f603ce353149774f0a991955c128b74aa06a851c56e2c561\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 01:21:50.872632 containerd[1596]: time="2025-04-30T01:21:50.872582052Z" level=info msg="CreateContainer within sandbox \"07594503cbeae843f603ce353149774f0a991955c128b74aa06a851c56e2c561\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4d0a9215d2fc6edf8c69552e39adc40f11cce1f712e08851c0255bf23191c2fa\"" Apr 30 01:21:50.874840 containerd[1596]: time="2025-04-30T01:21:50.874773430Z" level=info msg="StartContainer for \"4d0a9215d2fc6edf8c69552e39adc40f11cce1f712e08851c0255bf23191c2fa\"" Apr 30 01:21:50.965342 containerd[1596]: time="2025-04-30T01:21:50.962518340Z" level=info msg="StartContainer for \"4d0a9215d2fc6edf8c69552e39adc40f11cce1f712e08851c0255bf23191c2fa\" returns successfully" Apr 30 01:21:51.164289 systemd-networkd[1245]: cali86a7d9ad277: Gained IPv6LL Apr 30 01:21:51.355303 systemd-networkd[1245]: cali9906271b199: Gained IPv6LL Apr 30 01:21:51.370360 containerd[1596]: time="2025-04-30T01:21:51.370184183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:51.372936 containerd[1596]: time="2025-04-30T01:21:51.372882574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" Apr 30 01:21:51.374249 containerd[1596]: 
time="2025-04-30T01:21:51.374207009Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:51.378414 containerd[1596]: time="2025-04-30T01:21:51.378089631Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:51.379072 containerd[1596]: time="2025-04-30T01:21:51.379025615Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 2.722794214s" Apr 30 01:21:51.379333 containerd[1596]: time="2025-04-30T01:21:51.379223341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" Apr 30 01:21:51.381083 containerd[1596]: time="2025-04-30T01:21:51.380809622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 01:21:51.387851 containerd[1596]: time="2025-04-30T01:21:51.387731004Z" level=info msg="CreateContainer within sandbox \"77b00cfc654908d83994648009dbd071a828425eee2e58052ca2cebac33ef8f8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 01:21:51.408078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount475981513.mount: Deactivated successfully. 
Apr 30 01:21:51.412001 containerd[1596]: time="2025-04-30T01:21:51.411910200Z" level=info msg="CreateContainer within sandbox \"77b00cfc654908d83994648009dbd071a828425eee2e58052ca2cebac33ef8f8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"48926756f9c505f5082d92a0367892772915e8e09aa30c1a1794135614bcbe99\"" Apr 30 01:21:51.416420 containerd[1596]: time="2025-04-30T01:21:51.415117085Z" level=info msg="StartContainer for \"48926756f9c505f5082d92a0367892772915e8e09aa30c1a1794135614bcbe99\"" Apr 30 01:21:51.494829 containerd[1596]: time="2025-04-30T01:21:51.493993839Z" level=info msg="StartContainer for \"48926756f9c505f5082d92a0367892772915e8e09aa30c1a1794135614bcbe99\" returns successfully" Apr 30 01:21:51.580291 kubelet[2977]: I0430 01:21:51.579976 2977 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-stlxr" podStartSLOduration=32.57995458 podStartE2EDuration="32.57995458s" podCreationTimestamp="2025-04-30 01:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 01:21:51.55790176 +0000 UTC m=+47.501594155" watchObservedRunningTime="2025-04-30 01:21:51.57995458 +0000 UTC m=+47.523646975" Apr 30 01:21:51.582304 kubelet[2977]: I0430 01:21:51.582232 2977 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d4cb765cd-j4ngc" podStartSLOduration=21.857469254 podStartE2EDuration="24.582211799s" podCreationTimestamp="2025-04-30 01:21:27 +0000 UTC" firstStartedPulling="2025-04-30 01:21:48.655435941 +0000 UTC m=+44.599128376" lastFinishedPulling="2025-04-30 01:21:51.380178486 +0000 UTC m=+47.323870921" observedRunningTime="2025-04-30 01:21:51.579508648 +0000 UTC m=+47.523201163" watchObservedRunningTime="2025-04-30 01:21:51.582211799 +0000 UTC m=+47.525904234" Apr 30 01:21:51.676050 systemd-networkd[1245]: cali01849663c9d: Gained IPv6LL 
Apr 30 01:21:51.740468 systemd-networkd[1245]: cali260860ac8f1: Gained IPv6LL Apr 30 01:21:52.379406 systemd-networkd[1245]: calif464bfa2a59: Gained IPv6LL Apr 30 01:21:52.553823 kubelet[2977]: I0430 01:21:52.553780 2977 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 01:21:53.083194 containerd[1596]: time="2025-04-30T01:21:53.082343459Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:53.084141 containerd[1596]: time="2025-04-30T01:21:53.084098985Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" Apr 30 01:21:53.084841 containerd[1596]: time="2025-04-30T01:21:53.084799764Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:53.087821 containerd[1596]: time="2025-04-30T01:21:53.087766521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:53.088703 containerd[1596]: time="2025-04-30T01:21:53.088639384Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.707787321s" Apr 30 01:21:53.088831 containerd[1596]: time="2025-04-30T01:21:53.088814749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" Apr 30 01:21:53.091269 containerd[1596]: time="2025-04-30T01:21:53.090963325Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 01:21:53.092558 containerd[1596]: time="2025-04-30T01:21:53.092528326Z" level=info msg="CreateContainer within sandbox \"04da888daa6363c236b4519c051dd2d51bf0f4cda8a2e258b4fa20f3ddec8f3a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 01:21:53.113356 containerd[1596]: time="2025-04-30T01:21:53.113301792Z" level=info msg="CreateContainer within sandbox \"04da888daa6363c236b4519c051dd2d51bf0f4cda8a2e258b4fa20f3ddec8f3a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1db2e4528c9532c5a0588dcf634bd82c37d7f91efa16c1f6b2c60a61a0799222\"" Apr 30 01:21:53.114720 containerd[1596]: time="2025-04-30T01:21:53.114252856Z" level=info msg="StartContainer for \"1db2e4528c9532c5a0588dcf634bd82c37d7f91efa16c1f6b2c60a61a0799222\"" Apr 30 01:21:53.195479 containerd[1596]: time="2025-04-30T01:21:53.195430747Z" level=info msg="StartContainer for \"1db2e4528c9532c5a0588dcf634bd82c37d7f91efa16c1f6b2c60a61a0799222\" returns successfully" Apr 30 01:21:53.490656 containerd[1596]: time="2025-04-30T01:21:53.489716470Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:53.492209 containerd[1596]: time="2025-04-30T01:21:53.492118893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" Apr 30 01:21:53.494679 containerd[1596]: time="2025-04-30T01:21:53.494633039Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 403.632152ms" Apr 30 01:21:53.494849 containerd[1596]: 
time="2025-04-30T01:21:53.494828804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" Apr 30 01:21:53.496642 containerd[1596]: time="2025-04-30T01:21:53.496603170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" Apr 30 01:21:53.498211 containerd[1596]: time="2025-04-30T01:21:53.497917005Z" level=info msg="CreateContainer within sandbox \"3ced2ecbf6420e41a52bdb34038ec244682dc688f69ba3f3acafae2cbd05c6d9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 01:21:53.525586 containerd[1596]: time="2025-04-30T01:21:53.525536810Z" level=info msg="CreateContainer within sandbox \"3ced2ecbf6420e41a52bdb34038ec244682dc688f69ba3f3acafae2cbd05c6d9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cb92aee861a22efedeab0932a8f3d41a6f0a5da63e0fd0be64fa107af8110606\"" Apr 30 01:21:53.526409 containerd[1596]: time="2025-04-30T01:21:53.526369312Z" level=info msg="StartContainer for \"cb92aee861a22efedeab0932a8f3d41a6f0a5da63e0fd0be64fa107af8110606\"" Apr 30 01:21:53.606001 containerd[1596]: time="2025-04-30T01:21:53.604548603Z" level=info msg="StartContainer for \"cb92aee861a22efedeab0932a8f3d41a6f0a5da63e0fd0be64fa107af8110606\" returns successfully" Apr 30 01:21:54.877508 kubelet[2977]: I0430 01:21:54.877342 2977 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d4cb765cd-t2ldw" podStartSLOduration=24.607905268 podStartE2EDuration="27.87731166s" podCreationTimestamp="2025-04-30 01:21:27 +0000 UTC" firstStartedPulling="2025-04-30 01:21:50.226249794 +0000 UTC m=+46.169942229" lastFinishedPulling="2025-04-30 01:21:53.495656186 +0000 UTC m=+49.439348621" observedRunningTime="2025-04-30 01:21:54.594044594 +0000 UTC m=+50.537737029" watchObservedRunningTime="2025-04-30 01:21:54.87731166 +0000 UTC m=+50.821004095" Apr 30 
01:21:55.382315 containerd[1596]: time="2025-04-30T01:21:55.382189406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:55.383817 containerd[1596]: time="2025-04-30T01:21:55.383754567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" Apr 30 01:21:55.385221 containerd[1596]: time="2025-04-30T01:21:55.385123643Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:55.388557 containerd[1596]: time="2025-04-30T01:21:55.388489131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:55.389358 containerd[1596]: time="2025-04-30T01:21:55.389183989Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 1.892539218s" Apr 30 01:21:55.389358 containerd[1596]: time="2025-04-30T01:21:55.389223950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" Apr 30 01:21:55.392072 containerd[1596]: time="2025-04-30T01:21:55.391560891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 01:21:55.409027 containerd[1596]: time="2025-04-30T01:21:55.408971707Z" level=info msg="CreateContainer within sandbox 
\"ae616d4046d9e926d5fbac5fff808f48055f37b4394008c15ae4960632e9b788\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 01:21:55.432791 containerd[1596]: time="2025-04-30T01:21:55.432720049Z" level=info msg="CreateContainer within sandbox \"ae616d4046d9e926d5fbac5fff808f48055f37b4394008c15ae4960632e9b788\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a49c8acf713b245475afe1a088fb7fd0e50237e40f82059bc8d391a250fac689\"" Apr 30 01:21:55.432971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount608287509.mount: Deactivated successfully. Apr 30 01:21:55.434976 containerd[1596]: time="2025-04-30T01:21:55.433838198Z" level=info msg="StartContainer for \"a49c8acf713b245475afe1a088fb7fd0e50237e40f82059bc8d391a250fac689\"" Apr 30 01:21:55.505903 containerd[1596]: time="2025-04-30T01:21:55.505684760Z" level=info msg="StartContainer for \"a49c8acf713b245475afe1a088fb7fd0e50237e40f82059bc8d391a250fac689\" returns successfully" Apr 30 01:21:55.626792 kubelet[2977]: I0430 01:21:55.625352 2977 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-68b69776db-wk29h" podStartSLOduration=22.651244311 podStartE2EDuration="27.625330853s" podCreationTimestamp="2025-04-30 01:21:28 +0000 UTC" firstStartedPulling="2025-04-30 01:21:50.416667168 +0000 UTC m=+46.360359603" lastFinishedPulling="2025-04-30 01:21:55.39075371 +0000 UTC m=+51.334446145" observedRunningTime="2025-04-30 01:21:55.618632358 +0000 UTC m=+51.562324793" watchObservedRunningTime="2025-04-30 01:21:55.625330853 +0000 UTC m=+51.569023248" Apr 30 01:21:56.904349 containerd[1596]: time="2025-04-30T01:21:56.904262164Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:56.906070 containerd[1596]: time="2025-04-30T01:21:56.905995649Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" Apr 30 01:21:56.907335 containerd[1596]: time="2025-04-30T01:21:56.907282723Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:56.910618 containerd[1596]: time="2025-04-30T01:21:56.910540648Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 01:21:56.912994 containerd[1596]: time="2025-04-30T01:21:56.912924631Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.521321818s" Apr 30 01:21:56.912994 containerd[1596]: time="2025-04-30T01:21:56.912978592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" Apr 30 01:21:56.917185 containerd[1596]: time="2025-04-30T01:21:56.917126781Z" level=info msg="CreateContainer within sandbox \"04da888daa6363c236b4519c051dd2d51bf0f4cda8a2e258b4fa20f3ddec8f3a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 01:21:56.938253 containerd[1596]: time="2025-04-30T01:21:56.938207652Z" level=info msg="CreateContainer within sandbox \"04da888daa6363c236b4519c051dd2d51bf0f4cda8a2e258b4fa20f3ddec8f3a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id 
\"067911c0179a5a8f275e99cd66e26deca107d9ab783145356e30904cde173f62\"" Apr 30 01:21:56.940551 containerd[1596]: time="2025-04-30T01:21:56.940497792Z" level=info msg="StartContainer for \"067911c0179a5a8f275e99cd66e26deca107d9ab783145356e30904cde173f62\"" Apr 30 01:21:57.008672 containerd[1596]: time="2025-04-30T01:21:57.008619854Z" level=info msg="StartContainer for \"067911c0179a5a8f275e99cd66e26deca107d9ab783145356e30904cde173f62\" returns successfully" Apr 30 01:21:57.312005 kubelet[2977]: I0430 01:21:57.311965 2977 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 01:21:57.312424 kubelet[2977]: I0430 01:21:57.312044 2977 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 01:22:04.185037 containerd[1596]: time="2025-04-30T01:22:04.184750679Z" level=info msg="StopPodSandbox for \"960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328\"" Apr 30 01:22:04.303802 containerd[1596]: 2025-04-30 01:22:04.247 [WARNING][5303] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--7t5nj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"37fed3e1-0881-414f-8dab-a8370cfdc3cb", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-5-cafd7e9e76", ContainerID:"4e11b5f5a3569032033a7aeed28fb0f0966ca32e8a8b63662ba2712f832a56be", Pod:"coredns-7db6d8ff4d-7t5nj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01849663c9d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:22:04.303802 containerd[1596]: 2025-04-30 01:22:04.247 [INFO][5303] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" Apr 30 01:22:04.303802 containerd[1596]: 2025-04-30 01:22:04.247 [INFO][5303] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" iface="eth0" netns="" Apr 30 01:22:04.303802 containerd[1596]: 2025-04-30 01:22:04.247 [INFO][5303] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" Apr 30 01:22:04.303802 containerd[1596]: 2025-04-30 01:22:04.247 [INFO][5303] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" Apr 30 01:22:04.303802 containerd[1596]: 2025-04-30 01:22:04.285 [INFO][5310] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" HandleID="k8s-pod-network.960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--7t5nj-eth0" Apr 30 01:22:04.303802 containerd[1596]: 2025-04-30 01:22:04.285 [INFO][5310] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:22:04.303802 containerd[1596]: 2025-04-30 01:22:04.285 [INFO][5310] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:22:04.303802 containerd[1596]: 2025-04-30 01:22:04.298 [WARNING][5310] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" HandleID="k8s-pod-network.960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--7t5nj-eth0" Apr 30 01:22:04.303802 containerd[1596]: 2025-04-30 01:22:04.298 [INFO][5310] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" HandleID="k8s-pod-network.960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--7t5nj-eth0" Apr 30 01:22:04.303802 containerd[1596]: 2025-04-30 01:22:04.300 [INFO][5310] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:22:04.303802 containerd[1596]: 2025-04-30 01:22:04.302 [INFO][5303] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" Apr 30 01:22:04.303802 containerd[1596]: time="2025-04-30T01:22:04.303666409Z" level=info msg="TearDown network for sandbox \"960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328\" successfully" Apr 30 01:22:04.303802 containerd[1596]: time="2025-04-30T01:22:04.303695889Z" level=info msg="StopPodSandbox for \"960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328\" returns successfully" Apr 30 01:22:04.304674 containerd[1596]: time="2025-04-30T01:22:04.304608353Z" level=info msg="RemovePodSandbox for \"960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328\"" Apr 30 01:22:04.311510 containerd[1596]: time="2025-04-30T01:22:04.311423090Z" level=info msg="Forcibly stopping sandbox \"960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328\"" Apr 30 01:22:04.394301 containerd[1596]: 2025-04-30 01:22:04.357 [WARNING][5328] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--7t5nj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"37fed3e1-0881-414f-8dab-a8370cfdc3cb", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-5-cafd7e9e76", ContainerID:"4e11b5f5a3569032033a7aeed28fb0f0966ca32e8a8b63662ba2712f832a56be", Pod:"coredns-7db6d8ff4d-7t5nj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01849663c9d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:22:04.394301 containerd[1596]: 2025-04-30 01:22:04.357 [INFO][5328] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" Apr 30 01:22:04.394301 containerd[1596]: 2025-04-30 01:22:04.357 [INFO][5328] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" iface="eth0" netns="" Apr 30 01:22:04.394301 containerd[1596]: 2025-04-30 01:22:04.357 [INFO][5328] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" Apr 30 01:22:04.394301 containerd[1596]: 2025-04-30 01:22:04.357 [INFO][5328] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" Apr 30 01:22:04.394301 containerd[1596]: 2025-04-30 01:22:04.377 [INFO][5335] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" HandleID="k8s-pod-network.960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--7t5nj-eth0" Apr 30 01:22:04.394301 containerd[1596]: 2025-04-30 01:22:04.377 [INFO][5335] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:22:04.394301 containerd[1596]: 2025-04-30 01:22:04.377 [INFO][5335] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:22:04.394301 containerd[1596]: 2025-04-30 01:22:04.388 [WARNING][5335] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" HandleID="k8s-pod-network.960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--7t5nj-eth0" Apr 30 01:22:04.394301 containerd[1596]: 2025-04-30 01:22:04.388 [INFO][5335] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" HandleID="k8s-pod-network.960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--7t5nj-eth0" Apr 30 01:22:04.394301 containerd[1596]: 2025-04-30 01:22:04.390 [INFO][5335] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:22:04.394301 containerd[1596]: 2025-04-30 01:22:04.392 [INFO][5328] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328" Apr 30 01:22:04.394854 containerd[1596]: time="2025-04-30T01:22:04.394370526Z" level=info msg="TearDown network for sandbox \"960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328\" successfully" Apr 30 01:22:04.420570 containerd[1596]: time="2025-04-30T01:22:04.420501965Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 01:22:04.420777 containerd[1596]: time="2025-04-30T01:22:04.420591087Z" level=info msg="RemovePodSandbox \"960edc958ed36a9c0300635bae3b403afee669e792f88d4cc535bf27978ac328\" returns successfully" Apr 30 01:22:04.421420 containerd[1596]: time="2025-04-30T01:22:04.421240184Z" level=info msg="StopPodSandbox for \"9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99\"" Apr 30 01:22:04.525379 containerd[1596]: 2025-04-30 01:22:04.464 [WARNING][5353] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--5--cafd7e9e76-k8s-calico--kube--controllers--68b69776db--wk29h-eth0", GenerateName:"calico-kube-controllers-68b69776db-", Namespace:"calico-system", SelfLink:"", UID:"67dae70a-bf89-45d5-a866-8ee292cf8980", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68b69776db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-5-cafd7e9e76", ContainerID:"ae616d4046d9e926d5fbac5fff808f48055f37b4394008c15ae4960632e9b788", Pod:"calico-kube-controllers-68b69776db-wk29h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.112.133/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali86a7d9ad277", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:22:04.525379 containerd[1596]: 2025-04-30 01:22:04.464 [INFO][5353] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" Apr 30 01:22:04.525379 containerd[1596]: 2025-04-30 01:22:04.464 [INFO][5353] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" iface="eth0" netns="" Apr 30 01:22:04.525379 containerd[1596]: 2025-04-30 01:22:04.464 [INFO][5353] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" Apr 30 01:22:04.525379 containerd[1596]: 2025-04-30 01:22:04.464 [INFO][5353] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" Apr 30 01:22:04.525379 containerd[1596]: 2025-04-30 01:22:04.504 [INFO][5360] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" HandleID="k8s-pod-network.9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--kube--controllers--68b69776db--wk29h-eth0" Apr 30 01:22:04.525379 containerd[1596]: 2025-04-30 01:22:04.506 [INFO][5360] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:22:04.525379 containerd[1596]: 2025-04-30 01:22:04.506 [INFO][5360] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:22:04.525379 containerd[1596]: 2025-04-30 01:22:04.519 [WARNING][5360] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" HandleID="k8s-pod-network.9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--kube--controllers--68b69776db--wk29h-eth0" Apr 30 01:22:04.525379 containerd[1596]: 2025-04-30 01:22:04.519 [INFO][5360] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" HandleID="k8s-pod-network.9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--kube--controllers--68b69776db--wk29h-eth0" Apr 30 01:22:04.525379 containerd[1596]: 2025-04-30 01:22:04.521 [INFO][5360] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:22:04.525379 containerd[1596]: 2025-04-30 01:22:04.523 [INFO][5353] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" Apr 30 01:22:04.525927 containerd[1596]: time="2025-04-30T01:22:04.525890583Z" level=info msg="TearDown network for sandbox \"9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99\" successfully" Apr 30 01:22:04.525996 containerd[1596]: time="2025-04-30T01:22:04.525982586Z" level=info msg="StopPodSandbox for \"9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99\" returns successfully" Apr 30 01:22:04.526580 containerd[1596]: time="2025-04-30T01:22:04.526532400Z" level=info msg="RemovePodSandbox for \"9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99\"" Apr 30 01:22:04.526580 containerd[1596]: time="2025-04-30T01:22:04.526578201Z" level=info msg="Forcibly stopping sandbox \"9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99\"" Apr 30 01:22:04.624191 containerd[1596]: 2025-04-30 01:22:04.577 [WARNING][5378] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--5--cafd7e9e76-k8s-calico--kube--controllers--68b69776db--wk29h-eth0", GenerateName:"calico-kube-controllers-68b69776db-", Namespace:"calico-system", SelfLink:"", UID:"67dae70a-bf89-45d5-a866-8ee292cf8980", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68b69776db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-5-cafd7e9e76", ContainerID:"ae616d4046d9e926d5fbac5fff808f48055f37b4394008c15ae4960632e9b788", Pod:"calico-kube-controllers-68b69776db-wk29h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.112.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali86a7d9ad277", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:22:04.624191 containerd[1596]: 2025-04-30 01:22:04.578 [INFO][5378] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" Apr 30 01:22:04.624191 containerd[1596]: 2025-04-30 01:22:04.578 [INFO][5378] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" iface="eth0" netns="" Apr 30 01:22:04.624191 containerd[1596]: 2025-04-30 01:22:04.578 [INFO][5378] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" Apr 30 01:22:04.624191 containerd[1596]: 2025-04-30 01:22:04.578 [INFO][5378] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" Apr 30 01:22:04.624191 containerd[1596]: 2025-04-30 01:22:04.602 [INFO][5386] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" HandleID="k8s-pod-network.9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--kube--controllers--68b69776db--wk29h-eth0" Apr 30 01:22:04.624191 containerd[1596]: 2025-04-30 01:22:04.602 [INFO][5386] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:22:04.624191 containerd[1596]: 2025-04-30 01:22:04.602 [INFO][5386] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:22:04.624191 containerd[1596]: 2025-04-30 01:22:04.616 [WARNING][5386] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" HandleID="k8s-pod-network.9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--kube--controllers--68b69776db--wk29h-eth0" Apr 30 01:22:04.624191 containerd[1596]: 2025-04-30 01:22:04.616 [INFO][5386] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" HandleID="k8s-pod-network.9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--kube--controllers--68b69776db--wk29h-eth0" Apr 30 01:22:04.624191 containerd[1596]: 2025-04-30 01:22:04.619 [INFO][5386] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:22:04.624191 containerd[1596]: 2025-04-30 01:22:04.622 [INFO][5378] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99" Apr 30 01:22:04.624638 containerd[1596]: time="2025-04-30T01:22:04.624262299Z" level=info msg="TearDown network for sandbox \"9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99\" successfully" Apr 30 01:22:04.628069 containerd[1596]: time="2025-04-30T01:22:04.628022237Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 01:22:04.628176 containerd[1596]: time="2025-04-30T01:22:04.628107679Z" level=info msg="RemovePodSandbox \"9218ca5495b785ad60df1d6a7d11c27c2b0b12c274665d3a9a3c838fca78ac99\" returns successfully" Apr 30 01:22:04.628612 containerd[1596]: time="2025-04-30T01:22:04.628583772Z" level=info msg="StopPodSandbox for \"bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410\"" Apr 30 01:22:04.710810 containerd[1596]: 2025-04-30 01:22:04.672 [WARNING][5404] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--t2ldw-eth0", GenerateName:"calico-apiserver-6d4cb765cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce510606-0f1b-4bd3-b5b8-10bbf3c549c3", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4cb765cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-5-cafd7e9e76", ContainerID:"3ced2ecbf6420e41a52bdb34038ec244682dc688f69ba3f3acafae2cbd05c6d9", Pod:"calico-apiserver-6d4cb765cd-t2ldw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9906271b199", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:22:04.710810 containerd[1596]: 2025-04-30 01:22:04.672 [INFO][5404] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" Apr 30 01:22:04.710810 containerd[1596]: 2025-04-30 01:22:04.672 [INFO][5404] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" iface="eth0" netns="" Apr 30 01:22:04.710810 containerd[1596]: 2025-04-30 01:22:04.672 [INFO][5404] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" Apr 30 01:22:04.710810 containerd[1596]: 2025-04-30 01:22:04.672 [INFO][5404] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" Apr 30 01:22:04.710810 containerd[1596]: 2025-04-30 01:22:04.693 [INFO][5411] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" HandleID="k8s-pod-network.bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--t2ldw-eth0" Apr 30 01:22:04.710810 containerd[1596]: 2025-04-30 01:22:04.693 [INFO][5411] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:22:04.710810 containerd[1596]: 2025-04-30 01:22:04.693 [INFO][5411] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:22:04.710810 containerd[1596]: 2025-04-30 01:22:04.704 [WARNING][5411] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" HandleID="k8s-pod-network.bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--t2ldw-eth0" Apr 30 01:22:04.710810 containerd[1596]: 2025-04-30 01:22:04.705 [INFO][5411] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" HandleID="k8s-pod-network.bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--t2ldw-eth0" Apr 30 01:22:04.710810 containerd[1596]: 2025-04-30 01:22:04.707 [INFO][5411] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:22:04.710810 containerd[1596]: 2025-04-30 01:22:04.709 [INFO][5404] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" Apr 30 01:22:04.711541 containerd[1596]: time="2025-04-30T01:22:04.710877710Z" level=info msg="TearDown network for sandbox \"bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410\" successfully" Apr 30 01:22:04.711541 containerd[1596]: time="2025-04-30T01:22:04.710910591Z" level=info msg="StopPodSandbox for \"bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410\" returns successfully" Apr 30 01:22:04.711541 containerd[1596]: time="2025-04-30T01:22:04.711492086Z" level=info msg="RemovePodSandbox for \"bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410\"" Apr 30 01:22:04.711541 containerd[1596]: time="2025-04-30T01:22:04.711539407Z" level=info msg="Forcibly stopping sandbox \"bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410\"" Apr 30 01:22:04.794705 containerd[1596]: 2025-04-30 01:22:04.753 [WARNING][5429] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--t2ldw-eth0", GenerateName:"calico-apiserver-6d4cb765cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce510606-0f1b-4bd3-b5b8-10bbf3c549c3", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4cb765cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-5-cafd7e9e76", ContainerID:"3ced2ecbf6420e41a52bdb34038ec244682dc688f69ba3f3acafae2cbd05c6d9", Pod:"calico-apiserver-6d4cb765cd-t2ldw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9906271b199", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:22:04.794705 containerd[1596]: 2025-04-30 01:22:04.753 [INFO][5429] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" Apr 30 01:22:04.794705 containerd[1596]: 2025-04-30 01:22:04.753 [INFO][5429] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" iface="eth0" netns="" Apr 30 01:22:04.794705 containerd[1596]: 2025-04-30 01:22:04.753 [INFO][5429] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" Apr 30 01:22:04.794705 containerd[1596]: 2025-04-30 01:22:04.753 [INFO][5429] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" Apr 30 01:22:04.794705 containerd[1596]: 2025-04-30 01:22:04.776 [INFO][5436] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" HandleID="k8s-pod-network.bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--t2ldw-eth0" Apr 30 01:22:04.794705 containerd[1596]: 2025-04-30 01:22:04.776 [INFO][5436] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:22:04.794705 containerd[1596]: 2025-04-30 01:22:04.776 [INFO][5436] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:22:04.794705 containerd[1596]: 2025-04-30 01:22:04.787 [WARNING][5436] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" HandleID="k8s-pod-network.bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--t2ldw-eth0" Apr 30 01:22:04.794705 containerd[1596]: 2025-04-30 01:22:04.787 [INFO][5436] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" HandleID="k8s-pod-network.bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--t2ldw-eth0" Apr 30 01:22:04.794705 containerd[1596]: 2025-04-30 01:22:04.790 [INFO][5436] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:22:04.794705 containerd[1596]: 2025-04-30 01:22:04.792 [INFO][5429] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410" Apr 30 01:22:04.794705 containerd[1596]: time="2025-04-30T01:22:04.794662367Z" level=info msg="TearDown network for sandbox \"bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410\" successfully" Apr 30 01:22:04.800314 containerd[1596]: time="2025-04-30T01:22:04.800243032Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 01:22:04.801063 containerd[1596]: time="2025-04-30T01:22:04.800370955Z" level=info msg="RemovePodSandbox \"bd6be251701703d71c5981dc72f35b68b1b9ba6d45fe8c89122422ba622fd410\" returns successfully" Apr 30 01:22:04.801288 containerd[1596]: time="2025-04-30T01:22:04.801120575Z" level=info msg="StopPodSandbox for \"48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab\"" Apr 30 01:22:04.892264 containerd[1596]: 2025-04-30 01:22:04.850 [WARNING][5454] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--5--cafd7e9e76-k8s-csi--node--driver--rqtjj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9fa4d4e7-7c7d-4b64-b56b-683ffe29c791", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-5-cafd7e9e76", ContainerID:"04da888daa6363c236b4519c051dd2d51bf0f4cda8a2e258b4fa20f3ddec8f3a", Pod:"csi-node-driver-rqtjj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.112.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali260860ac8f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:22:04.892264 containerd[1596]: 2025-04-30 01:22:04.851 [INFO][5454] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" Apr 30 01:22:04.892264 containerd[1596]: 2025-04-30 01:22:04.851 [INFO][5454] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" iface="eth0" netns="" Apr 30 01:22:04.892264 containerd[1596]: 2025-04-30 01:22:04.851 [INFO][5454] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" Apr 30 01:22:04.892264 containerd[1596]: 2025-04-30 01:22:04.851 [INFO][5454] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" Apr 30 01:22:04.892264 containerd[1596]: 2025-04-30 01:22:04.871 [INFO][5461] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" HandleID="k8s-pod-network.48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-csi--node--driver--rqtjj-eth0" Apr 30 01:22:04.892264 containerd[1596]: 2025-04-30 01:22:04.872 [INFO][5461] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:22:04.892264 containerd[1596]: 2025-04-30 01:22:04.872 [INFO][5461] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:22:04.892264 containerd[1596]: 2025-04-30 01:22:04.883 [WARNING][5461] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" HandleID="k8s-pod-network.48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-csi--node--driver--rqtjj-eth0" Apr 30 01:22:04.892264 containerd[1596]: 2025-04-30 01:22:04.883 [INFO][5461] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" HandleID="k8s-pod-network.48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-csi--node--driver--rqtjj-eth0" Apr 30 01:22:04.892264 containerd[1596]: 2025-04-30 01:22:04.888 [INFO][5461] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:22:04.892264 containerd[1596]: 2025-04-30 01:22:04.889 [INFO][5454] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" Apr 30 01:22:04.893589 containerd[1596]: time="2025-04-30T01:22:04.892298144Z" level=info msg="TearDown network for sandbox \"48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab\" successfully" Apr 30 01:22:04.893589 containerd[1596]: time="2025-04-30T01:22:04.892326505Z" level=info msg="StopPodSandbox for \"48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab\" returns successfully" Apr 30 01:22:04.893589 containerd[1596]: time="2025-04-30T01:22:04.893523016Z" level=info msg="RemovePodSandbox for \"48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab\"" Apr 30 01:22:04.893589 containerd[1596]: time="2025-04-30T01:22:04.893558737Z" level=info msg="Forcibly stopping sandbox \"48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab\"" Apr 30 01:22:04.974089 containerd[1596]: 2025-04-30 01:22:04.933 [WARNING][5479] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--5--cafd7e9e76-k8s-csi--node--driver--rqtjj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9fa4d4e7-7c7d-4b64-b56b-683ffe29c791", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-5-cafd7e9e76", ContainerID:"04da888daa6363c236b4519c051dd2d51bf0f4cda8a2e258b4fa20f3ddec8f3a", Pod:"csi-node-driver-rqtjj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.112.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali260860ac8f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:22:04.974089 containerd[1596]: 2025-04-30 01:22:04.934 [INFO][5479] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" Apr 30 01:22:04.974089 containerd[1596]: 2025-04-30 01:22:04.934 [INFO][5479] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" iface="eth0" netns="" Apr 30 01:22:04.974089 containerd[1596]: 2025-04-30 01:22:04.934 [INFO][5479] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" Apr 30 01:22:04.974089 containerd[1596]: 2025-04-30 01:22:04.934 [INFO][5479] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" Apr 30 01:22:04.974089 containerd[1596]: 2025-04-30 01:22:04.958 [INFO][5488] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" HandleID="k8s-pod-network.48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-csi--node--driver--rqtjj-eth0" Apr 30 01:22:04.974089 containerd[1596]: 2025-04-30 01:22:04.958 [INFO][5488] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:22:04.974089 containerd[1596]: 2025-04-30 01:22:04.958 [INFO][5488] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:22:04.974089 containerd[1596]: 2025-04-30 01:22:04.967 [WARNING][5488] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" HandleID="k8s-pod-network.48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-csi--node--driver--rqtjj-eth0" Apr 30 01:22:04.974089 containerd[1596]: 2025-04-30 01:22:04.968 [INFO][5488] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" HandleID="k8s-pod-network.48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-csi--node--driver--rqtjj-eth0" Apr 30 01:22:04.974089 containerd[1596]: 2025-04-30 01:22:04.970 [INFO][5488] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:22:04.974089 containerd[1596]: 2025-04-30 01:22:04.972 [INFO][5479] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab" Apr 30 01:22:04.974646 containerd[1596]: time="2025-04-30T01:22:04.974177752Z" level=info msg="TearDown network for sandbox \"48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab\" successfully" Apr 30 01:22:04.978358 containerd[1596]: time="2025-04-30T01:22:04.978170616Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 01:22:04.978485 containerd[1596]: time="2025-04-30T01:22:04.978415782Z" level=info msg="RemovePodSandbox \"48b1dfcf85826925a04bc8229bb0446aacb8082a68baa38015f88a0c51e475ab\" returns successfully" Apr 30 01:22:04.979533 containerd[1596]: time="2025-04-30T01:22:04.978972396Z" level=info msg="StopPodSandbox for \"2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0\"" Apr 30 01:22:05.064378 containerd[1596]: 2025-04-30 01:22:05.024 [WARNING][5506] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--stlxr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"07f2d83c-63d4-40ec-8d8b-ab44f45ca400", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-5-cafd7e9e76", ContainerID:"07594503cbeae843f603ce353149774f0a991955c128b74aa06a851c56e2c561", Pod:"coredns-7db6d8ff4d-stlxr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif464bfa2a59", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:22:05.064378 containerd[1596]: 2025-04-30 01:22:05.025 [INFO][5506] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" Apr 30 01:22:05.064378 containerd[1596]: 2025-04-30 01:22:05.025 [INFO][5506] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" iface="eth0" netns="" Apr 30 01:22:05.064378 containerd[1596]: 2025-04-30 01:22:05.025 [INFO][5506] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" Apr 30 01:22:05.064378 containerd[1596]: 2025-04-30 01:22:05.025 [INFO][5506] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" Apr 30 01:22:05.064378 containerd[1596]: 2025-04-30 01:22:05.048 [INFO][5514] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" HandleID="k8s-pod-network.2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--stlxr-eth0" Apr 30 01:22:05.064378 containerd[1596]: 2025-04-30 01:22:05.048 [INFO][5514] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Apr 30 01:22:05.064378 containerd[1596]: 2025-04-30 01:22:05.048 [INFO][5514] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:22:05.064378 containerd[1596]: 2025-04-30 01:22:05.058 [WARNING][5514] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" HandleID="k8s-pod-network.2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--stlxr-eth0" Apr 30 01:22:05.064378 containerd[1596]: 2025-04-30 01:22:05.058 [INFO][5514] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" HandleID="k8s-pod-network.2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--stlxr-eth0" Apr 30 01:22:05.064378 containerd[1596]: 2025-04-30 01:22:05.060 [INFO][5514] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:22:05.064378 containerd[1596]: 2025-04-30 01:22:05.062 [INFO][5506] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" Apr 30 01:22:05.066340 containerd[1596]: time="2025-04-30T01:22:05.065995896Z" level=info msg="TearDown network for sandbox \"2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0\" successfully" Apr 30 01:22:05.066340 containerd[1596]: time="2025-04-30T01:22:05.066043498Z" level=info msg="StopPodSandbox for \"2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0\" returns successfully" Apr 30 01:22:05.067310 containerd[1596]: time="2025-04-30T01:22:05.067144046Z" level=info msg="RemovePodSandbox for \"2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0\"" Apr 30 01:22:05.067310 containerd[1596]: time="2025-04-30T01:22:05.067187487Z" level=info msg="Forcibly stopping sandbox \"2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0\"" Apr 30 01:22:05.151900 containerd[1596]: 2025-04-30 01:22:05.108 [WARNING][5533] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--stlxr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"07f2d83c-63d4-40ec-8d8b-ab44f45ca400", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-5-cafd7e9e76", ContainerID:"07594503cbeae843f603ce353149774f0a991955c128b74aa06a851c56e2c561", Pod:"coredns-7db6d8ff4d-stlxr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif464bfa2a59", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:22:05.151900 containerd[1596]: 2025-04-30 01:22:05.109 [INFO][5533] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" Apr 30 01:22:05.151900 containerd[1596]: 2025-04-30 01:22:05.109 [INFO][5533] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" iface="eth0" netns="" Apr 30 01:22:05.151900 containerd[1596]: 2025-04-30 01:22:05.109 [INFO][5533] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" Apr 30 01:22:05.151900 containerd[1596]: 2025-04-30 01:22:05.109 [INFO][5533] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" Apr 30 01:22:05.151900 containerd[1596]: 2025-04-30 01:22:05.133 [INFO][5540] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" HandleID="k8s-pod-network.2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--stlxr-eth0" Apr 30 01:22:05.151900 containerd[1596]: 2025-04-30 01:22:05.133 [INFO][5540] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:22:05.151900 containerd[1596]: 2025-04-30 01:22:05.133 [INFO][5540] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:22:05.151900 containerd[1596]: 2025-04-30 01:22:05.146 [WARNING][5540] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" HandleID="k8s-pod-network.2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--stlxr-eth0" Apr 30 01:22:05.151900 containerd[1596]: 2025-04-30 01:22:05.146 [INFO][5540] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" HandleID="k8s-pod-network.2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-coredns--7db6d8ff4d--stlxr-eth0" Apr 30 01:22:05.151900 containerd[1596]: 2025-04-30 01:22:05.148 [INFO][5540] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:22:05.151900 containerd[1596]: 2025-04-30 01:22:05.150 [INFO][5533] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0" Apr 30 01:22:05.153970 containerd[1596]: time="2025-04-30T01:22:05.152454901Z" level=info msg="TearDown network for sandbox \"2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0\" successfully" Apr 30 01:22:05.156485 containerd[1596]: time="2025-04-30T01:22:05.156291441Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 01:22:05.156485 containerd[1596]: time="2025-04-30T01:22:05.156379523Z" level=info msg="RemovePodSandbox \"2d9a5cc7c1cb68002269d558e0e2b091eb99d086a1237b3c0dc00860e0d175a0\" returns successfully" Apr 30 01:22:05.157081 containerd[1596]: time="2025-04-30T01:22:05.157002819Z" level=info msg="StopPodSandbox for \"df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6\"" Apr 30 01:22:05.244300 containerd[1596]: 2025-04-30 01:22:05.202 [WARNING][5559] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--j4ngc-eth0", GenerateName:"calico-apiserver-6d4cb765cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"a72ecee6-7912-44aa-bb57-e01abdb75987", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4cb765cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-5-cafd7e9e76", ContainerID:"77b00cfc654908d83994648009dbd071a828425eee2e58052ca2cebac33ef8f8", Pod:"calico-apiserver-6d4cb765cd-j4ngc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali00aa0070d57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:22:05.244300 containerd[1596]: 2025-04-30 01:22:05.202 [INFO][5559] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" Apr 30 01:22:05.244300 containerd[1596]: 2025-04-30 01:22:05.202 [INFO][5559] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" iface="eth0" netns="" Apr 30 01:22:05.244300 containerd[1596]: 2025-04-30 01:22:05.202 [INFO][5559] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" Apr 30 01:22:05.244300 containerd[1596]: 2025-04-30 01:22:05.202 [INFO][5559] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" Apr 30 01:22:05.244300 containerd[1596]: 2025-04-30 01:22:05.222 [INFO][5566] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" HandleID="k8s-pod-network.df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--j4ngc-eth0" Apr 30 01:22:05.244300 containerd[1596]: 2025-04-30 01:22:05.222 [INFO][5566] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 01:22:05.244300 containerd[1596]: 2025-04-30 01:22:05.222 [INFO][5566] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 01:22:05.244300 containerd[1596]: 2025-04-30 01:22:05.238 [WARNING][5566] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" HandleID="k8s-pod-network.df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--j4ngc-eth0" Apr 30 01:22:05.244300 containerd[1596]: 2025-04-30 01:22:05.238 [INFO][5566] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" HandleID="k8s-pod-network.df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--j4ngc-eth0" Apr 30 01:22:05.244300 containerd[1596]: 2025-04-30 01:22:05.241 [INFO][5566] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 01:22:05.244300 containerd[1596]: 2025-04-30 01:22:05.242 [INFO][5559] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" Apr 30 01:22:05.244300 containerd[1596]: time="2025-04-30T01:22:05.244151082Z" level=info msg="TearDown network for sandbox \"df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6\" successfully" Apr 30 01:22:05.244300 containerd[1596]: time="2025-04-30T01:22:05.244177323Z" level=info msg="StopPodSandbox for \"df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6\" returns successfully" Apr 30 01:22:05.245633 containerd[1596]: time="2025-04-30T01:22:05.245577359Z" level=info msg="RemovePodSandbox for \"df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6\"" Apr 30 01:22:05.245633 containerd[1596]: time="2025-04-30T01:22:05.245631201Z" level=info msg="Forcibly stopping sandbox \"df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6\"" Apr 30 01:22:05.336160 containerd[1596]: 2025-04-30 01:22:05.293 [WARNING][5584] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--j4ngc-eth0", GenerateName:"calico-apiserver-6d4cb765cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"a72ecee6-7912-44aa-bb57-e01abdb75987", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 1, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4cb765cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-5-cafd7e9e76", ContainerID:"77b00cfc654908d83994648009dbd071a828425eee2e58052ca2cebac33ef8f8", Pod:"calico-apiserver-6d4cb765cd-j4ngc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali00aa0070d57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 01:22:05.336160 containerd[1596]: 2025-04-30 01:22:05.293 [INFO][5584] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" Apr 30 01:22:05.336160 containerd[1596]: 2025-04-30 01:22:05.293 [INFO][5584] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" iface="eth0" netns=""
Apr 30 01:22:05.336160 containerd[1596]: 2025-04-30 01:22:05.293 [INFO][5584] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6"
Apr 30 01:22:05.336160 containerd[1596]: 2025-04-30 01:22:05.293 [INFO][5584] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6"
Apr 30 01:22:05.336160 containerd[1596]: 2025-04-30 01:22:05.316 [INFO][5591] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" HandleID="k8s-pod-network.df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--j4ngc-eth0"
Apr 30 01:22:05.336160 containerd[1596]: 2025-04-30 01:22:05.317 [INFO][5591] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Apr 30 01:22:05.336160 containerd[1596]: 2025-04-30 01:22:05.317 [INFO][5591] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Apr 30 01:22:05.336160 containerd[1596]: 2025-04-30 01:22:05.329 [WARNING][5591] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" HandleID="k8s-pod-network.df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--j4ngc-eth0"
Apr 30 01:22:05.336160 containerd[1596]: 2025-04-30 01:22:05.329 [INFO][5591] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" HandleID="k8s-pod-network.df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6" Workload="ci--4081--3--3--5--cafd7e9e76-k8s-calico--apiserver--6d4cb765cd--j4ngc-eth0"
Apr 30 01:22:05.336160 containerd[1596]: 2025-04-30 01:22:05.331 [INFO][5591] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Apr 30 01:22:05.336160 containerd[1596]: 2025-04-30 01:22:05.333 [INFO][5584] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6"
Apr 30 01:22:05.336160 containerd[1596]: time="2025-04-30T01:22:05.335106564Z" level=info msg="TearDown network for sandbox \"df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6\" successfully"
Apr 30 01:22:05.339090 containerd[1596]: time="2025-04-30T01:22:05.338763259Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 01:22:05.339090 containerd[1596]: time="2025-04-30T01:22:05.338886262Z" level=info msg="RemovePodSandbox \"df3070ccc0dccd0469b38c2e74f3b110cb85ae7070c374d14b70903c01efb5e6\" returns successfully"
Apr 30 01:22:09.747965 systemd[1]: run-containerd-runc-k8s.io-a4b11a4251d964d65ab625154c902999828410f797b7a7668547e4df7f0569eb-runc.6ZpXSJ.mount: Deactivated successfully.
Apr 30 01:22:09.820047 kubelet[2977]: I0430 01:22:09.818555 2977 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rqtjj" podStartSLOduration=34.820913824 podStartE2EDuration="41.818469649s" podCreationTimestamp="2025-04-30 01:21:28 +0000 UTC" firstStartedPulling="2025-04-30 01:21:49.916604958 +0000 UTC m=+45.860297353" lastFinishedPulling="2025-04-30 01:21:56.914160743 +0000 UTC m=+52.857853178" observedRunningTime="2025-04-30 01:21:57.617675894 +0000 UTC m=+53.561368329" watchObservedRunningTime="2025-04-30 01:22:09.818469649 +0000 UTC m=+65.762162084"
Apr 30 01:22:15.253301 kubelet[2977]: I0430 01:22:15.252320 2977 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 30 01:23:39.748726 systemd[1]: run-containerd-runc-k8s.io-a4b11a4251d964d65ab625154c902999828410f797b7a7668547e4df7f0569eb-runc.qJsOtd.mount: Deactivated successfully.
Apr 30 01:24:35.811880 systemd[1]: run-containerd-runc-k8s.io-a49c8acf713b245475afe1a088fb7fd0e50237e40f82059bc8d391a250fac689-runc.FQQbJq.mount: Deactivated successfully.
Apr 30 01:25:46.831516 systemd[1]: Started sshd@7-78.47.197.16:22-139.178.68.195:54558.service - OpenSSH per-connection server daemon (139.178.68.195:54558).
Apr 30 01:25:47.828794 sshd[6078]: Accepted publickey for core from 139.178.68.195 port 54558 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:25:47.832829 sshd[6078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:25:47.838762 systemd-logind[1565]: New session 8 of user core.
Apr 30 01:25:47.844414 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 30 01:25:48.610338 sshd[6078]: pam_unix(sshd:session): session closed for user core
Apr 30 01:25:48.616540 systemd[1]: sshd@7-78.47.197.16:22-139.178.68.195:54558.service: Deactivated successfully.
Apr 30 01:25:48.622959 systemd[1]: session-8.scope: Deactivated successfully.
Apr 30 01:25:48.624131 systemd-logind[1565]: Session 8 logged out. Waiting for processes to exit.
Apr 30 01:25:48.626559 systemd-logind[1565]: Removed session 8.
Apr 30 01:25:52.795421 systemd[1]: run-containerd-runc-k8s.io-a49c8acf713b245475afe1a088fb7fd0e50237e40f82059bc8d391a250fac689-runc.5Ah3Ho.mount: Deactivated successfully.
Apr 30 01:25:53.785396 systemd[1]: Started sshd@8-78.47.197.16:22-139.178.68.195:54570.service - OpenSSH per-connection server daemon (139.178.68.195:54570).
Apr 30 01:25:54.767879 sshd[6117]: Accepted publickey for core from 139.178.68.195 port 54570 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:25:54.769426 sshd[6117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:25:54.776686 systemd-logind[1565]: New session 9 of user core.
Apr 30 01:25:54.782479 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 30 01:25:55.532755 sshd[6117]: pam_unix(sshd:session): session closed for user core
Apr 30 01:25:55.540410 systemd-logind[1565]: Session 9 logged out. Waiting for processes to exit.
Apr 30 01:25:55.541082 systemd[1]: sshd@8-78.47.197.16:22-139.178.68.195:54570.service: Deactivated successfully.
Apr 30 01:25:55.544318 systemd[1]: session-9.scope: Deactivated successfully.
Apr 30 01:25:55.546040 systemd-logind[1565]: Removed session 9.
Apr 30 01:26:00.700432 systemd[1]: Started sshd@9-78.47.197.16:22-139.178.68.195:59132.service - OpenSSH per-connection server daemon (139.178.68.195:59132).
Apr 30 01:26:01.699114 sshd[6132]: Accepted publickey for core from 139.178.68.195 port 59132 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:26:01.702130 sshd[6132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:26:01.710436 systemd-logind[1565]: New session 10 of user core.
Apr 30 01:26:01.717701 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 30 01:26:02.466575 sshd[6132]: pam_unix(sshd:session): session closed for user core
Apr 30 01:26:02.473187 systemd[1]: sshd@9-78.47.197.16:22-139.178.68.195:59132.service: Deactivated successfully.
Apr 30 01:26:02.477074 systemd-logind[1565]: Session 10 logged out. Waiting for processes to exit.
Apr 30 01:26:02.477837 systemd[1]: session-10.scope: Deactivated successfully.
Apr 30 01:26:02.481608 systemd-logind[1565]: Removed session 10.
Apr 30 01:26:02.632391 systemd[1]: Started sshd@10-78.47.197.16:22-139.178.68.195:59142.service - OpenSSH per-connection server daemon (139.178.68.195:59142).
Apr 30 01:26:03.611870 sshd[6146]: Accepted publickey for core from 139.178.68.195 port 59142 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:26:03.614273 sshd[6146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:26:03.623926 systemd-logind[1565]: New session 11 of user core.
Apr 30 01:26:03.631366 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 30 01:26:04.407150 sshd[6146]: pam_unix(sshd:session): session closed for user core
Apr 30 01:26:04.414300 systemd[1]: sshd@10-78.47.197.16:22-139.178.68.195:59142.service: Deactivated successfully.
Apr 30 01:26:04.419952 systemd[1]: session-11.scope: Deactivated successfully.
Apr 30 01:26:04.422963 systemd-logind[1565]: Session 11 logged out. Waiting for processes to exit.
Apr 30 01:26:04.424579 systemd-logind[1565]: Removed session 11.
Apr 30 01:26:04.574370 systemd[1]: Started sshd@11-78.47.197.16:22-139.178.68.195:59148.service - OpenSSH per-connection server daemon (139.178.68.195:59148).
Apr 30 01:26:05.541525 sshd[6164]: Accepted publickey for core from 139.178.68.195 port 59148 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:26:05.543710 sshd[6164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:26:05.549100 systemd-logind[1565]: New session 12 of user core.
Apr 30 01:26:05.554553 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 30 01:26:05.816458 systemd[1]: run-containerd-runc-k8s.io-a49c8acf713b245475afe1a088fb7fd0e50237e40f82059bc8d391a250fac689-runc.0bLjCp.mount: Deactivated successfully.
Apr 30 01:26:06.306538 sshd[6164]: pam_unix(sshd:session): session closed for user core
Apr 30 01:26:06.311199 systemd[1]: sshd@11-78.47.197.16:22-139.178.68.195:59148.service: Deactivated successfully.
Apr 30 01:26:06.316586 systemd-logind[1565]: Session 12 logged out. Waiting for processes to exit.
Apr 30 01:26:06.316967 systemd[1]: session-12.scope: Deactivated successfully.
Apr 30 01:26:06.319636 systemd-logind[1565]: Removed session 12.
Apr 30 01:26:11.475004 systemd[1]: Started sshd@12-78.47.197.16:22-139.178.68.195:48410.service - OpenSSH per-connection server daemon (139.178.68.195:48410).
Apr 30 01:26:12.441848 sshd[6218]: Accepted publickey for core from 139.178.68.195 port 48410 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:26:12.444333 sshd[6218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:26:12.451588 systemd-logind[1565]: New session 13 of user core.
Apr 30 01:26:12.455370 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 30 01:26:13.187801 sshd[6218]: pam_unix(sshd:session): session closed for user core
Apr 30 01:26:13.195066 systemd[1]: sshd@12-78.47.197.16:22-139.178.68.195:48410.service: Deactivated successfully.
Apr 30 01:26:13.199006 systemd[1]: session-13.scope: Deactivated successfully.
Apr 30 01:26:13.200593 systemd-logind[1565]: Session 13 logged out. Waiting for processes to exit.
Apr 30 01:26:13.201724 systemd-logind[1565]: Removed session 13.
Apr 30 01:26:13.350393 systemd[1]: Started sshd@13-78.47.197.16:22-139.178.68.195:48426.service - OpenSSH per-connection server daemon (139.178.68.195:48426).
Apr 30 01:26:14.337863 sshd[6231]: Accepted publickey for core from 139.178.68.195 port 48426 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:26:14.339537 sshd[6231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:26:14.346167 systemd-logind[1565]: New session 14 of user core.
Apr 30 01:26:14.353644 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 30 01:26:15.204409 sshd[6231]: pam_unix(sshd:session): session closed for user core
Apr 30 01:26:15.209969 systemd[1]: sshd@13-78.47.197.16:22-139.178.68.195:48426.service: Deactivated successfully.
Apr 30 01:26:15.214998 systemd-logind[1565]: Session 14 logged out. Waiting for processes to exit.
Apr 30 01:26:15.216769 systemd[1]: session-14.scope: Deactivated successfully.
Apr 30 01:26:15.219451 systemd-logind[1565]: Removed session 14.
Apr 30 01:26:15.374382 systemd[1]: Started sshd@14-78.47.197.16:22-139.178.68.195:47390.service - OpenSSH per-connection server daemon (139.178.68.195:47390).
Apr 30 01:26:16.357611 sshd[6243]: Accepted publickey for core from 139.178.68.195 port 47390 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:26:16.361580 sshd[6243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:26:16.366752 systemd-logind[1565]: New session 15 of user core.
Apr 30 01:26:16.371587 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 30 01:26:19.098554 sshd[6243]: pam_unix(sshd:session): session closed for user core
Apr 30 01:26:19.104129 systemd[1]: sshd@14-78.47.197.16:22-139.178.68.195:47390.service: Deactivated successfully.
Apr 30 01:26:19.111204 systemd[1]: session-15.scope: Deactivated successfully.
Apr 30 01:26:19.112900 systemd-logind[1565]: Session 15 logged out. Waiting for processes to exit.
Apr 30 01:26:19.115685 systemd-logind[1565]: Removed session 15.
Apr 30 01:26:19.269215 systemd[1]: Started sshd@15-78.47.197.16:22-139.178.68.195:47404.service - OpenSSH per-connection server daemon (139.178.68.195:47404).
Apr 30 01:26:20.278989 sshd[6262]: Accepted publickey for core from 139.178.68.195 port 47404 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:26:20.281326 sshd[6262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:26:20.286941 systemd-logind[1565]: New session 16 of user core.
Apr 30 01:26:20.293458 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 30 01:26:21.173134 sshd[6262]: pam_unix(sshd:session): session closed for user core
Apr 30 01:26:21.178218 systemd[1]: sshd@15-78.47.197.16:22-139.178.68.195:47404.service: Deactivated successfully.
Apr 30 01:26:21.184501 systemd[1]: session-16.scope: Deactivated successfully.
Apr 30 01:26:21.184674 systemd-logind[1565]: Session 16 logged out. Waiting for processes to exit.
Apr 30 01:26:21.186613 systemd-logind[1565]: Removed session 16.
Apr 30 01:26:21.339475 systemd[1]: Started sshd@16-78.47.197.16:22-139.178.68.195:47406.service - OpenSSH per-connection server daemon (139.178.68.195:47406).
Apr 30 01:26:22.325211 sshd[6276]: Accepted publickey for core from 139.178.68.195 port 47406 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:26:22.328646 sshd[6276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:26:22.335439 systemd-logind[1565]: New session 17 of user core.
Apr 30 01:26:22.339518 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 30 01:26:23.084504 sshd[6276]: pam_unix(sshd:session): session closed for user core
Apr 30 01:26:23.088690 systemd[1]: sshd@16-78.47.197.16:22-139.178.68.195:47406.service: Deactivated successfully.
Apr 30 01:26:23.092852 systemd[1]: session-17.scope: Deactivated successfully.
Apr 30 01:26:23.094588 systemd-logind[1565]: Session 17 logged out. Waiting for processes to exit.
Apr 30 01:26:23.096538 systemd-logind[1565]: Removed session 17.
Apr 30 01:26:28.250537 systemd[1]: Started sshd@17-78.47.197.16:22-139.178.68.195:35004.service - OpenSSH per-connection server daemon (139.178.68.195:35004).
Apr 30 01:26:29.219523 sshd[6302]: Accepted publickey for core from 139.178.68.195 port 35004 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:26:29.222713 sshd[6302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:26:29.228996 systemd-logind[1565]: New session 18 of user core.
Apr 30 01:26:29.234634 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 30 01:26:29.967981 sshd[6302]: pam_unix(sshd:session): session closed for user core
Apr 30 01:26:29.973711 systemd[1]: sshd@17-78.47.197.16:22-139.178.68.195:35004.service: Deactivated successfully.
Apr 30 01:26:29.980490 systemd[1]: session-18.scope: Deactivated successfully.
Apr 30 01:26:29.981619 systemd-logind[1565]: Session 18 logged out. Waiting for processes to exit.
Apr 30 01:26:29.983273 systemd-logind[1565]: Removed session 18.
Apr 30 01:26:35.138614 systemd[1]: Started sshd@18-78.47.197.16:22-139.178.68.195:35018.service - OpenSSH per-connection server daemon (139.178.68.195:35018).
Apr 30 01:26:36.129384 sshd[6321]: Accepted publickey for core from 139.178.68.195 port 35018 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:26:36.132044 sshd[6321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:26:36.137655 systemd-logind[1565]: New session 19 of user core.
Apr 30 01:26:36.143530 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 30 01:26:36.893976 sshd[6321]: pam_unix(sshd:session): session closed for user core
Apr 30 01:26:36.903193 systemd-logind[1565]: Session 19 logged out. Waiting for processes to exit.
Apr 30 01:26:36.903648 systemd[1]: sshd@18-78.47.197.16:22-139.178.68.195:35018.service: Deactivated successfully.
Apr 30 01:26:36.907586 systemd[1]: session-19.scope: Deactivated successfully.
Apr 30 01:26:36.910974 systemd-logind[1565]: Removed session 19.
Apr 30 01:26:42.057340 systemd[1]: Started sshd@19-78.47.197.16:22-139.178.68.195:55318.service - OpenSSH per-connection server daemon (139.178.68.195:55318).
Apr 30 01:26:43.037252 sshd[6377]: Accepted publickey for core from 139.178.68.195 port 55318 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:26:43.040469 sshd[6377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:26:43.051043 systemd-logind[1565]: New session 20 of user core.
Apr 30 01:26:43.054377 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 30 01:26:43.793567 sshd[6377]: pam_unix(sshd:session): session closed for user core
Apr 30 01:26:43.798099 systemd[1]: sshd@19-78.47.197.16:22-139.178.68.195:55318.service: Deactivated successfully.
Apr 30 01:26:43.803176 systemd[1]: session-20.scope: Deactivated successfully.
Apr 30 01:26:43.803182 systemd-logind[1565]: Session 20 logged out. Waiting for processes to exit.
Apr 30 01:26:43.806179 systemd-logind[1565]: Removed session 20.
Apr 30 01:26:48.966404 systemd[1]: Started sshd@20-78.47.197.16:22-139.178.68.195:43046.service - OpenSSH per-connection server daemon (139.178.68.195:43046).
Apr 30 01:26:49.958550 sshd[6391]: Accepted publickey for core from 139.178.68.195 port 43046 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:26:49.960128 sshd[6391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:26:49.964249 systemd-logind[1565]: New session 21 of user core.
Apr 30 01:26:49.969300 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 30 01:26:50.726792 sshd[6391]: pam_unix(sshd:session): session closed for user core
Apr 30 01:26:50.733603 systemd[1]: sshd@20-78.47.197.16:22-139.178.68.195:43046.service: Deactivated successfully.
Apr 30 01:26:50.739149 systemd[1]: session-21.scope: Deactivated successfully.
Apr 30 01:26:50.741564 systemd-logind[1565]: Session 21 logged out. Waiting for processes to exit.
Apr 30 01:26:50.744634 systemd-logind[1565]: Removed session 21.
Apr 30 01:26:55.889433 systemd[1]: Started sshd@21-78.47.197.16:22-139.178.68.195:43938.service - OpenSSH per-connection server daemon (139.178.68.195:43938).
Apr 30 01:26:56.871969 sshd[6426]: Accepted publickey for core from 139.178.68.195 port 43938 ssh2: RSA SHA256:guQRvUOPKM6rhTi9HiaHHvKlymi7GBfMGSY9fjztuKs
Apr 30 01:26:56.873210 sshd[6426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 01:26:56.879269 systemd-logind[1565]: New session 22 of user core.
Apr 30 01:26:56.884360 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 30 01:26:57.619939 sshd[6426]: pam_unix(sshd:session): session closed for user core
Apr 30 01:26:57.628029 systemd[1]: sshd@21-78.47.197.16:22-139.178.68.195:43938.service: Deactivated successfully.
Apr 30 01:26:57.628309 systemd-logind[1565]: Session 22 logged out. Waiting for processes to exit.
Apr 30 01:26:57.631817 systemd[1]: session-22.scope: Deactivated successfully.
Apr 30 01:26:57.633632 systemd-logind[1565]: Removed session 22.
Apr 30 01:27:05.813561 systemd[1]: run-containerd-runc-k8s.io-a49c8acf713b245475afe1a088fb7fd0e50237e40f82059bc8d391a250fac689-runc.REcRht.mount: Deactivated successfully.
Apr 30 01:27:08.594158 systemd[1]: Started sshd@22-78.47.197.16:22-117.50.80.200:51800.service - OpenSSH per-connection server daemon (117.50.80.200:51800).
Apr 30 01:27:08.604385 sshd[6467]: Connection closed by 117.50.80.200 port 51800
Apr 30 01:27:08.606719 systemd[1]: sshd@22-78.47.197.16:22-117.50.80.200:51800.service: Deactivated successfully.
Apr 30 01:27:13.404752 kubelet[2977]: I0430 01:27:13.404605 2977 status_manager.go:853] "Failed to get status for pod" podUID="4d07f70aa0952ca4fa7f61fec4f71b3a" pod="kube-system/kube-apiserver-ci-4081-3-3-5-cafd7e9e76" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:36498->10.0.0.2:2379: read: connection timed out"
Apr 30 01:27:13.410117 kubelet[2977]: E0430 01:27:13.408887 2977 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:36580->10.0.0.2:2379: read: connection timed out"
Apr 30 01:27:13.437321 containerd[1596]: time="2025-04-30T01:27:13.435741970Z" level=info msg="shim disconnected" id=837738ff957ff8201da680dfebc4f5435e2c365d91f43c4e953505df322783fd namespace=k8s.io
Apr 30 01:27:13.437321 containerd[1596]: time="2025-04-30T01:27:13.437319062Z" level=warning msg="cleaning up after shim disconnected" id=837738ff957ff8201da680dfebc4f5435e2c365d91f43c4e953505df322783fd namespace=k8s.io
Apr 30 01:27:13.438451 containerd[1596]: time="2025-04-30T01:27:13.437335783Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 01:27:13.439443 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-837738ff957ff8201da680dfebc4f5435e2c365d91f43c4e953505df322783fd-rootfs.mount: Deactivated successfully.
Apr 30 01:27:13.509563 kubelet[2977]: I0430 01:27:13.509497 2977 scope.go:117] "RemoveContainer" containerID="837738ff957ff8201da680dfebc4f5435e2c365d91f43c4e953505df322783fd"
Apr 30 01:27:13.513816 containerd[1596]: time="2025-04-30T01:27:13.513561798Z" level=info msg="CreateContainer within sandbox \"48a9ce14a9a0c0ed1202278e9a22bddefb92807358bd05ed654ebb7f7711baad\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 30 01:27:13.529917 containerd[1596]: time="2025-04-30T01:27:13.529856491Z" level=info msg="CreateContainer within sandbox \"48a9ce14a9a0c0ed1202278e9a22bddefb92807358bd05ed654ebb7f7711baad\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"1f60d52b63e6f6f3bf86b258af5c76cd866b9696db7e1114a6d2a83d3c6b0e08\""
Apr 30 01:27:13.530824 containerd[1596]: time="2025-04-30T01:27:13.530691479Z" level=info msg="StartContainer for \"1f60d52b63e6f6f3bf86b258af5c76cd866b9696db7e1114a6d2a83d3c6b0e08\""
Apr 30 01:27:13.602505 containerd[1596]: time="2025-04-30T01:27:13.602455028Z" level=info msg="StartContainer for \"1f60d52b63e6f6f3bf86b258af5c76cd866b9696db7e1114a6d2a83d3c6b0e08\" returns successfully"
Apr 30 01:27:14.114965 containerd[1596]: time="2025-04-30T01:27:14.114885154Z" level=info msg="shim disconnected" id=4f88d9cf20ffdd6334426d34104c315811132aa495f8d6033074fd8da124879d namespace=k8s.io
Apr 30 01:27:14.114965 containerd[1596]: time="2025-04-30T01:27:14.114960117Z" level=warning msg="cleaning up after shim disconnected" id=4f88d9cf20ffdd6334426d34104c315811132aa495f8d6033074fd8da124879d namespace=k8s.io
Apr 30 01:27:14.114965 containerd[1596]: time="2025-04-30T01:27:14.114973317Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 01:27:14.414608 containerd[1596]: time="2025-04-30T01:27:14.414419697Z" level=info msg="shim disconnected" id=4ff1553d0be0c0ab4595f9a3576ce1ce1ab40a5ec1f1917d3903ce1148e389d9 namespace=k8s.io
Apr 30 01:27:14.414608 containerd[1596]: time="2025-04-30T01:27:14.414530981Z" level=warning msg="cleaning up after shim disconnected" id=4ff1553d0be0c0ab4595f9a3576ce1ce1ab40a5ec1f1917d3903ce1148e389d9 namespace=k8s.io
Apr 30 01:27:14.414608 containerd[1596]: time="2025-04-30T01:27:14.414543021Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 01:27:14.440720 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ff1553d0be0c0ab4595f9a3576ce1ce1ab40a5ec1f1917d3903ce1148e389d9-rootfs.mount: Deactivated successfully.
Apr 30 01:27:14.440898 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f88d9cf20ffdd6334426d34104c315811132aa495f8d6033074fd8da124879d-rootfs.mount: Deactivated successfully.
Apr 30 01:27:14.517978 kubelet[2977]: I0430 01:27:14.517884 2977 scope.go:117] "RemoveContainer" containerID="4f88d9cf20ffdd6334426d34104c315811132aa495f8d6033074fd8da124879d"
Apr 30 01:27:14.526474 containerd[1596]: time="2025-04-30T01:27:14.525972180Z" level=info msg="CreateContainer within sandbox \"034cd43b037ccf7bb04cfd272368f2088e00cf7f518b8f7f2391dbb872857945\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 30 01:27:14.528655 kubelet[2977]: I0430 01:27:14.528037 2977 scope.go:117] "RemoveContainer" containerID="4ff1553d0be0c0ab4595f9a3576ce1ce1ab40a5ec1f1917d3903ce1148e389d9"
Apr 30 01:27:14.547572 containerd[1596]: time="2025-04-30T01:27:14.546962986Z" level=info msg="CreateContainer within sandbox \"aafa1b936a773294918eee6366a0e47c956acc47b348f8bf7521f9440924a133\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 30 01:27:14.551073 containerd[1596]: time="2025-04-30T01:27:14.549251461Z" level=info msg="CreateContainer within sandbox \"034cd43b037ccf7bb04cfd272368f2088e00cf7f518b8f7f2391dbb872857945\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"f4afcce91e316e07f281d208f61fad6e40ab49010eb5e462c80a1ae397264bb1\""
Apr 30 01:27:14.552171 containerd[1596]: time="2025-04-30T01:27:14.552053912Z" level=info msg="StartContainer for \"f4afcce91e316e07f281d208f61fad6e40ab49010eb5e462c80a1ae397264bb1\""
Apr 30 01:27:14.570491 containerd[1596]: time="2025-04-30T01:27:14.570129783Z" level=info msg="CreateContainer within sandbox \"aafa1b936a773294918eee6366a0e47c956acc47b348f8bf7521f9440924a133\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"0de1886a06421a537545c073e651db6aa8b27cb33c94d625d1f9f645a9aed491\""
Apr 30 01:27:14.572226 containerd[1596]: time="2025-04-30T01:27:14.571477227Z" level=info msg="StartContainer for \"0de1886a06421a537545c073e651db6aa8b27cb33c94d625d1f9f645a9aed491\""
Apr 30 01:27:14.684118 containerd[1596]: time="2025-04-30T01:27:14.683971141Z" level=info msg="StartContainer for \"f4afcce91e316e07f281d208f61fad6e40ab49010eb5e462c80a1ae397264bb1\" returns successfully"
Apr 30 01:27:14.728183 containerd[1596]: time="2025-04-30T01:27:14.727340557Z" level=info msg="StartContainer for \"0de1886a06421a537545c073e651db6aa8b27cb33c94d625d1f9f645a9aed491\" returns successfully"
Apr 30 01:27:15.581871 kubelet[2977]: E0430 01:27:15.581649 2977 kubelet_node_status.go:544] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-04-30T01:27:12Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-04-30T01:27:12Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-04-30T01:27:12Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-04-30T01:27:12Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"ci-4081-3-3-5-cafd7e9e76\": rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:36484->10.0.0.2:2379: read: connection timed out"
Apr 30 01:27:17.629077 kubelet[2977]: E0430 01:27:17.628512 2977 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:36396->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-3-5-cafd7e9e76.183af44f57cfb61e kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-3-5-cafd7e9e76,UID:4d07f70aa0952ca4fa7f61fec4f71b3a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-5-cafd7e9e76,},FirstTimestamp:2025-04-30 01:27:07.172312606 +0000 UTC m=+363.116005081,LastTimestamp:2025-04-30 01:27:07.172312606 +0000 UTC m=+363.116005081,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-5-cafd7e9e76,}"