Feb 13 20:06:57.914313 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 13 20:06:57.914448 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025 Feb 13 20:06:57.914462 kernel: KASLR enabled Feb 13 20:06:57.914468 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Feb 13 20:06:57.914474 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18 Feb 13 20:06:57.914481 kernel: random: crng init done Feb 13 20:06:57.914488 kernel: ACPI: Early table checksum verification disabled Feb 13 20:06:57.914494 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Feb 13 20:06:57.914501 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Feb 13 20:06:57.914509 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:06:57.914515 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:06:57.914521 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:06:57.914527 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:06:57.914558 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:06:57.914567 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:06:57.914576 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:06:57.914583 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:06:57.914590 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:06:57.914597 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Feb 13 20:06:57.914604 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Feb 13 20:06:57.914611 kernel: NUMA: Failed to initialise from firmware Feb 13 20:06:57.914618 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Feb 13 20:06:57.914624 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff] Feb 13 20:06:57.914631 kernel: Zone ranges: Feb 13 20:06:57.914637 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Feb 13 20:06:57.914645 kernel: DMA32 empty Feb 13 20:06:57.914652 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Feb 13 20:06:57.914659 kernel: Movable zone start for each node Feb 13 20:06:57.914665 kernel: Early memory node ranges Feb 13 20:06:57.914672 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff] Feb 13 20:06:57.914678 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Feb 13 20:06:57.914685 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Feb 13 20:06:57.914692 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Feb 13 20:06:57.914699 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Feb 13 20:06:57.914705 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Feb 13 20:06:57.914712 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Feb 13 20:06:57.914719 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Feb 13 20:06:57.914727 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Feb 13 20:06:57.914733 kernel: psci: probing for conduit method from ACPI. 
Feb 13 20:06:57.914740 kernel: psci: PSCIv1.1 detected in firmware. Feb 13 20:06:57.914749 kernel: psci: Using standard PSCI v0.2 function IDs Feb 13 20:06:57.914760 kernel: psci: Trusted OS migration not required Feb 13 20:06:57.914768 kernel: psci: SMC Calling Convention v1.1 Feb 13 20:06:57.914778 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Feb 13 20:06:57.914786 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 13 20:06:57.914794 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 13 20:06:57.914801 kernel: pcpu-alloc: [0] 0 [0] 1 Feb 13 20:06:57.914808 kernel: Detected PIPT I-cache on CPU0 Feb 13 20:06:57.914815 kernel: CPU features: detected: GIC system register CPU interface Feb 13 20:06:57.914822 kernel: CPU features: detected: Hardware dirty bit management Feb 13 20:06:57.914829 kernel: CPU features: detected: Spectre-v4 Feb 13 20:06:57.914836 kernel: CPU features: detected: Spectre-BHB Feb 13 20:06:57.914843 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 13 20:06:57.914851 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 13 20:06:57.914859 kernel: CPU features: detected: ARM erratum 1418040 Feb 13 20:06:57.914866 kernel: CPU features: detected: SSBS not fully self-synchronizing Feb 13 20:06:57.914873 kernel: alternatives: applying boot alternatives Feb 13 20:06:57.914881 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7 Feb 13 20:06:57.914889 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 20:06:57.914897 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 20:06:57.914904 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 20:06:57.914911 kernel: Fallback order for Node 0: 0 Feb 13 20:06:57.914932 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Feb 13 20:06:57.914940 kernel: Policy zone: Normal Feb 13 20:06:57.914950 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 20:06:57.914957 kernel: software IO TLB: area num 2. Feb 13 20:06:57.914964 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Feb 13 20:06:57.914971 kernel: Memory: 3882936K/4096000K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 213064K reserved, 0K cma-reserved) Feb 13 20:06:57.914979 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 20:06:57.914986 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 20:06:57.914994 kernel: rcu: RCU event tracing is enabled. Feb 13 20:06:57.915001 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 20:06:57.915009 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 20:06:57.915016 kernel: Tracing variant of Tasks RCU enabled. Feb 13 20:06:57.915023 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 20:06:57.915032 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 20:06:57.915039 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 13 20:06:57.915046 kernel: GICv3: 256 SPIs implemented Feb 13 20:06:57.915053 kernel: GICv3: 0 Extended SPIs implemented Feb 13 20:06:57.915060 kernel: Root IRQ handler: gic_handle_irq Feb 13 20:06:57.915067 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Feb 13 20:06:57.915074 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Feb 13 20:06:57.915081 kernel: ITS [mem 0x08080000-0x0809ffff] Feb 13 20:06:57.915088 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Feb 13 20:06:57.915096 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Feb 13 20:06:57.915103 kernel: GICv3: using LPI property table @0x00000001000e0000 Feb 13 20:06:57.915111 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Feb 13 20:06:57.915120 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 20:06:57.915128 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 20:06:57.915135 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 13 20:06:57.915142 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 13 20:06:57.915149 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 13 20:06:57.915196 kernel: Console: colour dummy device 80x25 Feb 13 20:06:57.915204 kernel: ACPI: Core revision 20230628 Feb 13 20:06:57.915212 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Feb 13 20:06:57.915219 kernel: pid_max: default: 32768 minimum: 301 Feb 13 20:06:57.915337 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 20:06:57.915354 kernel: landlock: Up and running. Feb 13 20:06:57.915361 kernel: SELinux: Initializing. Feb 13 20:06:57.915369 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 20:06:57.915376 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 20:06:57.915384 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 20:06:57.915391 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 20:06:57.915399 kernel: rcu: Hierarchical SRCU implementation. Feb 13 20:06:57.915406 kernel: rcu: Max phase no-delay instances is 400. Feb 13 20:06:57.915414 kernel: Platform MSI: ITS@0x8080000 domain created Feb 13 20:06:57.915423 kernel: PCI/MSI: ITS@0x8080000 domain created Feb 13 20:06:57.915430 kernel: Remapping and enabling EFI services. Feb 13 20:06:57.915438 kernel: smp: Bringing up secondary CPUs ... Feb 13 20:06:57.915445 kernel: Detected PIPT I-cache on CPU1 Feb 13 20:06:57.915452 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Feb 13 20:06:57.915460 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Feb 13 20:06:57.915467 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 20:06:57.915474 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 13 20:06:57.915482 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 20:06:57.915488 kernel: SMP: Total of 2 processors activated. 
Feb 13 20:06:57.915497 kernel: CPU features: detected: 32-bit EL0 Support Feb 13 20:06:57.915505 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 13 20:06:57.915518 kernel: CPU features: detected: Common not Private translations Feb 13 20:06:57.915527 kernel: CPU features: detected: CRC32 instructions Feb 13 20:06:57.915563 kernel: CPU features: detected: Enhanced Virtualization Traps Feb 13 20:06:57.915572 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 13 20:06:57.915579 kernel: CPU features: detected: LSE atomic instructions Feb 13 20:06:57.915587 kernel: CPU features: detected: Privileged Access Never Feb 13 20:06:57.915595 kernel: CPU features: detected: RAS Extension Support Feb 13 20:06:57.915605 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 13 20:06:57.915613 kernel: CPU: All CPU(s) started at EL1 Feb 13 20:06:57.915623 kernel: alternatives: applying system-wide alternatives Feb 13 20:06:57.915631 kernel: devtmpfs: initialized Feb 13 20:06:57.915639 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 20:06:57.915647 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 20:06:57.915654 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 20:06:57.915664 kernel: SMBIOS 3.0.0 present. Feb 13 20:06:57.915672 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Feb 13 20:06:57.915679 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 20:06:57.915687 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 13 20:06:57.915695 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 13 20:06:57.915703 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 13 20:06:57.915711 kernel: audit: initializing netlink subsys (disabled) Feb 13 20:06:57.915718 kernel: audit: type=2000 audit(0.014:1): state=initialized audit_enabled=0 res=1 Feb 13 20:06:57.915726 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 20:06:57.915735 kernel: cpuidle: using governor menu Feb 13 20:06:57.915743 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Feb 13 20:06:57.915751 kernel: ASID allocator initialised with 32768 entries Feb 13 20:06:57.915758 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 20:06:57.915766 kernel: Serial: AMBA PL011 UART driver Feb 13 20:06:57.915774 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Feb 13 20:06:57.915782 kernel: Modules: 0 pages in range for non-PLT usage Feb 13 20:06:57.915790 kernel: Modules: 509040 pages in range for PLT usage Feb 13 20:06:57.915798 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 20:06:57.915807 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 20:06:57.915815 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 13 20:06:57.915823 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 13 20:06:57.915831 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 20:06:57.915838 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 20:06:57.915846 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Feb 13 20:06:57.915854 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 13 20:06:57.915862 kernel: ACPI: Added _OSI(Module Device) Feb 13 20:06:57.915870 kernel: ACPI: Added _OSI(Processor Device) Feb 13 20:06:57.915879 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 20:06:57.915886 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 20:06:57.915894 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 20:06:57.915903 kernel: ACPI: Interpreter enabled Feb 13 20:06:57.915910 kernel: ACPI: Using GIC for interrupt routing Feb 13 20:06:57.915927 kernel: ACPI: MCFG table detected, 1 entries Feb 13 20:06:57.915935 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Feb 13 20:06:57.915943 kernel: printk: console [ttyAMA0] enabled Feb 13 20:06:57.915951 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 20:06:57.916104 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 20:06:57.916193 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 13 20:06:57.916265 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 13 20:06:57.916334 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Feb 13 20:06:57.916403 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Feb 13 20:06:57.916414 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Feb 13 20:06:57.916422 kernel: PCI host bridge to bus 0000:00 Feb 13 20:06:57.916500 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Feb 13 20:06:57.916586 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Feb 13 20:06:57.916652 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Feb 13 20:06:57.916715 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 20:06:57.916800 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Feb 13 20:06:57.916882 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Feb 13 20:06:57.917162 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Feb 13 20:06:57.917269 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Feb 13 20:06:57.917354 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Feb 13 20:06:57.917442 kernel: pci 
0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Feb 13 20:06:57.917556 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Feb 13 20:06:57.917636 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Feb 13 20:06:57.917728 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Feb 13 20:06:57.917805 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Feb 13 20:06:57.917888 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Feb 13 20:06:57.917974 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Feb 13 20:06:57.918204 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Feb 13 20:06:57.918290 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Feb 13 20:06:57.918374 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Feb 13 20:06:57.918503 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Feb 13 20:06:57.918632 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Feb 13 20:06:57.918718 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Feb 13 20:06:57.918797 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Feb 13 20:06:57.918866 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Feb 13 20:06:57.919161 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Feb 13 20:06:57.919255 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Feb 13 20:06:57.919359 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Feb 13 20:06:57.919435 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] Feb 13 20:06:57.919518 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Feb 13 20:06:57.919617 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Feb 13 20:06:57.920029 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 20:06:57.920280 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Feb 13 20:06:57.920382 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Feb 13 20:06:57.920458 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Feb 13 20:06:57.920574 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Feb 13 20:06:57.920665 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Feb 13 20:06:57.920829 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Feb 13 20:06:57.921074 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Feb 13 20:06:57.921175 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Feb 13 20:06:57.921405 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Feb 13 20:06:57.921818 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff] Feb 13 20:06:57.922078 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Feb 13 20:06:57.922174 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Feb 13 20:06:57.922248 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Feb 13 20:06:57.922328 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Feb 13 20:06:57.922411 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Feb 13 20:06:57.922500 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Feb 13 20:06:57.923775 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Feb 13 20:06:57.924073 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Feb 13 20:06:57.924277 kernel: pci 0000:00:02.0: bridge 
window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 13 20:06:57.924363 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Feb 13 20:06:57.924443 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Feb 13 20:06:57.924519 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 13 20:06:57.924678 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 13 20:06:57.924751 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Feb 13 20:06:57.925023 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 13 20:06:57.925106 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Feb 13 20:06:57.925179 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Feb 13 20:06:57.925391 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 13 20:06:57.925482 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Feb 13 20:06:57.925634 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Feb 13 20:06:57.925710 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Feb 13 20:06:57.925776 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Feb 13 20:06:57.925968 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Feb 13 20:06:57.926047 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Feb 13 20:06:57.926136 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Feb 13 20:06:57.926210 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Feb 13 20:06:57.926281 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Feb 13 20:06:57.926348 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Feb 13 20:06:57.926413 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Feb 13 20:06:57.926483 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Feb 13 20:06:57.926658 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Feb 13 20:06:57.926737 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Feb 13 20:06:57.926815 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Feb 13 20:06:57.926884 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Feb 13 20:06:57.927730 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Feb 13 20:06:57.927807 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 
0x10000000-0x101fffff] Feb 13 20:06:57.927878 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] Feb 13 20:06:57.927971 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Feb 13 20:06:57.928043 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Feb 13 20:06:57.928190 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Feb 13 20:06:57.928272 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Feb 13 20:06:57.928346 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Feb 13 20:06:57.928418 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Feb 13 20:06:57.929640 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Feb 13 20:06:57.929788 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Feb 13 20:06:57.929873 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Feb 13 20:06:57.930269 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Feb 13 20:06:57.930360 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Feb 13 20:06:57.930433 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Feb 13 20:06:57.930507 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Feb 13 20:06:57.930596 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Feb 13 20:06:57.930676 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Feb 13 20:06:57.930746 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Feb 13 20:06:57.931002 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Feb 13 20:06:57.931091 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Feb 13 20:06:57.931170 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Feb 13 20:06:57.931243 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Feb 13 20:06:57.931328 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Feb 13 20:06:57.933707 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Feb 13 20:06:57.933822 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Feb 13 20:06:57.933904 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Feb 13 20:06:57.933999 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Feb 13 20:06:57.934075 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Feb 13 20:06:57.934148 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Feb 13 20:06:57.934221 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Feb 13 20:06:57.934296 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Feb 13 20:06:57.934368 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Feb 13 20:06:57.934483 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Feb 13 20:06:57.934581 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Feb 13 20:06:57.934664 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Feb 13 20:06:57.934735 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Feb 13 20:06:57.934809 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Feb 13 20:06:57.934880 kernel: pci 0000:00:03.0: BAR 13: assigned [io 
0x9000-0x9fff] Feb 13 20:06:57.935002 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Feb 13 20:06:57.935087 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Feb 13 20:06:57.935162 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 20:06:57.935241 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Feb 13 20:06:57.935318 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Feb 13 20:06:57.935404 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Feb 13 20:06:57.936521 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Feb 13 20:06:57.936707 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Feb 13 20:06:57.937823 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Feb 13 20:06:57.937934 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Feb 13 20:06:57.938017 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Feb 13 20:06:57.938088 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Feb 13 20:06:57.938158 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Feb 13 20:06:57.938237 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Feb 13 20:06:57.938311 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Feb 13 20:06:57.938387 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Feb 13 20:06:57.938481 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Feb 13 20:06:57.938569 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Feb 13 20:06:57.938643 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Feb 13 20:06:57.938726 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Feb 13 20:06:57.938804 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Feb 13 20:06:57.938878 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Feb 13 20:06:57.938963 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Feb 13 20:06:57.939035 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Feb 13 20:06:57.939118 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Feb 13 20:06:57.939191 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff] Feb 13 20:06:57.939264 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Feb 13 20:06:57.939334 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Feb 13 20:06:57.939408 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Feb 13 20:06:57.939477 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Feb 13 20:06:57.942234 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Feb 13 20:06:57.942349 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Feb 13 20:06:57.942425 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Feb 13 20:06:57.942496 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Feb 13 20:06:57.942592 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Feb 13 20:06:57.942695 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Feb 13 20:06:57.942776 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Feb 13 20:06:57.942849 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Feb 13 20:06:57.942962 kernel: pci 
0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Feb 13 20:06:57.943055 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Feb 13 20:06:57.943127 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Feb 13 20:06:57.943195 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Feb 13 20:06:57.943261 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Feb 13 20:06:57.943332 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Feb 13 20:06:57.943402 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Feb 13 20:06:57.943469 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Feb 13 20:06:57.943571 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Feb 13 20:06:57.943651 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Feb 13 20:06:57.943723 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Feb 13 20:06:57.943793 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Feb 13 20:06:57.943862 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Feb 13 20:06:57.943946 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Feb 13 20:06:57.944010 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 13 20:06:57.944120 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Feb 13 20:06:57.944205 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Feb 13 20:06:57.944278 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Feb 13 20:06:57.944348 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Feb 13 20:06:57.944427 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Feb 13 20:06:57.944498 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Feb 13 20:06:57.945503 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Feb 13 20:06:57.945761 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Feb 13 20:06:57.945837 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Feb 13 20:06:57.945901 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Feb 13 20:06:57.946062 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Feb 13 20:06:57.946131 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Feb 13 20:06:57.946258 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Feb 13 20:06:57.946336 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Feb 13 20:06:57.946398 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Feb 13 20:06:57.946462 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Feb 13 20:06:57.946531 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Feb 13 20:06:57.946615 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Feb 13 20:06:57.946681 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Feb 13 20:06:57.946755 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Feb 13 20:06:57.946817 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Feb 13 20:06:57.946877 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Feb 13 20:06:57.946962 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Feb 13 20:06:57.947025 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Feb 13 20:06:57.947087 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Feb 13 20:06:57.947204 kernel: 
pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Feb 13 20:06:57.947280 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Feb 13 20:06:57.947342 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Feb 13 20:06:57.947353 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 13 20:06:57.947361 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 13 20:06:57.947370 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 13 20:06:57.947377 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 13 20:06:57.947386 kernel: iommu: Default domain type: Translated Feb 13 20:06:57.947394 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 13 20:06:57.947404 kernel: efivars: Registered efivars operations Feb 13 20:06:57.947411 kernel: vgaarb: loaded Feb 13 20:06:57.947419 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 13 20:06:57.947427 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 20:06:57.947435 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 20:06:57.947443 kernel: pnp: PnP ACPI init Feb 13 20:06:57.947529 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Feb 13 20:06:57.947636 kernel: pnp: PnP ACPI: found 1 devices Feb 13 20:06:57.947649 kernel: NET: Registered PF_INET protocol family Feb 13 20:06:57.947657 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 20:06:57.947665 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 20:06:57.947673 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 20:06:57.947681 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 20:06:57.947689 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 20:06:57.947699 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 20:06:57.947707 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 20:06:57.947715 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 20:06:57.947725 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 20:06:57.947815 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Feb 13 20:06:57.947828 kernel: PCI: CLS 0 bytes, default 64 Feb 13 20:06:57.947836 kernel: kvm [1]: HYP mode not available Feb 13 20:06:57.947844 kernel: Initialise system trusted keyrings Feb 13 20:06:57.947852 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 20:06:57.947860 kernel: Key type asymmetric registered Feb 13 20:06:57.947868 kernel: Asymmetric key parser 'x509' registered Feb 13 20:06:57.947877 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 13 20:06:57.947893 kernel: io scheduler mq-deadline registered Feb 13 20:06:57.947901 kernel: io scheduler kyber registered Feb 13 20:06:57.947909 kernel: io scheduler bfq registered Feb 13 20:06:57.948085 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Feb 13 20:06:57.948199 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Feb 13 20:06:57.948272 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Feb 13 20:06:57.948406 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 20:06:57.948482 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Feb 13 20:06:57.948579 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 51 Feb 13 20:06:57.948650 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 20:06:57.948720 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Feb 13 20:06:57.948788 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Feb 13 20:06:57.948856 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 20:06:57.948976 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Feb 13 20:06:57.949061 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Feb 13 20:06:57.949130 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 20:06:57.949201 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Feb 13 20:06:57.949300 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Feb 13 20:06:57.949378 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 20:06:57.949452 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Feb 13 20:06:57.949525 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Feb 13 20:06:57.949657 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 20:06:57.949740 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Feb 13 20:06:57.949809 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Feb 13 20:06:57.949876 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 20:06:57.949971 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Feb 13 20:06:57.950046 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Feb 13 20:06:57.950114 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 20:06:57.950125 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Feb 13 20:06:57.950193 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Feb 13 20:06:57.950266 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Feb 13 20:06:57.950339 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 20:06:57.950353 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 20:06:57.950361 kernel: ACPI: button: Power Button [PWRB] Feb 13 20:06:57.950369 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 20:06:57.950444 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Feb 13 20:06:57.950520 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Feb 13 20:06:57.950675 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 20:06:57.950691 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 13 20:06:57.950785 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Feb 13 20:06:57.950803 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Feb 13 20:06:57.950811 kernel: thunder_xcv, ver 1.0 Feb 13 20:06:57.950818 kernel: thunder_bgx, ver 1.0 Feb 13 20:06:57.950826 kernel: nicpf, ver 1.0 Feb 13 20:06:57.950834 kernel: nicvf, ver 
1.0 Feb 13 20:06:57.950914 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 20:06:57.951036 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T20:06:57 UTC (1739477217) Feb 13 20:06:57.951048 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 20:06:57.951061 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 13 20:06:57.951069 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 20:06:57.951077 kernel: watchdog: Hard watchdog permanently disabled Feb 13 20:06:57.951085 kernel: NET: Registered PF_INET6 protocol family Feb 13 20:06:57.951093 kernel: Segment Routing with IPv6 Feb 13 20:06:57.951101 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 20:06:57.951109 kernel: NET: Registered PF_PACKET protocol family Feb 13 20:06:57.951117 kernel: Key type dns_resolver registered Feb 13 20:06:57.951125 kernel: registered taskstats version 1 Feb 13 20:06:57.951135 kernel: Loading compiled-in X.509 certificates Feb 13 20:06:57.951143 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec' Feb 13 20:06:57.951150 kernel: Key type .fscrypt registered Feb 13 20:06:57.951158 kernel: Key type fscrypt-provisioning registered Feb 13 20:06:57.951166 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 20:06:57.951174 kernel: ima: Allocated hash algorithm: sha1 Feb 13 20:06:57.951182 kernel: ima: No architecture policies found Feb 13 20:06:57.951190 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 20:06:57.951198 kernel: clk: Disabling unused clocks Feb 13 20:06:57.951208 kernel: Freeing unused kernel memory: 39360K Feb 13 20:06:57.951216 kernel: Run /init as init process Feb 13 20:06:57.951223 kernel: with arguments: Feb 13 20:06:57.951231 kernel: /init Feb 13 20:06:57.951239 kernel: with environment: Feb 13 20:06:57.951246 kernel: HOME=/ Feb 13 20:06:57.951254 kernel: TERM=linux Feb 13 20:06:57.951276 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 20:06:57.951287 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:06:57.951300 systemd[1]: Detected virtualization kvm. Feb 13 20:06:57.951309 systemd[1]: Detected architecture arm64. Feb 13 20:06:57.951317 systemd[1]: Running in initrd. Feb 13 20:06:57.951325 systemd[1]: No hostname configured, using default hostname. Feb 13 20:06:57.951332 systemd[1]: Hostname set to . Feb 13 20:06:57.951341 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:06:57.951351 systemd[1]: Queued start job for default target initrd.target. Feb 13 20:06:57.951361 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:06:57.951370 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:06:57.951379 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 20:06:57.951387 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:06:57.951396 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
Feb 13 20:06:57.951404 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 20:06:57.951414 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 20:06:57.951424 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 20:06:57.951433 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:06:57.951441 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:06:57.951449 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:06:57.951458 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:06:57.951466 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:06:57.951474 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:06:57.951483 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:06:57.951493 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:06:57.951502 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 20:06:57.951510 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 20:06:57.951518 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:06:57.951527 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:06:57.951548 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:06:57.951557 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:06:57.951566 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 20:06:57.951574 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:06:57.951585 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 20:06:57.951593 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 20:06:57.951602 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:06:57.951610 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:06:57.951619 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:06:57.951627 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 20:06:57.951787 systemd-journald[235]: Collecting audit messages is disabled. Feb 13 20:06:57.951819 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:06:57.951828 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 20:06:57.951837 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:06:57.951847 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:06:57.951857 systemd-journald[235]: Journal started Feb 13 20:06:57.951877 systemd-journald[235]: Runtime Journal (/run/log/journal/966a7eec80144118b55c72dd1775c52d) is 8.0M, max 76.6M, 68.6M free. Feb 13 20:06:57.941399 systemd-modules-load[237]: Inserted module 'overlay' Feb 13 20:06:57.962622 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Feb 13 20:06:57.964060 systemd-modules-load[237]: Inserted module 'br_netfilter' Feb 13 20:06:57.965294 kernel: Bridge firewalling registered Feb 13 20:06:57.969859 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:06:57.969990 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:06:57.971589 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:06:57.974898 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:06:57.984139 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:06:57.988250 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:06:57.991821 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:06:57.992656 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:06:58.005894 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 20:06:58.008593 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:06:58.010261 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:06:58.018143 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:06:58.026573 dracut-cmdline[267]: dracut-dracut-053 Feb 13 20:06:58.032685 dracut-cmdline[267]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7 Feb 13 20:06:58.027853 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:06:58.064869 systemd-resolved[277]: Positive Trust Anchors: Feb 13 20:06:58.065770 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:06:58.065836 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:06:58.076779 systemd-resolved[277]: Defaulting to hostname 'linux'. Feb 13 20:06:58.077981 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:06:58.078689 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:06:58.112689 kernel: SCSI subsystem initialized Feb 13 20:06:58.118622 kernel: Loading iSCSI transport class v2.0-870. Feb 13 20:06:58.129705 kernel: iscsi: registered transport (tcp) Feb 13 20:06:58.144599 kernel: iscsi: registered transport (qla4xxx) Feb 13 20:06:58.144665 kernel: QLogic iSCSI HBA Driver Feb 13 20:06:58.200654 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Feb 13 20:06:58.206746 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 20:06:58.236619 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 20:06:58.236690 kernel: device-mapper: uevent: version 1.0.3 Feb 13 20:06:58.236705 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 20:06:58.300594 kernel: raid6: neonx8 gen() 15339 MB/s Feb 13 20:06:58.318945 kernel: raid6: neonx4 gen() 14734 MB/s Feb 13 20:06:58.335599 kernel: raid6: neonx2 gen() 12544 MB/s Feb 13 20:06:58.352609 kernel: raid6: neonx1 gen() 10249 MB/s Feb 13 20:06:58.369639 kernel: raid6: int64x8 gen() 6788 MB/s Feb 13 20:06:58.386585 kernel: raid6: int64x4 gen() 7215 MB/s Feb 13 20:06:58.403630 kernel: raid6: int64x2 gen() 5964 MB/s Feb 13 20:06:58.420588 kernel: raid6: int64x1 gen() 4930 MB/s Feb 13 20:06:58.420664 kernel: raid6: using algorithm neonx8 gen() 15339 MB/s Feb 13 20:06:58.437591 kernel: raid6: .... xor() 11515 MB/s, rmw enabled Feb 13 20:06:58.437661 kernel: raid6: using neon recovery algorithm Feb 13 20:06:58.443698 kernel: xor: measuring software checksum speed Feb 13 20:06:58.443784 kernel: 8regs : 17370 MB/sec Feb 13 20:06:58.443800 kernel: 32regs : 16004 MB/sec Feb 13 20:06:58.443814 kernel: arm64_neon : 26927 MB/sec Feb 13 20:06:58.444573 kernel: xor: using function: arm64_neon (26927 MB/sec) Feb 13 20:06:58.495593 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 20:06:58.510859 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:06:58.518842 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:06:58.535135 systemd-udevd[456]: Using default interface naming scheme 'v255'. Feb 13 20:06:58.538763 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:06:58.550792 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 20:06:58.566293 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Feb 13 20:06:58.606773 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:06:58.612829 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:06:58.683241 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:06:58.690863 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 20:06:58.713650 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 20:06:58.715202 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:06:58.716132 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:06:58.717224 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:06:58.723840 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 20:06:58.756592 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Feb 13 20:06:58.787670 kernel: scsi host0: Virtio SCSI HBA Feb 13 20:06:58.797105 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 20:06:58.797192 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Feb 13 20:06:58.826399 kernel: ACPI: bus type USB registered Feb 13 20:06:58.826450 kernel: usbcore: registered new interface driver usbfs Feb 13 20:06:58.826472 kernel: usbcore: registered new interface driver hub Feb 13 20:06:58.826483 kernel: usbcore: registered new device driver usb Feb 13 20:06:58.827249 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:06:58.829865 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:06:58.830774 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:06:58.831356 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:06:58.831527 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:06:58.836955 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:06:58.849069 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Feb 13 20:06:58.874853 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Feb 13 20:06:58.875066 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Feb 13 20:06:58.875264 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Feb 13 20:06:58.875433 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Feb 13 20:06:58.875531 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Feb 13 20:06:58.875642 kernel: hub 1-0:1.0: USB hub found Feb 13 20:06:58.875807 kernel: hub 1-0:1.0: 4 ports detected Feb 13 20:06:58.875901 kernel: sd 0:0:0:1: Power-on or device reset occurred Feb 13 20:06:58.876053 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Feb 13 20:06:58.876151 kernel: sd 0:0:0:1: [sda] Write Protect is off Feb 13 20:06:58.876248 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Feb 13 20:06:58.876337 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 20:06:58.876424 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Feb 13 20:06:58.876527 kernel: sr 0:0:0:0: Power-on or device reset occurred Feb 13 20:06:58.879727 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Feb 13 20:06:58.879875 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 20:06:58.879897 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 20:06:58.879908 kernel: GPT:17805311 != 80003071 Feb 13 20:06:58.879931 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 20:06:58.879942 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Feb 13 20:06:58.880084 kernel: GPT:17805311 != 80003071 Feb 13 20:06:58.880158 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 20:06:58.880174 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:06:58.880185 kernel: hub 2-0:1.0: USB hub found Feb 13 20:06:58.880315 kernel: hub 2-0:1.0: 4 ports detected Feb 13 20:06:58.880406 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Feb 13 20:06:58.848312 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:06:58.879531 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Feb 13 20:06:58.889758 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:06:58.921146 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:06:58.951610 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (520) Feb 13 20:06:58.959620 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (503) Feb 13 20:06:58.960528 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Feb 13 20:06:58.967738 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Feb 13 20:06:58.977677 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Feb 13 20:06:58.986461 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Feb 13 20:06:58.987263 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Feb 13 20:06:58.996804 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 20:06:59.015966 disk-uuid[573]: Primary Header is updated. Feb 13 20:06:59.015966 disk-uuid[573]: Secondary Entries is updated. Feb 13 20:06:59.015966 disk-uuid[573]: Secondary Header is updated. Feb 13 20:06:59.024563 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:06:59.031580 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:06:59.035565 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:06:59.110566 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Feb 13 20:06:59.354596 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Feb 13 20:06:59.503173 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Feb 13 20:06:59.503245 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Feb 13 20:06:59.505590 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Feb 13 20:06:59.562130 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Feb 13 20:06:59.562334 kernel: usbcore: registered new interface driver usbhid Feb 13 20:06:59.562346 kernel: usbhid: USB HID core driver Feb 13 20:07:00.040287 disk-uuid[574]: The operation has completed successfully. Feb 13 20:07:00.041058 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:07:00.103665 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 20:07:00.104754 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 20:07:00.117863 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 20:07:00.124437 sh[591]: Success Feb 13 20:07:00.138212 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 20:07:00.195069 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 20:07:00.204072 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 20:07:00.206573 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
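[Editor's note] disk-uuid.service ("Generate new UUID for disk GPT if necessary") rewrites the primary header, the secondary entries, and the secondary header, which is why the sda partition table is re-read several times above. A hedged sketch of the identifier-generation part only; the on-disk GPT serialization and CRC updates are not shown and the helper below is hypothetical:

```python
# Hypothetical illustration: generate the fresh random GUIDs that a
# "generate new UUID for disk GPT" step needs for the disk and each partition.
import uuid

disk_guid = uuid.uuid4()
partitions = ["sda1", "sda2", "sda3", "sda4", "sda6", "sda7", "sda9"]  # from the log
part_guids = {p: uuid.uuid4() for p in partitions}

print(f"new disk GUID: {disk_guid}")
for name, guid in part_guids.items():
    print(f"  {name}: {guid}")
```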
Feb 13 20:07:00.232786 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 Feb 13 20:07:00.232866 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 20:07:00.232885 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 20:07:00.232901 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 20:07:00.234562 kernel: BTRFS info (device dm-0): using free space tree Feb 13 20:07:00.244690 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 20:07:00.248177 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 20:07:00.249023 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 20:07:00.255863 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 20:07:00.260791 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 20:07:00.271059 kernel: BTRFS info (device sda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 20:07:00.271122 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 20:07:00.271134 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:07:00.276566 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:07:00.276640 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:07:00.289633 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 20:07:00.290933 kernel: BTRFS info (device sda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 20:07:00.299986 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 20:07:00.304776 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 20:07:00.410147 ignition[681]: Ignition 2.19.0 Feb 13 20:07:00.410157 ignition[681]: Stage: fetch-offline Feb 13 20:07:00.410198 ignition[681]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:07:00.410207 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 20:07:00.410372 ignition[681]: parsed url from cmdline: "" Feb 13 20:07:00.410376 ignition[681]: no config URL provided Feb 13 20:07:00.410380 ignition[681]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:07:00.414770 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:07:00.410387 ignition[681]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:07:00.410393 ignition[681]: failed to fetch config: resource requires networking Feb 13 20:07:00.410639 ignition[681]: Ignition finished successfully Feb 13 20:07:00.429068 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:07:00.437849 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:07:00.460761 systemd-networkd[780]: lo: Link UP Feb 13 20:07:00.460772 systemd-networkd[780]: lo: Gained carrier Feb 13 20:07:00.462438 systemd-networkd[780]: Enumeration completed Feb 13 20:07:00.462721 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:07:00.464154 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
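[Editor's note] The fetch-offline messages above trace Ignition's local lookup order: base configs in /usr/lib/ignition/base.d, platform configs in base.platform.d/hetzner, a URL from the kernel command line, and finally /usr/lib/ignition/user.ign. With none present and no network yet, the stage ends with "resource requires networking" and defers to the fetch stage. A minimal sketch of that decision chain, assuming the paths exactly as logged (real Ignition also merges configs and knows per-provider defaults):

```python
# Sketch of the fetch-offline lookup order reported in the log above.
import os

def find_offline_config(cmdline_url=""):
    for path in ("/usr/lib/ignition/base.d",
                 "/usr/lib/ignition/base.platform.d/hetzner"):
        if os.path.isdir(path) and os.listdir(path):
            return path
    if cmdline_url:                       # "parsed url from cmdline: ''" in the log
        return cmdline_url
    if os.path.exists("/usr/lib/ignition/user.ign"):
        return "/usr/lib/ignition/user.ign"
    raise RuntimeError("failed to fetch config: resource requires networking")
```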
Feb 13 20:07:00.464157 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:07:00.465036 systemd[1]: Reached target network.target - Network. Feb 13 20:07:00.466008 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:07:00.466011 systemd-networkd[780]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:07:00.466648 systemd-networkd[780]: eth0: Link UP Feb 13 20:07:00.466651 systemd-networkd[780]: eth0: Gained carrier Feb 13 20:07:00.466659 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:07:00.478170 systemd-networkd[780]: eth1: Link UP Feb 13 20:07:00.478179 systemd-networkd[780]: eth1: Gained carrier Feb 13 20:07:00.478190 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:07:00.478593 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 20:07:00.497072 ignition[783]: Ignition 2.19.0 Feb 13 20:07:00.497085 ignition[783]: Stage: fetch Feb 13 20:07:00.497437 ignition[783]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:07:00.497449 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 20:07:00.499203 ignition[783]: parsed url from cmdline: "" Feb 13 20:07:00.499208 ignition[783]: no config URL provided Feb 13 20:07:00.499218 ignition[783]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:07:00.499232 ignition[783]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:07:00.499265 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Feb 13 20:07:00.500010 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Feb 13 20:07:00.504677 systemd-networkd[780]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 20:07:00.536688 systemd-networkd[780]: eth0: DHCPv4 address 78.47.136.246/32, gateway 172.31.1.1 acquired from 172.31.1.1 Feb 13 20:07:00.701275 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Feb 13 20:07:00.708157 ignition[783]: GET result: OK Feb 13 20:07:00.708260 ignition[783]: parsing config with SHA512: aa00ed347fefa5039a62a9fcd4f2cbcfe270d124de1a9f9b96711959930a3e3553f94a1ce35fe88c5b6664f5d341f1771abd8f77fea648f674f6c68394b85488 Feb 13 20:07:00.712922 unknown[783]: fetched base config from "system" Feb 13 20:07:00.712934 unknown[783]: fetched base config from "system" Feb 13 20:07:00.713360 ignition[783]: fetch: fetch complete Feb 13 20:07:00.712940 unknown[783]: fetched user config from "hetzner" Feb 13 20:07:00.713365 ignition[783]: fetch: fetch passed Feb 13 20:07:00.715521 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 20:07:00.713411 ignition[783]: Ignition finished successfully Feb 13 20:07:00.730797 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
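[Editor's note] The fetch stage above retries the metadata endpoint (attempt #1 fails with "network is unreachable", attempt #2 succeeds once DHCP has configured eth0/eth1) and then logs the SHA512 of the payload it parsed. A minimal sketch of that retry-then-hash behavior; the endpoint URL comes from the log, while the attempt count and backoff delay here are invented:

```python
# Sketch of the fetch stage's retry-then-hash behavior seen above.
import hashlib
import time
import urllib.error
import urllib.request

URL = "http://169.254.169.254/hetzner/v1/userdata"

def fetch_userdata(max_attempts=5, delay=1.0):
    for attempt in range(1, max_attempts + 1):
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                data = resp.read()
            print(f"GET result: OK (attempt #{attempt})")
            print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())
            return data
        except (urllib.error.URLError, OSError) as err:
            print(f"GET error on attempt #{attempt}: {err}")
            time.sleep(delay)  # wait for DHCP to bring networking up
    raise RuntimeError("config fetch failed")
```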
Feb 13 20:07:00.744137 ignition[791]: Ignition 2.19.0 Feb 13 20:07:00.744148 ignition[791]: Stage: kargs Feb 13 20:07:00.744386 ignition[791]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:07:00.744396 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 20:07:00.745565 ignition[791]: kargs: kargs passed Feb 13 20:07:00.745627 ignition[791]: Ignition finished successfully Feb 13 20:07:00.748455 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 20:07:00.755845 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 20:07:00.770103 ignition[797]: Ignition 2.19.0 Feb 13 20:07:00.770116 ignition[797]: Stage: disks Feb 13 20:07:00.770320 ignition[797]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:07:00.770331 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 20:07:00.772791 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 20:07:00.771488 ignition[797]: disks: disks passed Feb 13 20:07:00.771568 ignition[797]: Ignition finished successfully Feb 13 20:07:00.774349 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 20:07:00.775805 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:07:00.776704 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:07:00.777651 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:07:00.778717 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:07:00.794632 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 20:07:00.813580 systemd-fsck[805]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Feb 13 20:07:00.821021 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 20:07:00.830706 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 20:07:00.900639 kernel: EXT4-fs (sda9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none. Feb 13 20:07:00.901821 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 20:07:00.903236 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 20:07:00.909723 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:07:00.913688 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 20:07:00.920725 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 20:07:00.921320 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 20:07:00.921353 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:07:00.931992 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
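[Editor's note] The fsck summary above ("clean, 14/1628000 files, 120691/1617920 blocks") is inodes-used/inodes-total and blocks-used/blocks-total on the freshly created ROOT filesystem. A quick check of what that means in percentages:

```python
# Usage implied by "clean, 14/1628000 files, 120691/1617920 blocks".
inodes_used, inodes_total = 14, 1_628_000
blocks_used, blocks_total = 120_691, 1_617_920

print(f"inode usage: {100 * inodes_used / inodes_total:.4f}%")   # ~0.0009%
print(f"block usage: {100 * blocks_used / blocks_total:.1f}%")   # ~7.5%
```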
Feb 13 20:07:00.933307 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (813) Feb 13 20:07:00.937165 kernel: BTRFS info (device sda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 20:07:00.937312 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 20:07:00.937359 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:07:00.941561 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:07:00.941603 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:07:00.943621 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 20:07:00.951868 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 20:07:01.006675 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 20:07:01.014966 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory Feb 13 20:07:01.020684 coreos-metadata[815]: Feb 13 20:07:01.020 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Feb 13 20:07:01.022660 coreos-metadata[815]: Feb 13 20:07:01.022 INFO Fetch successful Feb 13 20:07:01.023248 coreos-metadata[815]: Feb 13 20:07:01.023 INFO wrote hostname ci-4081-3-1-c-c4549fc0d2 to /sysroot/etc/hostname Feb 13 20:07:01.023990 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 20:07:01.026679 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:07:01.031580 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 20:07:01.155355 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 20:07:01.163745 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 20:07:01.168932 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 20:07:01.178642 kernel: BTRFS info (device sda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 20:07:01.203617 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 20:07:01.209133 ignition[930]: INFO : Ignition 2.19.0 Feb 13 20:07:01.209133 ignition[930]: INFO : Stage: mount Feb 13 20:07:01.210416 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:07:01.210416 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 20:07:01.210416 ignition[930]: INFO : mount: mount passed Feb 13 20:07:01.214293 ignition[930]: INFO : Ignition finished successfully Feb 13 20:07:01.213128 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 20:07:01.218690 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 20:07:01.232201 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 20:07:01.239899 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:07:01.253580 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (942) Feb 13 20:07:01.254993 kernel: BTRFS info (device sda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 20:07:01.255052 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 20:07:01.255068 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:07:01.259583 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:07:01.259674 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:07:01.264273 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
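[Editor's note] The coreos-metadata lines above show flatcar-metadata-hostname.service fetching the machine name from the Hetzner metadata service and writing it into the new root before Ignition's files stage runs. A hedged sketch of that one step, with the endpoint and target path taken from the log:

```python
# Sketch of the hostname step logged by coreos-metadata above: fetch the
# name from the metadata service and write it to /sysroot/etc/hostname.
import urllib.request

HOSTNAME_URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"

with urllib.request.urlopen(HOSTNAME_URL, timeout=10) as resp:
    hostname = resp.read().decode().strip()   # e.g. ci-4081-3-1-c-c4549fc0d2

with open("/sysroot/etc/hostname", "w") as f:
    f.write(hostname + "\n")
print(f"wrote hostname {hostname} to /sysroot/etc/hostname")
```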
Feb 13 20:07:01.293242 ignition[959]: INFO : Ignition 2.19.0 Feb 13 20:07:01.293242 ignition[959]: INFO : Stage: files Feb 13 20:07:01.294592 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:07:01.294592 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 20:07:01.294592 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Feb 13 20:07:01.297075 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 20:07:01.297075 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 20:07:01.299870 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 20:07:01.300727 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 20:07:01.302218 unknown[959]: wrote ssh authorized keys file for user: core Feb 13 20:07:01.303746 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 20:07:01.304787 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Feb 13 20:07:01.305998 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Feb 13 20:07:01.451356 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 20:07:01.785764 systemd-networkd[780]: eth1: Gained IPv6LL Feb 13 20:07:02.234061 systemd-networkd[780]: eth0: Gained IPv6LL Feb 13 20:07:03.062590 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Feb 13 20:07:03.062590 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 20:07:03.062590 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 20:07:03.062590 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:07:03.062590 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:07:03.062590 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:07:03.062590 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:07:03.062590 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:07:03.073521 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:07:03.073521 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:07:03.073521 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:07:03.073521 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 20:07:03.073521 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 20:07:03.073521 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 20:07:03.073521 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 Feb 13 20:07:03.356504 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 20:07:03.708333 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 20:07:03.708333 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 20:07:03.711833 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:07:03.711833 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:07:03.711833 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 20:07:03.711833 ignition[959]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Feb 13 20:07:03.711833 ignition[959]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Feb 13 20:07:03.711833 ignition[959]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Feb 13 20:07:03.711833 ignition[959]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Feb 13 20:07:03.711833 ignition[959]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Feb 13 20:07:03.711833 ignition[959]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 20:07:03.711833 ignition[959]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:07:03.711833 ignition[959]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:07:03.711833 ignition[959]: INFO : files: files passed Feb 13 20:07:03.711833 ignition[959]: INFO : Ignition finished successfully Feb 13 20:07:03.713775 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 20:07:03.721871 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 20:07:03.725803 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 20:07:03.744764 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 20:07:03.744978 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Feb 13 20:07:03.756453 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:07:03.756453 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:07:03.760142 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:07:03.763610 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:07:03.764636 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 20:07:03.771807 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 20:07:03.806801 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 20:07:03.807841 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 20:07:03.809143 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 20:07:03.809864 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 20:07:03.811525 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 20:07:03.815797 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 20:07:03.833691 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:07:03.842916 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 20:07:03.857331 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:07:03.858879 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:07:03.860471 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 20:07:03.861782 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 20:07:03.861956 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:07:03.863689 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 20:07:03.864392 systemd[1]: Stopped target basic.target - Basic System. Feb 13 20:07:03.865412 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 20:07:03.866329 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:07:03.867533 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 20:07:03.868749 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:07:03.869686 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:07:03.870922 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:07:03.872021 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 20:07:03.873008 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:07:03.873814 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 20:07:03.874037 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:07:03.875223 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:07:03.876229 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:07:03.877161 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:07:03.878064 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Feb 13 20:07:03.878804 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:07:03.878987 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:07:03.880508 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:07:03.880668 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:07:03.881817 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:07:03.881969 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:07:03.882868 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 20:07:03.882990 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:07:03.895121 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:07:03.897830 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:07:03.898029 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:07:03.900844 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:07:03.901375 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:07:03.901510 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:07:03.902250 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:07:03.902345 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:07:03.914355 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:07:03.914455 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:07:03.921604 ignition[1011]: INFO : Ignition 2.19.0 Feb 13 20:07:03.921604 ignition[1011]: INFO : Stage: umount Feb 13 20:07:03.924017 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:07:03.924017 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 20:07:03.924017 ignition[1011]: INFO : umount: umount passed Feb 13 20:07:03.924017 ignition[1011]: INFO : Ignition finished successfully Feb 13 20:07:03.927000 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:07:03.927105 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:07:03.928194 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:07:03.928302 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:07:03.929929 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:07:03.930021 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:07:03.931197 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 20:07:03.931251 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 20:07:03.932009 systemd[1]: Stopped target network.target - Network. Feb 13 20:07:03.933568 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:07:03.933715 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:07:03.938060 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:07:03.938964 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:07:03.939247 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:07:03.940408 systemd[1]: Stopped target slices.target - Slice Units. 
Feb 13 20:07:03.941260 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:07:03.942118 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:07:03.942170 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:07:03.944778 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:07:03.944848 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:07:03.948242 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:07:03.948346 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:07:03.949396 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:07:03.949448 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:07:03.951265 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:07:03.952677 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:07:03.956608 systemd-networkd[780]: eth0: DHCPv6 lease lost Feb 13 20:07:03.958596 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:07:03.962669 systemd-networkd[780]: eth1: DHCPv6 lease lost Feb 13 20:07:03.964754 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:07:03.964956 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:07:03.968372 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:07:03.971096 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:07:03.975460 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:07:03.975528 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:07:03.983737 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:07:03.986676 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:07:03.986812 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:07:03.990510 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:07:03.990588 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:07:03.991398 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:07:03.991440 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:07:03.992284 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:07:03.992331 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:07:03.999117 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:07:04.005311 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:07:04.005412 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:07:04.013592 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:07:04.013709 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:07:04.027448 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:07:04.027696 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:07:04.029821 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:07:04.029947 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:07:04.032361 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Feb 13 20:07:04.032466 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:07:04.035202 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:07:04.035279 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:07:04.037034 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:07:04.037100 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:07:04.039000 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:07:04.039572 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:07:04.051487 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:07:04.052487 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:07:04.052608 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:07:04.053728 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:07:04.053796 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:07:04.056175 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:07:04.056303 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:07:04.063721 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:07:04.063856 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:07:04.065204 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:07:04.075847 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:07:04.087136 systemd[1]: Switching root. Feb 13 20:07:04.123886 systemd-journald[235]: Journal stopped Feb 13 20:07:05.055469 systemd-journald[235]: Received SIGTERM from PID 1 (systemd). Feb 13 20:07:05.056650 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:07:05.056684 kernel: SELinux: policy capability open_perms=1 Feb 13 20:07:05.056694 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:07:05.056703 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:07:05.056713 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:07:05.056722 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:07:05.056732 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:07:05.056748 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:07:05.056767 kernel: audit: type=1403 audit(1739477224.257:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:07:05.056786 systemd[1]: Successfully loaded SELinux policy in 36.529ms. Feb 13 20:07:05.056814 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.984ms. Feb 13 20:07:05.056828 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:07:05.056839 systemd[1]: Detected virtualization kvm. Feb 13 20:07:05.056850 systemd[1]: Detected architecture arm64. Feb 13 20:07:05.056860 systemd[1]: Detected first boot. Feb 13 20:07:05.056871 systemd[1]: Hostname set to <ci-4081-3-1-c-c4549fc0d2>. Feb 13 20:07:05.056883 systemd[1]: Initializing machine ID from VM UUID.
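[Editor's note] The systemd 255 banner above encodes compile-time options as +FEATURE/-FEATURE tokens (e.g. built with SELinux but without AppArmor). A tiny parser that splits the banner from the log into enabled and disabled sets:

```python
# Split the "+X -Y" compile-time feature list from the systemd 255 banner.
banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
          "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
          "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
          "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
          "-XKBCOMMON +UTMP -SYSVINIT")

enabled = {t[1:] for t in banner.split() if t.startswith("+")}
disabled = {t[1:] for t in banner.split() if t.startswith("-")}
assert "SELINUX" in enabled and "APPARMOR" in disabled  # matches the log
```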
Feb 13 20:07:05.056909 zram_generator::config[1054]: No configuration found. Feb 13 20:07:05.056921 systemd[1]: Populated /etc with preset unit settings. Feb 13 20:07:05.056931 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 20:07:05.056941 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 20:07:05.056952 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 20:07:05.056963 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 20:07:05.056974 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:07:05.056991 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 20:07:05.057002 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:07:05.057012 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:07:05.057022 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:07:05.057033 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:07:05.057043 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 20:07:05.057053 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:07:05.057064 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:07:05.057074 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 20:07:05.057086 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 20:07:05.057101 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:07:05.057111 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:07:05.057122 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 20:07:05.057132 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:07:05.057142 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 20:07:05.057153 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 20:07:05.057169 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 20:07:05.057180 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:07:05.057190 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:07:05.057201 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:07:05.057211 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:07:05.057222 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:07:05.057232 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:07:05.057243 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 20:07:05.057254 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:07:05.057265 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:07:05.057280 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:07:05.057292 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Feb 13 20:07:05.057302 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 20:07:05.057312 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 20:07:05.057324 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 20:07:05.057334 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 20:07:05.057344 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 20:07:05.057357 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 20:07:05.057368 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 20:07:05.057378 systemd[1]: Reached target machines.target - Containers. Feb 13 20:07:05.057389 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 20:07:05.057403 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:07:05.057417 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:07:05.057428 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 20:07:05.057439 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:07:05.057450 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:07:05.057461 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:07:05.057471 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 20:07:05.057491 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:07:05.057504 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 20:07:05.057516 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 20:07:05.057530 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 20:07:05.061049 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 20:07:05.061075 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 20:07:05.061086 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:07:05.061098 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:07:05.061109 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 20:07:05.061120 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 20:07:05.061131 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:07:05.061142 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 20:07:05.061161 kernel: ACPI: bus type drm_connector registered Feb 13 20:07:05.061175 systemd[1]: Stopped verity-setup.service. Feb 13 20:07:05.061186 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 20:07:05.061198 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 20:07:05.061215 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 20:07:05.061256 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 20:07:05.061272 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Feb 13 20:07:05.061283 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 20:07:05.061327 systemd-journald[1121]: Collecting audit messages is disabled. Feb 13 20:07:05.061380 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:07:05.061394 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 20:07:05.061405 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 20:07:05.061418 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:07:05.061430 systemd-journald[1121]: Journal started Feb 13 20:07:05.061452 systemd-journald[1121]: Runtime Journal (/run/log/journal/966a7eec80144118b55c72dd1775c52d) is 8.0M, max 76.6M, 68.6M free. Feb 13 20:07:05.062152 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:07:04.804087 systemd[1]: Queued start job for default target multi-user.target. Feb 13 20:07:04.827359 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Feb 13 20:07:04.827948 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 20:07:05.065786 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:07:05.067187 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:07:05.068560 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:07:05.069691 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:07:05.070712 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:07:05.076558 kernel: fuse: init (API version 7.39) Feb 13 20:07:05.077036 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:07:05.083123 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 20:07:05.083557 kernel: loop: module loaded Feb 13 20:07:05.083760 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 20:07:05.084697 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:07:05.084864 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:07:05.086986 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 20:07:05.088167 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 20:07:05.102443 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 20:07:05.109711 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 20:07:05.113806 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 20:07:05.116644 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 20:07:05.116687 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:07:05.118434 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 20:07:05.123775 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 20:07:05.133740 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 20:07:05.134384 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:07:05.138383 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Feb 13 20:07:05.141963 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 20:07:05.142627 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:07:05.145717 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 20:07:05.146423 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:07:05.148796 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:07:05.152811 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:07:05.156086 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:07:05.157841 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:07:05.158713 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:07:05.181595 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 20:07:05.202107 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 20:07:05.212448 kernel: loop0: detected capacity change from 0 to 8 Feb 13 20:07:05.214491 systemd-journald[1121]: Time spent on flushing to /var/log/journal/966a7eec80144118b55c72dd1775c52d is 25.774ms for 1128 entries. Feb 13 20:07:05.214491 systemd-journald[1121]: System Journal (/var/log/journal/966a7eec80144118b55c72dd1775c52d) is 8.0M, max 584.8M, 576.8M free. Feb 13 20:07:05.254777 systemd-journald[1121]: Received client request to flush runtime journal. Feb 13 20:07:05.254857 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:07:05.225551 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:07:05.226363 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 20:07:05.240912 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:07:05.263623 kernel: loop1: detected capacity change from 0 to 201592 Feb 13 20:07:05.260410 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:07:05.268190 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:07:05.293384 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:07:05.302791 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:07:05.306044 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:07:05.308758 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:07:05.313604 kernel: loop2: detected capacity change from 0 to 114432 Feb 13 20:07:05.328914 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:07:05.340174 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:07:05.345262 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 20:07:05.360570 kernel: loop3: detected capacity change from 0 to 114328 Feb 13 20:07:05.395437 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. 
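[Editor's note] journald reports spending 25.774 ms flushing 1128 entries from the runtime journal to /var/log/journal. The per-entry cost that implies:

```python
# Per-entry cost implied by "Time spent on flushing ... is 25.774ms for 1128 entries".
flush_ms, entries = 25.774, 1128
print(f"{flush_ms / entries * 1000:.1f} µs per entry")       # ~22.8 µs
print(f"{entries / (flush_ms / 1000):,.0f} entries/second")  # ~43,765/s
```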
Feb 13 20:07:05.395950 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Feb 13 20:07:05.405589 kernel: loop4: detected capacity change from 0 to 8 Feb 13 20:07:05.408973 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:07:05.415813 kernel: loop5: detected capacity change from 0 to 201592 Feb 13 20:07:05.444770 kernel: loop6: detected capacity change from 0 to 114432 Feb 13 20:07:05.463592 kernel: loop7: detected capacity change from 0 to 114328 Feb 13 20:07:05.476639 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Feb 13 20:07:05.477324 (sd-merge)[1195]: Merged extensions into '/usr'. Feb 13 20:07:05.486191 systemd[1]: Reloading requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:07:05.486577 systemd[1]: Reloading... Feb 13 20:07:05.608590 zram_generator::config[1218]: No configuration found. Feb 13 20:07:05.766704 ldconfig[1159]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:07:05.802239 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:07:05.867496 systemd[1]: Reloading finished in 380 ms. Feb 13 20:07:05.901725 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:07:05.905491 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:07:05.914868 systemd[1]: Starting ensure-sysext.service... Feb 13 20:07:05.927793 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:07:05.941654 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:07:05.941703 systemd[1]: Reloading... Feb 13 20:07:05.982833 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 20:07:05.984865 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:07:05.987840 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:07:05.988149 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Feb 13 20:07:05.988193 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Feb 13 20:07:05.993277 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:07:05.994589 systemd-tmpfiles[1260]: Skipping /boot Feb 13 20:07:06.007321 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:07:06.007530 systemd-tmpfiles[1260]: Skipping /boot Feb 13 20:07:06.046583 zram_generator::config[1286]: No configuration found. Feb 13 20:07:06.149384 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:07:06.196340 systemd[1]: Reloading finished in 254 ms. Feb 13 20:07:06.220666 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 20:07:06.228204 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
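[Editor's note] The sd-merge lines above show systemd-sysext attaching the four extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-hetzner) as the loop devices logged earlier and merging their /usr trees onto the running /usr via an overlay. A hypothetical sketch of how such a merge could be assembled; the staging paths and mount options here are assumptions, and the real systemd-sysext additionally validates release files and enforces read-only semantics:

```python
# Hypothetical sketch of the merge step: stack each extension's /usr over
# the base /usr with overlayfs (highest-priority extension first).
extensions = ["containerd-flatcar", "docker-flatcar", "kubernetes", "oem-hetzner"]
hierarchy = "/usr"

lowers = [f"/run/extensions/{name}{hierarchy}" for name in extensions] + [hierarchy]
mount_cmd = ["mount", "-t", "overlay", "overlay",
             "-o", "lowerdir=" + ":".join(lowers), hierarchy]
print(" ".join(mount_cmd))
```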
Feb 13 20:07:06.245026 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:07:06.251156 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 20:07:06.261266 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:07:06.268964 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:07:06.271873 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:07:06.275482 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:07:06.279529 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:07:06.282870 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:07:06.290039 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:07:06.295915 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:07:06.296529 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:07:06.299576 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:07:06.299739 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:07:06.304562 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:07:06.306834 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:07:06.316209 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:07:06.324764 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:07:06.327627 systemd[1]: Finished ensure-sysext.service. Feb 13 20:07:06.335564 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 20:07:06.340610 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:07:06.342097 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:07:06.342351 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:07:06.348309 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:07:06.355925 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:07:06.363990 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:07:06.366617 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:07:06.380838 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:07:06.381048 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:07:06.382273 systemd-udevd[1336]: Using default interface naming scheme 'v255'. Feb 13 20:07:06.383638 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:07:06.383809 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Feb 13 20:07:06.388100 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:07:06.392241 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:07:06.405453 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:07:06.409918 augenrules[1360]: No rules Feb 13 20:07:06.413055 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:07:06.420506 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:07:06.421403 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:07:06.422833 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:07:06.430960 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:07:06.441956 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:07:06.535737 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 20:07:06.589256 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 20:07:06.590010 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:07:06.608332 systemd-networkd[1370]: lo: Link UP Feb 13 20:07:06.608340 systemd-networkd[1370]: lo: Gained carrier Feb 13 20:07:06.610425 systemd-networkd[1370]: Enumeration completed Feb 13 20:07:06.610555 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:07:06.613790 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:07:06.613798 systemd-networkd[1370]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:07:06.618093 systemd-networkd[1370]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:07:06.618707 systemd-networkd[1370]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:07:06.619040 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:07:06.620657 systemd-networkd[1370]: eth0: Link UP Feb 13 20:07:06.620774 systemd-networkd[1370]: eth0: Gained carrier Feb 13 20:07:06.620836 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:07:06.628084 systemd-networkd[1370]: eth1: Link UP Feb 13 20:07:06.629596 systemd-networkd[1370]: eth1: Gained carrier Feb 13 20:07:06.629628 systemd-networkd[1370]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:07:06.649179 systemd-resolved[1335]: Positive Trust Anchors: Feb 13 20:07:06.649198 systemd-resolved[1335]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:07:06.649230 systemd-resolved[1335]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:07:06.656225 systemd-resolved[1335]: Using system hostname 'ci-4081-3-1-c-c4549fc0d2'. Feb 13 20:07:06.661279 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:07:06.662109 systemd[1]: Reached target network.target - Network. Feb 13 20:07:06.662767 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:07:06.670997 systemd-networkd[1370]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 20:07:06.672385 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Feb 13 20:07:06.683568 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 20:07:06.696690 systemd-networkd[1370]: eth0: DHCPv4 address 78.47.136.246/32, gateway 172.31.1.1 acquired from 172.31.1.1 Feb 13 20:07:06.697511 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Feb 13 20:07:06.698362 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Feb 13 20:07:06.711060 systemd-networkd[1370]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:07:06.717249 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:07:06.725587 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1388) Feb 13 20:07:06.783306 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Feb 13 20:07:06.783790 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:07:06.795812 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:07:06.801745 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:07:06.805805 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:07:06.808744 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:07:06.808788 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:07:06.809184 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:07:06.809330 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:07:06.819822 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:07:06.822611 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
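Both NICs above are configured from the catch-all /usr/lib/systemd/network/zz-default.network and then pull DHCPv4 leases (78.47.136.246 on eth0, 10.0.0.3 on eth1); the "potentially unpredictable interface name" note is networkd cautioning that a wildcard match may bind differently named devices across boots. A lowest-priority default of this kind usually amounts to little more than the following sketch; the exact contents are assumed, not read from this image:

    # /usr/lib/systemd/network/zz-default.network (assumed contents)
    [Match]
    Name=*

    [Network]
    DHCP=yes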
Feb 13 20:07:06.823966 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:07:06.827246 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:07:06.827921 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:07:06.830434 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:07:06.839383 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Feb 13 20:07:06.839476 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Feb 13 20:07:06.839495 kernel: [drm] features: -context_init Feb 13 20:07:06.839509 kernel: [drm] number of scanouts: 1 Feb 13 20:07:06.839545 kernel: [drm] number of cap sets: 0 Feb 13 20:07:06.840136 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Feb 13 20:07:06.842568 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Feb 13 20:07:06.849220 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:07:06.849516 kernel: Console: switching to colour frame buffer device 160x50 Feb 13 20:07:06.855117 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Feb 13 20:07:06.860530 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:07:06.872350 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:07:06.873486 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:07:06.873696 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:07:06.884769 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:07:06.953335 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:07:06.995320 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:07:07.010045 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:07:07.022296 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:07:07.052759 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:07:07.056707 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:07:07.057632 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:07:07.058579 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:07:07.059512 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:07:07.060718 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:07:07.061357 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:07:07.062111 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:07:07.062744 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:07:07.062779 systemd[1]: Reached target paths.target - Path Units. 
Feb 13 20:07:07.063260 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:07:07.066631 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:07:07.069761 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:07:07.075664 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:07:07.078635 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:07:07.080456 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:07:07.081207 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:07:07.081710 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:07:07.082248 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:07:07.082279 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:07:07.092831 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:07:07.098965 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 20:07:07.100303 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:07:07.102974 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:07:07.108733 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:07:07.113206 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:07:07.113953 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:07:07.116785 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:07:07.122724 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 20:07:07.125928 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Feb 13 20:07:07.136761 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:07:07.140358 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:07:07.146790 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:07:07.149548 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 20:07:07.150103 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:07:07.158958 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:07:07.165323 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:07:07.168618 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:07:07.177847 extend-filesystems[1448]: Found loop4 Feb 13 20:07:07.189061 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Feb 13 20:07:07.209873 jq[1447]: false Feb 13 20:07:07.210054 extend-filesystems[1448]: Found loop5 Feb 13 20:07:07.210054 extend-filesystems[1448]: Found loop6 Feb 13 20:07:07.210054 extend-filesystems[1448]: Found loop7 Feb 13 20:07:07.210054 extend-filesystems[1448]: Found sda Feb 13 20:07:07.210054 extend-filesystems[1448]: Found sda1 Feb 13 20:07:07.210054 extend-filesystems[1448]: Found sda2 Feb 13 20:07:07.210054 extend-filesystems[1448]: Found sda3 Feb 13 20:07:07.210054 extend-filesystems[1448]: Found usr Feb 13 20:07:07.210054 extend-filesystems[1448]: Found sda4 Feb 13 20:07:07.210054 extend-filesystems[1448]: Found sda6 Feb 13 20:07:07.210054 extend-filesystems[1448]: Found sda7 Feb 13 20:07:07.210054 extend-filesystems[1448]: Found sda9 Feb 13 20:07:07.210054 extend-filesystems[1448]: Checking size of /dev/sda9 Feb 13 20:07:07.210054 extend-filesystems[1448]: Resized partition /dev/sda9 Feb 13 20:07:07.280646 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Feb 13 20:07:07.189242 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:07:07.280976 extend-filesystems[1471]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:07:07.240114 dbus-daemon[1446]: [system] SELinux support is enabled Feb 13 20:07:07.295664 coreos-metadata[1445]: Feb 13 20:07:07.250 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Feb 13 20:07:07.295664 coreos-metadata[1445]: Feb 13 20:07:07.258 INFO Fetch successful Feb 13 20:07:07.295664 coreos-metadata[1445]: Feb 13 20:07:07.258 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Feb 13 20:07:07.295664 coreos-metadata[1445]: Feb 13 20:07:07.261 INFO Fetch successful Feb 13 20:07:07.208764 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:07:07.209441 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 20:07:07.231965 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:07:07.298666 jq[1460]: true Feb 13 20:07:07.232163 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:07:07.304007 tar[1465]: linux-arm64/LICENSE Feb 13 20:07:07.304007 tar[1465]: linux-arm64/helm Feb 13 20:07:07.240557 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:07:07.244091 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:07:07.314038 jq[1485]: true Feb 13 20:07:07.244128 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:07:07.245711 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:07:07.245735 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Feb 13 20:07:07.275340 (ntainerd)[1480]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:07:07.338894 update_engine[1457]: I20250213 20:07:07.338635 1457 main.cc:92] Flatcar Update Engine starting Feb 13 20:07:07.350007 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1376) Feb 13 20:07:07.352190 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:07:07.353439 update_engine[1457]: I20250213 20:07:07.353377 1457 update_check_scheduler.cc:74] Next update check in 8m28s Feb 13 20:07:07.375785 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:07:07.408560 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Feb 13 20:07:07.418023 systemd-logind[1456]: New seat seat0. Feb 13 20:07:07.431076 systemd-logind[1456]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 20:07:07.431096 systemd-logind[1456]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Feb 13 20:07:07.433188 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:07:07.438564 extend-filesystems[1471]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 20:07:07.438564 extend-filesystems[1471]: old_desc_blocks = 1, new_desc_blocks = 5 Feb 13 20:07:07.438564 extend-filesystems[1471]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Feb 13 20:07:07.437624 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:07:07.453753 extend-filesystems[1448]: Resized filesystem in /dev/sda9 Feb 13 20:07:07.453753 extend-filesystems[1448]: Found sr0 Feb 13 20:07:07.437847 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:07:07.483663 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 20:07:07.484809 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:07:07.485818 bash[1514]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:07:07.489001 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:07:07.511953 systemd[1]: Starting sshkeys.service... Feb 13 20:07:07.534475 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 20:07:07.546164 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 20:07:07.593565 containerd[1480]: time="2025-02-13T20:07:07.592433000Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:07:07.602046 coreos-metadata[1524]: Feb 13 20:07:07.600 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Feb 13 20:07:07.603767 coreos-metadata[1524]: Feb 13 20:07:07.603 INFO Fetch successful Feb 13 20:07:07.607189 unknown[1524]: wrote ssh authorized keys file for user: core Feb 13 20:07:07.634298 containerd[1480]: time="2025-02-13T20:07:07.633479320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:07:07.638458 containerd[1480]: time="2025-02-13T20:07:07.635105040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
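For scale, the EXT4 on-line grow logged above works out to: 1,617,920 blocks × 4 KiB ≈ 6.2 GiB before, 9,393,147 blocks × 4 KiB ≈ 35.8 GiB after, i.e. on first boot extend-filesystems expands the ROOT filesystem on /dev/sda9 to fill the remaining disk.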
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:07:07.638681 containerd[1480]: time="2025-02-13T20:07:07.638460880Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:07:07.638681 containerd[1480]: time="2025-02-13T20:07:07.638500520Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:07:07.640635 containerd[1480]: time="2025-02-13T20:07:07.640584640Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:07:07.640717 containerd[1480]: time="2025-02-13T20:07:07.640645040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:07:07.641190 containerd[1480]: time="2025-02-13T20:07:07.641154360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:07:07.641190 containerd[1480]: time="2025-02-13T20:07:07.641186160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:07:07.641423 containerd[1480]: time="2025-02-13T20:07:07.641395360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:07:07.641423 containerd[1480]: time="2025-02-13T20:07:07.641419720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:07:07.641489 containerd[1480]: time="2025-02-13T20:07:07.641434560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:07:07.641489 containerd[1480]: time="2025-02-13T20:07:07.641445520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:07:07.641523 containerd[1480]: time="2025-02-13T20:07:07.641515080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:07:07.644997 containerd[1480]: time="2025-02-13T20:07:07.644956440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:07:07.645160 containerd[1480]: time="2025-02-13T20:07:07.645133240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:07:07.645160 containerd[1480]: time="2025-02-13T20:07:07.645154200Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:07:07.645271 containerd[1480]: time="2025-02-13T20:07:07.645251960Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 13 20:07:07.645317 containerd[1480]: time="2025-02-13T20:07:07.645302920Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:07:07.649587 update-ssh-keys[1529]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:07:07.648718 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 20:07:07.654622 containerd[1480]: time="2025-02-13T20:07:07.650503000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:07:07.654622 containerd[1480]: time="2025-02-13T20:07:07.650644280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:07:07.654622 containerd[1480]: time="2025-02-13T20:07:07.650667520Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:07:07.654622 containerd[1480]: time="2025-02-13T20:07:07.650747080Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:07:07.654622 containerd[1480]: time="2025-02-13T20:07:07.650766000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:07:07.654622 containerd[1480]: time="2025-02-13T20:07:07.650998000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:07:07.654622 containerd[1480]: time="2025-02-13T20:07:07.652063040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:07:07.654622 containerd[1480]: time="2025-02-13T20:07:07.652255200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:07:07.654622 containerd[1480]: time="2025-02-13T20:07:07.652273200Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:07:07.654622 containerd[1480]: time="2025-02-13T20:07:07.652292040Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:07:07.654622 containerd[1480]: time="2025-02-13T20:07:07.652306560Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:07:07.654622 containerd[1480]: time="2025-02-13T20:07:07.652320560Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:07:07.654622 containerd[1480]: time="2025-02-13T20:07:07.652334520Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:07:07.654622 containerd[1480]: time="2025-02-13T20:07:07.652349000Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:07:07.655029 containerd[1480]: time="2025-02-13T20:07:07.652365200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:07:07.655029 containerd[1480]: time="2025-02-13T20:07:07.652380520Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:07:07.655029 containerd[1480]: time="2025-02-13T20:07:07.652396200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Feb 13 20:07:07.655029 containerd[1480]: time="2025-02-13T20:07:07.652410520Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:07:07.655029 containerd[1480]: time="2025-02-13T20:07:07.652434240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:07:07.655029 containerd[1480]: time="2025-02-13T20:07:07.652450320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:07:07.655029 containerd[1480]: time="2025-02-13T20:07:07.652465520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:07:07.655029 containerd[1480]: time="2025-02-13T20:07:07.652481880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:07:07.655029 containerd[1480]: time="2025-02-13T20:07:07.652495280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:07:07.655029 containerd[1480]: time="2025-02-13T20:07:07.652572800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:07:07.655029 containerd[1480]: time="2025-02-13T20:07:07.652590840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:07:07.655029 containerd[1480]: time="2025-02-13T20:07:07.652607360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:07:07.655029 containerd[1480]: time="2025-02-13T20:07:07.652625880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:07:07.655029 containerd[1480]: time="2025-02-13T20:07:07.652643760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:07:07.655280 containerd[1480]: time="2025-02-13T20:07:07.652657280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:07:07.655280 containerd[1480]: time="2025-02-13T20:07:07.652672600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:07:07.655280 containerd[1480]: time="2025-02-13T20:07:07.652684920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:07:07.655280 containerd[1480]: time="2025-02-13T20:07:07.652703520Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:07:07.655280 containerd[1480]: time="2025-02-13T20:07:07.652727480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:07:07.655280 containerd[1480]: time="2025-02-13T20:07:07.652741000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:07:07.655280 containerd[1480]: time="2025-02-13T20:07:07.652752160Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:07:07.655280 containerd[1480]: time="2025-02-13T20:07:07.653024160Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Feb 13 20:07:07.655280 containerd[1480]: time="2025-02-13T20:07:07.653050960Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:07:07.655280 containerd[1480]: time="2025-02-13T20:07:07.653062080Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:07:07.655280 containerd[1480]: time="2025-02-13T20:07:07.653076360Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:07:07.655280 containerd[1480]: time="2025-02-13T20:07:07.653086920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:07:07.655280 containerd[1480]: time="2025-02-13T20:07:07.653107440Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:07:07.655280 containerd[1480]: time="2025-02-13T20:07:07.653118120Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:07:07.655517 containerd[1480]: time="2025-02-13T20:07:07.653128640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 20:07:07.655553 containerd[1480]: time="2025-02-13T20:07:07.653431040Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:07:07.655553 containerd[1480]: time="2025-02-13T20:07:07.653494960Z" level=info msg="Connect containerd service" Feb 13 20:07:07.655553 containerd[1480]: time="2025-02-13T20:07:07.653525840Z" level=info msg="using legacy CRI server" Feb 13 20:07:07.655553 containerd[1480]: time="2025-02-13T20:07:07.653561560Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:07:07.655553 containerd[1480]: time="2025-02-13T20:07:07.653679720Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:07:07.655553 containerd[1480]: time="2025-02-13T20:07:07.654423200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:07:07.655553 containerd[1480]: time="2025-02-13T20:07:07.654741120Z" level=info msg="Start subscribing containerd event" Feb 13 20:07:07.655553 containerd[1480]: time="2025-02-13T20:07:07.654792560Z" level=info msg="Start recovering state" Feb 13 20:07:07.655553 containerd[1480]: time="2025-02-13T20:07:07.654865000Z" level=info msg="Start event monitor" Feb 13 20:07:07.655553 containerd[1480]: time="2025-02-13T20:07:07.654934280Z" level=info msg="Start snapshots syncer" Feb 13 20:07:07.655553 containerd[1480]: time="2025-02-13T20:07:07.654947880Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:07:07.655553 containerd[1480]: time="2025-02-13T20:07:07.654957320Z" level=info msg="Start streaming server" Feb 13 20:07:07.655553 containerd[1480]: time="2025-02-13T20:07:07.655670960Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:07:07.655553 containerd[1480]: time="2025-02-13T20:07:07.655720080Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:07:07.655553 containerd[1480]: time="2025-02-13T20:07:07.655769800Z" level=info msg="containerd successfully booted in 0.064960s" Feb 13 20:07:07.656298 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:07:07.660363 systemd[1]: Finished sshkeys.service. Feb 13 20:07:07.701871 locksmithd[1497]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:07:07.980192 tar[1465]: linux-arm64/README.md Feb 13 20:07:07.993972 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 20:07:08.121730 systemd-networkd[1370]: eth0: Gained IPv6LL Feb 13 20:07:08.122676 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Feb 13 20:07:08.129258 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:07:08.131175 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:07:08.141487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:07:08.143826 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
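The giant CRI config dump above is containerd echoing its effective configuration at startup; the operative bits are Snapshotter:overlayfs, DefaultRuntimeName:runc with Type:io.containerd.runc.v2, and SystemdCgroup:true. Rendered back into config.toml form purely as a reading aid, reconstructed from the dump rather than taken from a file on this host:

    # /etc/containerd/config.toml (fragment reconstructed from the logged config)
    version = 2

    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      default_runtime_name = "runc"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true

SystemdCgroup = true matters later: the kubelet has to be configured with the matching systemd cgroup driver, or pods fail to start.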
Feb 13 20:07:08.176489 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:07:08.409364 sshd_keygen[1489]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:07:08.434289 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:07:08.444281 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:07:08.455409 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:07:08.455622 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:07:08.467134 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:07:08.479859 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:07:08.489185 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:07:08.499151 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 20:07:08.500860 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:07:08.569786 systemd-networkd[1370]: eth1: Gained IPv6LL Feb 13 20:07:08.570515 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Feb 13 20:07:08.971924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:07:08.972122 (kubelet)[1575]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:07:08.974157 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 20:07:08.978739 systemd[1]: Startup finished in 829ms (kernel) + 6.564s (initrd) + 4.757s (userspace) = 12.152s. Feb 13 20:07:09.569943 kubelet[1575]: E0213 20:07:09.569823 1575 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:07:09.572989 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:07:09.573140 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:07:19.727699 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:07:19.742067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:07:19.871026 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:07:19.872415 (kubelet)[1595]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:07:19.936316 kubelet[1595]: E0213 20:07:19.935933 1595 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:07:19.939236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:07:19.939383 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:07:29.977050 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 20:07:29.983852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:07:30.116895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 20:07:30.121260 (kubelet)[1611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:07:30.166650 kubelet[1611]: E0213 20:07:30.166495 1611 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:07:30.169379 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:07:30.169704 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:07:38.776765 systemd-timesyncd[1348]: Contacted time server 45.9.61.155:123 (2.flatcar.pool.ntp.org). Feb 13 20:07:38.776855 systemd-timesyncd[1348]: Initial clock synchronization to Thu 2025-02-13 20:07:38.462700 UTC. Feb 13 20:07:40.227050 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 20:07:40.234114 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:07:40.381848 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:07:40.382003 (kubelet)[1626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:07:40.430508 kubelet[1626]: E0213 20:07:40.430417 1626 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:07:40.433046 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:07:40.433187 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:07:50.477796 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 13 20:07:50.486895 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:07:50.658811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:07:50.659035 (kubelet)[1641]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:07:50.715996 kubelet[1641]: E0213 20:07:50.715943 1641 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:07:50.718059 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:07:50.718190 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:07:52.836732 update_engine[1457]: I20250213 20:07:52.836596 1457 update_attempter.cc:509] Updating boot flags... Feb 13 20:07:52.910561 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1658) Feb 13 20:08:00.727430 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Feb 13 20:08:00.733871 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
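Every kubelet start in this stretch dies identically: /var/lib/kubelet/config.yaml does not exist yet, run.go:72 exits with status 1, and systemd schedules the next restart (the counter keeps climbing below). That file is normally written by kubeadm during init/join, so the loop is expected until the node joins a cluster. Purely to illustrate the format the loader is looking for; the values are assumptions, not this node's eventual config:

    # /var/lib/kubelet/config.yaml — illustrative sketch only, not from this host
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd   # matches the SystemdCgroup=true runc option containerd logged
    authentication:
      anonymous:
        enabled: false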
Feb 13 20:08:00.877839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:08:00.878053 (kubelet)[1672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:08:00.926574 kubelet[1672]: E0213 20:08:00.926215 1672 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:08:00.928955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:08:00.929162 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:08:10.977390 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Feb 13 20:08:10.986938 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:08:11.116450 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:08:11.129045 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:08:11.180299 kubelet[1687]: E0213 20:08:11.180229 1687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:08:11.182447 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:08:11.182611 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:08:21.228030 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Feb 13 20:08:21.234926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:08:21.377604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:08:21.395090 (kubelet)[1703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:08:21.445090 kubelet[1703]: E0213 20:08:21.445025 1703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:08:21.448529 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:08:21.448885 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:08:31.477510 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Feb 13 20:08:31.491369 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:08:31.633955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 20:08:31.640841 (kubelet)[1718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:08:31.692192 kubelet[1718]: E0213 20:08:31.692140 1718 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:08:31.694900 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:08:31.695132 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:08:41.727151 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Feb 13 20:08:41.737859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:08:41.873740 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:08:41.889038 (kubelet)[1733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:08:41.940388 kubelet[1733]: E0213 20:08:41.940204 1733 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:08:41.943436 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:08:41.943839 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:08:51.977427 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Feb 13 20:08:51.987962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:08:52.125733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:08:52.133955 (kubelet)[1748]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:08:52.188832 kubelet[1748]: E0213 20:08:52.188764 1748 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:08:52.192329 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:08:52.192459 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:09:01.631390 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:09:01.640588 systemd[1]: Started sshd@0-78.47.136.246:22-147.75.109.163:44678.service - OpenSSH per-connection server daemon (147.75.109.163:44678). Feb 13 20:09:02.226889 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Feb 13 20:09:02.236989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:09:02.383819 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 20:09:02.393053 (kubelet)[1766]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:09:02.439311 kubelet[1766]: E0213 20:09:02.439183 1766 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:09:02.444927 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:09:02.445246 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:09:02.645051 sshd[1756]: Accepted publickey for core from 147.75.109.163 port 44678 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:09:02.646648 sshd[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:09:02.657363 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:09:02.670196 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:09:02.674891 systemd-logind[1456]: New session 1 of user core. Feb 13 20:09:02.687821 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:09:02.697154 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:09:02.714400 (systemd)[1775]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:09:02.838894 systemd[1775]: Queued start job for default target default.target. Feb 13 20:09:02.850324 systemd[1775]: Created slice app.slice - User Application Slice. Feb 13 20:09:02.850793 systemd[1775]: Reached target paths.target - Paths. Feb 13 20:09:02.850832 systemd[1775]: Reached target timers.target - Timers. Feb 13 20:09:02.853953 systemd[1775]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:09:02.875894 systemd[1775]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:09:02.875965 systemd[1775]: Reached target sockets.target - Sockets. Feb 13 20:09:02.875978 systemd[1775]: Reached target basic.target - Basic System. Feb 13 20:09:02.876028 systemd[1775]: Reached target default.target - Main User Target. Feb 13 20:09:02.876057 systemd[1775]: Startup finished in 153ms. Feb 13 20:09:02.876333 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:09:02.886969 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:09:03.593245 systemd[1]: Started sshd@1-78.47.136.246:22-147.75.109.163:44688.service - OpenSSH per-connection server daemon (147.75.109.163:44688). Feb 13 20:09:04.565841 sshd[1786]: Accepted publickey for core from 147.75.109.163 port 44688 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:09:04.568178 sshd[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:09:04.575553 systemd-logind[1456]: New session 2 of user core. Feb 13 20:09:04.584905 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 20:09:05.244693 sshd[1786]: pam_unix(sshd:session): session closed for user core Feb 13 20:09:05.248762 systemd[1]: sshd@1-78.47.136.246:22-147.75.109.163:44688.service: Deactivated successfully. Feb 13 20:09:05.250803 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:09:05.252738 systemd-logind[1456]: Session 2 logged out. 
Waiting for processes to exit. Feb 13 20:09:05.254143 systemd-logind[1456]: Removed session 2. Feb 13 20:09:05.418996 systemd[1]: Started sshd@2-78.47.136.246:22-147.75.109.163:44692.service - OpenSSH per-connection server daemon (147.75.109.163:44692). Feb 13 20:09:06.406557 sshd[1793]: Accepted publickey for core from 147.75.109.163 port 44692 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:09:06.408730 sshd[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:09:06.414905 systemd-logind[1456]: New session 3 of user core. Feb 13 20:09:06.420925 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:09:07.091867 sshd[1793]: pam_unix(sshd:session): session closed for user core Feb 13 20:09:07.096661 systemd-logind[1456]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:09:07.096850 systemd[1]: sshd@2-78.47.136.246:22-147.75.109.163:44692.service: Deactivated successfully. Feb 13 20:09:07.098655 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:09:07.100930 systemd-logind[1456]: Removed session 3. Feb 13 20:09:07.267725 systemd[1]: Started sshd@3-78.47.136.246:22-147.75.109.163:44702.service - OpenSSH per-connection server daemon (147.75.109.163:44702). Feb 13 20:09:08.253100 sshd[1800]: Accepted publickey for core from 147.75.109.163 port 44702 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:09:08.256801 sshd[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:09:08.264475 systemd-logind[1456]: New session 4 of user core. Feb 13 20:09:08.272854 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:09:08.938873 sshd[1800]: pam_unix(sshd:session): session closed for user core Feb 13 20:09:08.944346 systemd[1]: sshd@3-78.47.136.246:22-147.75.109.163:44702.service: Deactivated successfully. Feb 13 20:09:08.948434 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:09:08.950569 systemd-logind[1456]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:09:08.954182 systemd-logind[1456]: Removed session 4. Feb 13 20:09:09.111752 systemd[1]: Started sshd@4-78.47.136.246:22-147.75.109.163:44706.service - OpenSSH per-connection server daemon (147.75.109.163:44706). Feb 13 20:09:10.107039 sshd[1807]: Accepted publickey for core from 147.75.109.163 port 44706 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:09:10.109962 sshd[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:09:10.116525 systemd-logind[1456]: New session 5 of user core. Feb 13 20:09:10.125009 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:09:10.638863 sudo[1810]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 20:09:10.639158 sudo[1810]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:09:10.658203 sudo[1810]: pam_unix(sudo:session): session closed for user root Feb 13 20:09:10.818785 sshd[1807]: pam_unix(sshd:session): session closed for user core Feb 13 20:09:10.824359 systemd-logind[1456]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:09:10.824853 systemd[1]: sshd@4-78.47.136.246:22-147.75.109.163:44706.service: Deactivated successfully. Feb 13 20:09:10.828023 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:09:10.832374 systemd-logind[1456]: Removed session 5. 
Feb 13 20:09:10.992047 systemd[1]: Started sshd@5-78.47.136.246:22-147.75.109.163:32966.service - OpenSSH per-connection server daemon (147.75.109.163:32966). Feb 13 20:09:11.994995 sshd[1815]: Accepted publickey for core from 147.75.109.163 port 32966 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:09:11.997290 sshd[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:09:12.002935 systemd-logind[1456]: New session 6 of user core. Feb 13 20:09:12.011859 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 20:09:12.477030 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Feb 13 20:09:12.482948 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:09:12.520824 sudo[1822]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 20:09:12.521159 sudo[1822]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:09:12.529826 sudo[1822]: pam_unix(sudo:session): session closed for user root Feb 13 20:09:12.538550 sudo[1821]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 20:09:12.538962 sudo[1821]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:09:12.560275 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 20:09:12.579114 auditctl[1825]: No rules Feb 13 20:09:12.580753 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 20:09:12.581194 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 20:09:12.594964 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:09:12.625991 augenrules[1847]: No rules Feb 13 20:09:12.628066 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:09:12.629864 sudo[1821]: pam_unix(sudo:session): session closed for user root Feb 13 20:09:12.636784 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:09:12.653846 (kubelet)[1853]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:09:12.709289 kubelet[1853]: E0213 20:09:12.709160 1853 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:09:12.712828 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:09:12.713054 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:09:12.793102 sshd[1815]: pam_unix(sshd:session): session closed for user core Feb 13 20:09:12.797323 systemd[1]: sshd@5-78.47.136.246:22-147.75.109.163:32966.service: Deactivated successfully. Feb 13 20:09:12.799725 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:09:12.801950 systemd-logind[1456]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:09:12.803716 systemd-logind[1456]: Removed session 6. Feb 13 20:09:12.971339 systemd[1]: Started sshd@6-78.47.136.246:22-147.75.109.163:32978.service - OpenSSH per-connection server daemon (147.75.109.163:32978). 
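The kubelet crash loop above (restart counter at 12, exit `status=1/FAILURE`) comes from a missing `/var/lib/kubelet/config.yaml`; that file is normally written by `kubeadm init`/`join`, so every start before that point fails identically. A rough sketch — not the kubelet's actual source — of the read-and-wrap pattern that produces the nested message logged by `run.go`:

```go
// Illustrative only: os.ReadFile returns a *PathError such as
// "open /var/lib/kubelet/config.yaml: no such file or directory",
// and each caller wraps it with more context, yielding the nested
// "failed to load ... error: failed to read ..." shape seen above.
package main

import (
	"fmt"
	"os"
)

func loadKubeletConfig(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("failed to read kubelet config file %q, error: %w", path, err)
	}
	return data, nil
}

func main() {
	if _, err := loadKubeletConfig("/var/lib/kubelet/config.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, "command failed:", err)
		os.Exit(1) // systemd records status=1/FAILURE and schedules the next restart
	}
}
```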
Feb 13 20:09:13.947560 sshd[1864]: Accepted publickey for core from 147.75.109.163 port 32978 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:09:13.950396 sshd[1864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:09:13.958559 systemd-logind[1456]: New session 7 of user core. Feb 13 20:09:13.964949 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:09:14.469820 sudo[1867]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:09:14.470101 sudo[1867]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:09:14.831167 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 20:09:14.831579 (dockerd)[1883]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 20:09:15.127397 dockerd[1883]: time="2025-02-13T20:09:15.126942926Z" level=info msg="Starting up" Feb 13 20:09:15.246110 dockerd[1883]: time="2025-02-13T20:09:15.246031405Z" level=info msg="Loading containers: start." Feb 13 20:09:15.364583 kernel: Initializing XFRM netlink socket Feb 13 20:09:15.455344 systemd-networkd[1370]: docker0: Link UP Feb 13 20:09:15.479563 dockerd[1883]: time="2025-02-13T20:09:15.479503063Z" level=info msg="Loading containers: done." Feb 13 20:09:15.498169 dockerd[1883]: time="2025-02-13T20:09:15.497905114Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 20:09:15.498169 dockerd[1883]: time="2025-02-13T20:09:15.498049357Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 20:09:15.498710 dockerd[1883]: time="2025-02-13T20:09:15.498464163Z" level=info msg="Daemon has completed initialization" Feb 13 20:09:15.553740 dockerd[1883]: time="2025-02-13T20:09:15.553556396Z" level=info msg="API listen on /run/docker.sock" Feb 13 20:09:15.555371 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 20:09:16.211649 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1885400868-merged.mount: Deactivated successfully. Feb 13 20:09:16.297807 containerd[1480]: time="2025-02-13T20:09:16.297714883Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 20:09:17.019092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2835662570.mount: Deactivated successfully. 
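dockerd's overlay2 warning above hinges on one kernel option, `CONFIG_OVERLAY_FS_REDIRECT_DIR`. Where the running kernel exposes its build config — this assumes `CONFIG_IKCONFIG_PROC` is enabled, which is not guaranteed on this image — the flag can be checked directly; a sketch:

```go
// Sketch: look up the kernel option named in dockerd's overlay2 warning
// by scanning the gzip-compressed config at /proc/config.gz (assumption:
// CONFIG_IKCONFIG_PROC is enabled on this kernel).
package main

import (
	"bufio"
	"compress/gzip"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/config.gz")
	if err != nil {
		fmt.Fprintln(os.Stderr, "kernel config not exposed:", err)
		return
	}
	defer f.Close()

	gz, err := gzip.NewReader(f)
	if err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(gz)
	for sc.Scan() {
		if strings.HasPrefix(sc.Text(), "CONFIG_OVERLAY_FS_REDIRECT_DIR=") {
			fmt.Println(sc.Text()) // "=y" is what triggers the native-diff warning
		}
	}
}
```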
Feb 13 20:09:17.925767 containerd[1480]: time="2025-02-13T20:09:17.924900252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:17.927582 containerd[1480]: time="2025-02-13T20:09:17.927334356Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=26218328" Feb 13 20:09:17.928915 containerd[1480]: time="2025-02-13T20:09:17.928851106Z" level=info msg="ImageCreate event name:\"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:17.935518 containerd[1480]: time="2025-02-13T20:09:17.933865352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:17.935518 containerd[1480]: time="2025-02-13T20:09:17.935179143Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"26215036\" in 1.63741082s" Feb 13 20:09:17.935518 containerd[1480]: time="2025-02-13T20:09:17.935219303Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\"" Feb 13 20:09:17.936401 containerd[1480]: time="2025-02-13T20:09:17.936367775Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 20:09:19.035736 containerd[1480]: time="2025-02-13T20:09:19.035627636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:19.037574 containerd[1480]: time="2025-02-13T20:09:19.037340227Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=22528165" Feb 13 20:09:19.038802 containerd[1480]: time="2025-02-13T20:09:19.038741579Z" level=info msg="ImageCreate event name:\"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:19.043014 containerd[1480]: time="2025-02-13T20:09:19.042930716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:19.045587 containerd[1480]: time="2025-02-13T20:09:19.044595387Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"23968941\" in 1.107816974s" Feb 13 20:09:19.045587 containerd[1480]: time="2025-02-13T20:09:19.044657027Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\"" Feb 13 20:09:19.046395 
containerd[1480]: time="2025-02-13T20:09:19.046030459Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 20:09:20.030944 containerd[1480]: time="2025-02-13T20:09:20.030857183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:20.034139 containerd[1480]: time="2025-02-13T20:09:20.034056087Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=17480820" Feb 13 20:09:20.036128 containerd[1480]: time="2025-02-13T20:09:20.035561640Z" level=info msg="ImageCreate event name:\"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:20.041565 containerd[1480]: time="2025-02-13T20:09:20.040791135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:20.042506 containerd[1480]: time="2025-02-13T20:09:20.042447647Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"18921614\" in 996.355228ms" Feb 13 20:09:20.042506 containerd[1480]: time="2025-02-13T20:09:20.042499566Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\"" Feb 13 20:09:20.043653 containerd[1480]: time="2025-02-13T20:09:20.043618121Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 20:09:21.045749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2370922590.mount: Deactivated successfully. 
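Each pull above pairs a byte count (`bytes read=...`) with a wall-clock time (`in ...`), which is enough for a back-of-the-envelope throughput figure. Using the kube-scheduler numbers from the entries above (Go's `time.ParseDuration` accepts both the `ms` and `s` forms logged by containerd):

```go
// Sketch: rough effective pull rate from the two figures containerd logs
// for the kube-scheduler image above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 17480820 // from "stop pulling ... bytes read=17480820"
	d, err := time.ParseDuration("996.355228ms") // from "... in 996.355228ms"
	if err != nil {
		panic(err)
	}
	mibps := float64(bytesRead) / d.Seconds() / (1 << 20)
	fmt.Printf("~%.1f MiB/s effective pull rate\n", mibps)
}
```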
Feb 13 20:09:21.360722 containerd[1480]: time="2025-02-13T20:09:21.360481093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:21.363981 containerd[1480]: time="2025-02-13T20:09:21.363802359Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=27363408" Feb 13 20:09:21.366588 containerd[1480]: time="2025-02-13T20:09:21.365801470Z" level=info msg="ImageCreate event name:\"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:21.369268 containerd[1480]: time="2025-02-13T20:09:21.369170696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:21.370090 containerd[1480]: time="2025-02-13T20:09:21.369888933Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"27362401\" in 1.326087133s" Feb 13 20:09:21.370090 containerd[1480]: time="2025-02-13T20:09:21.369931972Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\"" Feb 13 20:09:21.370674 containerd[1480]: time="2025-02-13T20:09:21.370624730Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 20:09:21.990250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1995224637.mount: Deactivated successfully. 
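The `var-lib-containerd-tmpmounts-containerd\x2dmount....mount` unit names above use systemd's path escaping: `/` becomes `-`, and a literal `-` in the path becomes `\x2d`. A minimal decoder for the form seen here (handles only `\xNN` escapes; error handling elided for brevity):

```go
// Sketch: reverse systemd's path escaping for the transient mount unit
// names in the journal above.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func unescapeUnitPath(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		switch {
		case name[i] == '-':
			b.WriteByte('/') // systemd maps "/" to "-"
		case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
			n, _ := strconv.ParseUint(name[i+2:i+4], 16, 8) // parse error ignored in this sketch
			b.WriteByte(byte(n))
			i += 3
		default:
			b.WriteByte(name[i])
		}
	}
	return b.String()
}

func main() {
	fmt.Println(unescapeUnitPath(`var-lib-containerd-tmpmounts-containerd\x2dmount2370922590.mount`))
	// -> var/lib/containerd/tmpmounts/containerd-mount2370922590
}
```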
Feb 13 20:09:22.687001 containerd[1480]: time="2025-02-13T20:09:22.686939246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:22.689576 containerd[1480]: time="2025-02-13T20:09:22.688521200Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Feb 13 20:09:22.689576 containerd[1480]: time="2025-02-13T20:09:22.689419757Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:22.699032 containerd[1480]: time="2025-02-13T20:09:22.698976081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:22.700901 containerd[1480]: time="2025-02-13T20:09:22.700842634Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.330159746s" Feb 13 20:09:22.700901 containerd[1480]: time="2025-02-13T20:09:22.700899514Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Feb 13 20:09:22.702563 containerd[1480]: time="2025-02-13T20:09:22.702483268Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 20:09:22.726822 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Feb 13 20:09:22.733921 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:09:22.876496 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:09:22.893059 (kubelet)[2152]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:09:22.942248 kubelet[2152]: E0213 20:09:22.942018 2152 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:09:22.945279 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:09:22.945638 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:09:23.214376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount121707944.mount: Deactivated successfully. 
Feb 13 20:09:23.221956 containerd[1480]: time="2025-02-13T20:09:23.221891139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:23.222933 containerd[1480]: time="2025-02-13T20:09:23.222887696Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Feb 13 20:09:23.223958 containerd[1480]: time="2025-02-13T20:09:23.223890493Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:23.226760 containerd[1480]: time="2025-02-13T20:09:23.226703244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:23.227877 containerd[1480]: time="2025-02-13T20:09:23.227683521Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 525.011053ms" Feb 13 20:09:23.227877 containerd[1480]: time="2025-02-13T20:09:23.227724241Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 20:09:23.228346 containerd[1480]: time="2025-02-13T20:09:23.228326359Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 20:09:23.806918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2193803798.mount: Deactivated successfully. Feb 13 20:09:25.159985 containerd[1480]: time="2025-02-13T20:09:25.158667614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:25.161622 containerd[1480]: time="2025-02-13T20:09:25.161579328Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812491" Feb 13 20:09:25.163964 containerd[1480]: time="2025-02-13T20:09:25.163918123Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:25.168284 containerd[1480]: time="2025-02-13T20:09:25.168230994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:25.170876 containerd[1480]: time="2025-02-13T20:09:25.170817069Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 1.942235551s" Feb 13 20:09:25.171162 containerd[1480]: time="2025-02-13T20:09:25.171137268Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Feb 13 20:09:31.244908 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 20:09:31.255294 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:09:31.288127 systemd[1]: Reloading requested from client PID 2247 ('systemctl') (unit session-7.scope)... Feb 13 20:09:31.288300 systemd[1]: Reloading... Feb 13 20:09:31.406574 zram_generator::config[2293]: No configuration found. Feb 13 20:09:31.504306 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:09:31.572697 systemd[1]: Reloading finished in 283 ms. Feb 13 20:09:31.633048 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 20:09:31.633412 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 20:09:31.633869 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:09:31.645124 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:09:31.774409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:09:31.789197 (kubelet)[2335]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:09:31.859564 kubelet[2335]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:09:31.859564 kubelet[2335]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 20:09:31.859564 kubelet[2335]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
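The three deprecation warnings above ask for the flags to move into the very config file whose absence caused the earlier crash loop. A hedged sketch of writing an equivalent minimal `KubeletConfiguration` — the field names follow `kubelet.config.k8s.io/v1beta1`; the runtime endpoint value is an assumption (this host runs containerd), while the volume plugin dir matches the Flexvolume path logged below:

```go
// Sketch (assumptions noted inline): write a minimal kubelet config file
// carrying the settings the deprecated flags used to carry. Needs root,
// since /var/lib/kubelet is root-owned.
package main

import "os"

const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
`

func main() {
	if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644); err != nil {
		panic(err)
	}
}
```

Note `--pod-infra-container-image` has no config-file equivalent: as the warning says, the image garbage collector now learns the sandbox image from the CRI.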
Feb 13 20:09:31.860813 kubelet[2335]: I0213 20:09:31.859723 2335 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:09:33.393566 kubelet[2335]: I0213 20:09:33.392404 2335 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 20:09:33.393566 kubelet[2335]: I0213 20:09:33.392449 2335 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:09:33.393566 kubelet[2335]: I0213 20:09:33.392922 2335 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 20:09:33.434450 kubelet[2335]: E0213 20:09:33.434402 2335 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://78.47.136.246:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 78.47.136.246:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:09:33.436621 kubelet[2335]: I0213 20:09:33.436571 2335 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:09:33.446753 kubelet[2335]: E0213 20:09:33.446707 2335 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:09:33.446753 kubelet[2335]: I0213 20:09:33.446747 2335 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:09:33.450584 kubelet[2335]: I0213 20:09:33.450515 2335 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:09:33.451505 kubelet[2335]: I0213 20:09:33.451418 2335 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:09:33.451710 kubelet[2335]: I0213 20:09:33.451477 2335 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-1-c-c4549fc0d2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:09:33.451804 kubelet[2335]: I0213 20:09:33.451768 2335 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:09:33.451804 kubelet[2335]: I0213 20:09:33.451778 2335 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 20:09:33.452011 kubelet[2335]: I0213 20:09:33.451980 2335 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:09:33.455923 kubelet[2335]: I0213 20:09:33.455716 2335 kubelet.go:446] "Attempting to sync node with API server" Feb 13 20:09:33.455923 kubelet[2335]: I0213 20:09:33.455757 2335 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:09:33.455923 kubelet[2335]: I0213 20:09:33.455778 2335 kubelet.go:352] "Adding apiserver pod source" Feb 13 20:09:33.455923 kubelet[2335]: I0213 20:09:33.455795 2335 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:09:33.460623 kubelet[2335]: W0213 20:09:33.460004 2335 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://78.47.136.246:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 78.47.136.246:6443: connect: connection refused Feb 13 20:09:33.460623 kubelet[2335]: E0213 20:09:33.460078 2335 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://78.47.136.246:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 78.47.136.246:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:09:33.460623 kubelet[2335]: W0213 
20:09:33.460154 2335 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://78.47.136.246:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-1-c-c4549fc0d2&limit=500&resourceVersion=0": dial tcp 78.47.136.246:6443: connect: connection refused Feb 13 20:09:33.460623 kubelet[2335]: E0213 20:09:33.460182 2335 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://78.47.136.246:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-1-c-c4549fc0d2&limit=500&resourceVersion=0\": dial tcp 78.47.136.246:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:09:33.461108 kubelet[2335]: I0213 20:09:33.460982 2335 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:09:33.462567 kubelet[2335]: I0213 20:09:33.461695 2335 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:09:33.462567 kubelet[2335]: W0213 20:09:33.461835 2335 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 20:09:33.464456 kubelet[2335]: I0213 20:09:33.464416 2335 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 20:09:33.464456 kubelet[2335]: I0213 20:09:33.464463 2335 server.go:1287] "Started kubelet" Feb 13 20:09:33.472469 kubelet[2335]: I0213 20:09:33.472417 2335 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:09:33.474631 kubelet[2335]: I0213 20:09:33.474589 2335 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:09:33.474833 kubelet[2335]: I0213 20:09:33.474590 2335 server.go:490] "Adding debug handlers to kubelet server" Feb 13 20:09:33.477236 kubelet[2335]: I0213 20:09:33.477160 2335 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:09:33.477693 kubelet[2335]: I0213 20:09:33.477676 2335 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:09:33.481369 kubelet[2335]: E0213 20:09:33.481075 2335 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://78.47.136.246:6443/api/v1/namespaces/default/events\": dial tcp 78.47.136.246:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-1-c-c4549fc0d2.1823dd747c2d6f82 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-1-c-c4549fc0d2,UID:ci-4081-3-1-c-c4549fc0d2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-1-c-c4549fc0d2,},FirstTimestamp:2025-02-13 20:09:33.464440706 +0000 UTC m=+1.671333025,LastTimestamp:2025-02-13 20:09:33.464440706 +0000 UTC m=+1.671333025,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-1-c-c4549fc0d2,}" Feb 13 20:09:33.481924 kubelet[2335]: I0213 20:09:33.481871 2335 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 20:09:33.482246 kubelet[2335]: E0213 20:09:33.482217 2335 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-1-c-c4549fc0d2\" not found" Feb 13 20:09:33.483641 kubelet[2335]: I0213 20:09:33.482385 2335 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:09:33.483641 kubelet[2335]: I0213 20:09:33.483405 2335 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:09:33.483641 kubelet[2335]: I0213 20:09:33.483479 2335 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:09:33.485685 kubelet[2335]: W0213 20:09:33.485617 2335 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://78.47.136.246:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.136.246:6443: connect: connection refused Feb 13 20:09:33.485756 kubelet[2335]: E0213 20:09:33.485696 2335 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://78.47.136.246:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 78.47.136.246:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:09:33.485800 kubelet[2335]: E0213 20:09:33.485771 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.136.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-1-c-c4549fc0d2?timeout=10s\": dial tcp 78.47.136.246:6443: connect: connection refused" interval="200ms" Feb 13 20:09:33.487145 kubelet[2335]: E0213 20:09:33.487035 2335 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:09:33.488959 kubelet[2335]: I0213 20:09:33.488920 2335 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:09:33.488959 kubelet[2335]: I0213 20:09:33.488945 2335 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:09:33.489153 kubelet[2335]: I0213 20:09:33.489034 2335 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:09:33.511700 kubelet[2335]: I0213 20:09:33.511637 2335 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:09:33.516205 kubelet[2335]: I0213 20:09:33.516167 2335 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:09:33.516703 kubelet[2335]: I0213 20:09:33.516684 2335 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 20:09:33.520023 kubelet[2335]: I0213 20:09:33.519978 2335 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
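Every reflector List/Watch and the lease request above fail with `connection refused` against `https://78.47.136.246:6443`. This is the control-plane bootstrap chicken-and-egg: the kubelet needs the apiserver, but the apiserver is one of the static pods this kubelet is about to start, so the errors are expected to persist until that pod is up. A trivial probe reproducing the dial error:

```go
// Sketch: probe the apiserver endpoint the reflectors are dialing; until
// the kube-apiserver static pod starts, this prints the same
// "connect: connection refused" seen in the journal.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "78.47.136.246:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not up yet:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver reachable")
}
```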
Feb 13 20:09:33.520189 kubelet[2335]: I0213 20:09:33.520177 2335 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 20:09:33.520417 kubelet[2335]: E0213 20:09:33.520395 2335 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:09:33.524793 kubelet[2335]: W0213 20:09:33.524730 2335 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://78.47.136.246:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.136.246:6443: connect: connection refused Feb 13 20:09:33.524996 kubelet[2335]: E0213 20:09:33.524802 2335 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://78.47.136.246:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 78.47.136.246:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:09:33.527474 kubelet[2335]: I0213 20:09:33.527390 2335 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 20:09:33.527474 kubelet[2335]: I0213 20:09:33.527450 2335 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 20:09:33.527474 kubelet[2335]: I0213 20:09:33.527475 2335 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:09:33.531261 kubelet[2335]: I0213 20:09:33.531216 2335 policy_none.go:49] "None policy: Start" Feb 13 20:09:33.531261 kubelet[2335]: I0213 20:09:33.531259 2335 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 20:09:33.531261 kubelet[2335]: I0213 20:09:33.531278 2335 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:09:33.540365 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 20:09:33.552146 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 20:09:33.558074 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 20:09:33.566094 kubelet[2335]: I0213 20:09:33.566007 2335 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:09:33.566573 kubelet[2335]: I0213 20:09:33.566349 2335 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:09:33.566573 kubelet[2335]: I0213 20:09:33.566392 2335 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:09:33.567010 kubelet[2335]: I0213 20:09:33.566943 2335 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:09:33.569978 kubelet[2335]: E0213 20:09:33.569890 2335 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 20:09:33.569978 kubelet[2335]: E0213 20:09:33.569958 2335 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-1-c-c4549fc0d2\" not found" Feb 13 20:09:33.640730 systemd[1]: Created slice kubepods-burstable-podf97fcf455b5d33e5424d7c07d958d2bb.slice - libcontainer container kubepods-burstable-podf97fcf455b5d33e5424d7c07d958d2bb.slice. 
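The `kubepods-burstable-pod<uid>.slice` names above come from the kubelet's systemd cgroup driver (`"CgroupDriver":"systemd"` in the node config earlier): `kubepods-<qos>-pod<uid>.slice`, with dashes in the UID mapped to underscores — an assumption based on the driver's usual convention; these static-pod "UIDs" are manifest hashes, so they contain none. A sketch of the naming:

```go
// Sketch: compose the systemd slice name the kubelet creates per pod,
// matching the "Created slice kubepods-burstable-pod..." entries above.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qos, uid string) string {
	// Assumed convention: "-" in a pod UID becomes "_" in the unit name.
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("burstable", "f97fcf455b5d33e5424d7c07d958d2bb"))
	// -> kubepods-burstable-podf97fcf455b5d33e5424d7c07d958d2bb.slice
}
```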
Feb 13 20:09:33.668868 kubelet[2335]: E0213 20:09:33.668749 2335 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-1-c-c4549fc0d2\" not found" node="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:09:33.670261 kubelet[2335]: I0213 20:09:33.669158 2335 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:09:33.670959 kubelet[2335]: E0213 20:09:33.670880 2335 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://78.47.136.246:6443/api/v1/nodes\": dial tcp 78.47.136.246:6443: connect: connection refused" node="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:09:33.676728 systemd[1]: Created slice kubepods-burstable-pod9e751c01e356f7e42fa77a4055cdcd2e.slice - libcontainer container kubepods-burstable-pod9e751c01e356f7e42fa77a4055cdcd2e.slice. Feb 13 20:09:33.679318 kubelet[2335]: E0213 20:09:33.679065 2335 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-1-c-c4549fc0d2\" not found" node="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:09:33.681275 systemd[1]: Created slice kubepods-burstable-pod677bd5166676674372e8dde0aec11596.slice - libcontainer container kubepods-burstable-pod677bd5166676674372e8dde0aec11596.slice. Feb 13 20:09:33.683677 kubelet[2335]: E0213 20:09:33.683629 2335 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-1-c-c4549fc0d2\" not found" node="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:09:33.687088 kubelet[2335]: E0213 20:09:33.687026 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.136.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-1-c-c4549fc0d2?timeout=10s\": dial tcp 78.47.136.246:6443: connect: connection refused" interval="400ms" Feb 13 20:09:33.785273 kubelet[2335]: I0213 20:09:33.784784 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/677bd5166676674372e8dde0aec11596-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-1-c-c4549fc0d2\" (UID: \"677bd5166676674372e8dde0aec11596\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-c-c4549fc0d2" Feb 13 20:09:33.785273 kubelet[2335]: I0213 20:09:33.784851 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/677bd5166676674372e8dde0aec11596-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-1-c-c4549fc0d2\" (UID: \"677bd5166676674372e8dde0aec11596\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-c-c4549fc0d2" Feb 13 20:09:33.785273 kubelet[2335]: I0213 20:09:33.784883 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/677bd5166676674372e8dde0aec11596-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-1-c-c4549fc0d2\" (UID: \"677bd5166676674372e8dde0aec11596\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-c-c4549fc0d2" Feb 13 20:09:33.785273 kubelet[2335]: I0213 20:09:33.784922 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f97fcf455b5d33e5424d7c07d958d2bb-kubeconfig\") pod \"kube-scheduler-ci-4081-3-1-c-c4549fc0d2\" (UID: 
\"f97fcf455b5d33e5424d7c07d958d2bb\") " pod="kube-system/kube-scheduler-ci-4081-3-1-c-c4549fc0d2" Feb 13 20:09:33.785273 kubelet[2335]: I0213 20:09:33.784956 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9e751c01e356f7e42fa77a4055cdcd2e-ca-certs\") pod \"kube-apiserver-ci-4081-3-1-c-c4549fc0d2\" (UID: \"9e751c01e356f7e42fa77a4055cdcd2e\") " pod="kube-system/kube-apiserver-ci-4081-3-1-c-c4549fc0d2" Feb 13 20:09:33.786017 kubelet[2335]: I0213 20:09:33.784985 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/677bd5166676674372e8dde0aec11596-ca-certs\") pod \"kube-controller-manager-ci-4081-3-1-c-c4549fc0d2\" (UID: \"677bd5166676674372e8dde0aec11596\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-c-c4549fc0d2" Feb 13 20:09:33.786017 kubelet[2335]: I0213 20:09:33.785015 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9e751c01e356f7e42fa77a4055cdcd2e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-1-c-c4549fc0d2\" (UID: \"9e751c01e356f7e42fa77a4055cdcd2e\") " pod="kube-system/kube-apiserver-ci-4081-3-1-c-c4549fc0d2" Feb 13 20:09:33.786017 kubelet[2335]: I0213 20:09:33.785048 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e751c01e356f7e42fa77a4055cdcd2e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-1-c-c4549fc0d2\" (UID: \"9e751c01e356f7e42fa77a4055cdcd2e\") " pod="kube-system/kube-apiserver-ci-4081-3-1-c-c4549fc0d2" Feb 13 20:09:33.786017 kubelet[2335]: I0213 20:09:33.785081 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/677bd5166676674372e8dde0aec11596-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-1-c-c4549fc0d2\" (UID: \"677bd5166676674372e8dde0aec11596\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-c-c4549fc0d2" Feb 13 20:09:33.873842 kubelet[2335]: I0213 20:09:33.873464 2335 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:09:33.873842 kubelet[2335]: E0213 20:09:33.873810 2335 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://78.47.136.246:6443/api/v1/nodes\": dial tcp 78.47.136.246:6443: connect: connection refused" node="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:09:33.971408 containerd[1480]: time="2025-02-13T20:09:33.970962629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-1-c-c4549fc0d2,Uid:f97fcf455b5d33e5424d7c07d958d2bb,Namespace:kube-system,Attempt:0,}" Feb 13 20:09:33.980736 containerd[1480]: time="2025-02-13T20:09:33.980600003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-1-c-c4549fc0d2,Uid:9e751c01e356f7e42fa77a4055cdcd2e,Namespace:kube-system,Attempt:0,}" Feb 13 20:09:33.985567 containerd[1480]: time="2025-02-13T20:09:33.985108090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-1-c-c4549fc0d2,Uid:677bd5166676674372e8dde0aec11596,Namespace:kube-system,Attempt:0,}" Feb 13 20:09:34.088511 kubelet[2335]: E0213 20:09:34.088448 2335 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://78.47.136.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-1-c-c4549fc0d2?timeout=10s\": dial tcp 78.47.136.246:6443: connect: connection refused" interval="800ms" Feb 13 20:09:34.268321 kubelet[2335]: W0213 20:09:34.268187 2335 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://78.47.136.246:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 78.47.136.246:6443: connect: connection refused Feb 13 20:09:34.268321 kubelet[2335]: E0213 20:09:34.268282 2335 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://78.47.136.246:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 78.47.136.246:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:09:34.278226 kubelet[2335]: I0213 20:09:34.278192 2335 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:09:34.278730 kubelet[2335]: E0213 20:09:34.278697 2335 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://78.47.136.246:6443/api/v1/nodes\": dial tcp 78.47.136.246:6443: connect: connection refused" node="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:09:34.503662 kubelet[2335]: W0213 20:09:34.503574 2335 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://78.47.136.246:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-1-c-c4549fc0d2&limit=500&resourceVersion=0": dial tcp 78.47.136.246:6443: connect: connection refused Feb 13 20:09:34.504724 kubelet[2335]: E0213 20:09:34.504625 2335 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://78.47.136.246:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-1-c-c4549fc0d2&limit=500&resourceVersion=0\": dial tcp 78.47.136.246:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:09:34.505605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount114927991.mount: Deactivated successfully. 
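The `Failed to ensure lease exists, will retry` entries back off by doubling: `interval="200ms"`, then `"400ms"`, then `"800ms"` above, and `"1.6s"` later in the log. A minimal sketch of that schedule (illustrative; any cap or jitter the real controller applies is not visible here):

```go
// Sketch: the doubling retry intervals observed for the node-lease
// controller while the apiserver is unreachable.
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond
	for i := 0; i < 4; i++ {
		fmt.Println("retry interval:", interval) // 200ms, 400ms, 800ms, 1.6s
		interval *= 2
	}
}
```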
Feb 13 20:09:34.519015 containerd[1480]: time="2025-02-13T20:09:34.518776251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:09:34.522244 containerd[1480]: time="2025-02-13T20:09:34.522197658Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:09:34.524599 containerd[1480]: time="2025-02-13T20:09:34.524533702Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Feb 13 20:09:34.527824 containerd[1480]: time="2025-02-13T20:09:34.527010066Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:09:34.529529 containerd[1480]: time="2025-02-13T20:09:34.529318430Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:09:34.532299 containerd[1480]: time="2025-02-13T20:09:34.532245876Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:09:34.533135 containerd[1480]: time="2025-02-13T20:09:34.533062837Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:09:34.534638 containerd[1480]: time="2025-02-13T20:09:34.534500800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:09:34.535556 containerd[1480]: time="2025-02-13T20:09:34.535503602Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 564.447812ms" Feb 13 20:09:34.538407 containerd[1480]: time="2025-02-13T20:09:34.538072126Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 552.857556ms" Feb 13 20:09:34.554302 containerd[1480]: time="2025-02-13T20:09:34.554246636Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 573.548953ms" Feb 13 20:09:34.663426 containerd[1480]: time="2025-02-13T20:09:34.662939113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:09:34.663426 containerd[1480]: time="2025-02-13T20:09:34.662999073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:09:34.663426 containerd[1480]: time="2025-02-13T20:09:34.663010313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:34.663426 containerd[1480]: time="2025-02-13T20:09:34.663094873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:34.663716 containerd[1480]: time="2025-02-13T20:09:34.663608434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:09:34.663716 containerd[1480]: time="2025-02-13T20:09:34.663666914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:09:34.663716 containerd[1480]: time="2025-02-13T20:09:34.663683914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:34.663821 containerd[1480]: time="2025-02-13T20:09:34.663761154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:34.675477 containerd[1480]: time="2025-02-13T20:09:34.675342495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:09:34.675477 containerd[1480]: time="2025-02-13T20:09:34.675416175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:09:34.675477 containerd[1480]: time="2025-02-13T20:09:34.675446015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:34.675880 containerd[1480]: time="2025-02-13T20:09:34.675736976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:34.692220 systemd[1]: Started cri-containerd-9b87562b9b2ffdee43ac8f60ca593b7dd1758e04d980dcf0eb7e2654eb072935.scope - libcontainer container 9b87562b9b2ffdee43ac8f60ca593b7dd1758e04d980dcf0eb7e2654eb072935. Feb 13 20:09:34.703228 systemd[1]: Started cri-containerd-afea08c9b3da94e611f264719c7e235da29a7eb3374e2e0ee55d4f761d9c1f1f.scope - libcontainer container afea08c9b3da94e611f264719c7e235da29a7eb3374e2e0ee55d4f761d9c1f1f. Feb 13 20:09:34.711764 systemd[1]: Started cri-containerd-c46e12c2d6da04adabcbe6f64317aab3a8e9091a7b473210c0f0f410438acfb1.scope - libcontainer container c46e12c2d6da04adabcbe6f64317aab3a8e9091a7b473210c0f0f410438acfb1. 
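For each static pod the log then shows the same CRI sequence: `RunPodSandbox` returns a sandbox id, `CreateContainer` within that sandbox returns a container id, and `StartContainer` reports success. A schematic of that flow with a stand-in interface — deliberately simplified, not the real `k8s.io/cri-api` surface:

```go
// Schematic of the sandbox -> create -> start sequence visible in the
// journal; fakeRuntime invents IDs purely for the demo.
package main

import "fmt"

type runtimeService interface {
	RunPodSandbox(pod string) (string, error)
	CreateContainer(sandboxID, name string) (string, error)
	StartContainer(containerID string) error
}

type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(pod string) (string, error) {
	f.n++
	return fmt.Sprintf("sandbox-%d", f.n), nil
}

func (f *fakeRuntime) CreateContainer(sb, name string) (string, error) {
	return sb + "/" + name, nil
}

func (f *fakeRuntime) StartContainer(id string) error {
	fmt.Printf("StartContainer for %q returns successfully\n", id)
	return nil
}

func startStaticPod(rs runtimeService, pod, container string) error {
	sb, err := rs.RunPodSandbox(pod)
	if err != nil {
		return fmt.Errorf("RunPodSandbox: %w", err)
	}
	id, err := rs.CreateContainer(sb, container)
	if err != nil {
		return fmt.Errorf("CreateContainer: %w", err)
	}
	return rs.StartContainer(id)
}

func main() {
	rt := &fakeRuntime{}
	for _, p := range []string{"kube-apiserver", "kube-controller-manager", "kube-scheduler"} {
		if err := startStaticPod(rt, p, p); err != nil {
			panic(err)
		}
	}
}
```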
Feb 13 20:09:34.781530 containerd[1480]: time="2025-02-13T20:09:34.781145167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-1-c-c4549fc0d2,Uid:677bd5166676674372e8dde0aec11596,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b87562b9b2ffdee43ac8f60ca593b7dd1758e04d980dcf0eb7e2654eb072935\"" Feb 13 20:09:34.785773 containerd[1480]: time="2025-02-13T20:09:34.785685095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-1-c-c4549fc0d2,Uid:9e751c01e356f7e42fa77a4055cdcd2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c46e12c2d6da04adabcbe6f64317aab3a8e9091a7b473210c0f0f410438acfb1\"" Feb 13 20:09:34.792088 containerd[1480]: time="2025-02-13T20:09:34.791713546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-1-c-c4549fc0d2,Uid:f97fcf455b5d33e5424d7c07d958d2bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"afea08c9b3da94e611f264719c7e235da29a7eb3374e2e0ee55d4f761d9c1f1f\"" Feb 13 20:09:34.792600 containerd[1480]: time="2025-02-13T20:09:34.792560788Z" level=info msg="CreateContainer within sandbox \"9b87562b9b2ffdee43ac8f60ca593b7dd1758e04d980dcf0eb7e2654eb072935\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:09:34.799124 containerd[1480]: time="2025-02-13T20:09:34.799087760Z" level=info msg="CreateContainer within sandbox \"afea08c9b3da94e611f264719c7e235da29a7eb3374e2e0ee55d4f761d9c1f1f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:09:34.799624 containerd[1480]: time="2025-02-13T20:09:34.799585081Z" level=info msg="CreateContainer within sandbox \"c46e12c2d6da04adabcbe6f64317aab3a8e9091a7b473210c0f0f410438acfb1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:09:34.823489 containerd[1480]: time="2025-02-13T20:09:34.823344724Z" level=info msg="CreateContainer within sandbox \"9b87562b9b2ffdee43ac8f60ca593b7dd1758e04d980dcf0eb7e2654eb072935\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a374ca83a33052424b01772151a0cd12619e6b609faf2f7228f9aeff62674c6d\"" Feb 13 20:09:34.824865 containerd[1480]: time="2025-02-13T20:09:34.824757966Z" level=info msg="StartContainer for \"a374ca83a33052424b01772151a0cd12619e6b609faf2f7228f9aeff62674c6d\"" Feb 13 20:09:34.831527 containerd[1480]: time="2025-02-13T20:09:34.831455138Z" level=info msg="CreateContainer within sandbox \"c46e12c2d6da04adabcbe6f64317aab3a8e9091a7b473210c0f0f410438acfb1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e79e32b31a57b0ab2ccb2f47ef31e16082079983435c0230759612f0df40e4d2\"" Feb 13 20:09:34.832081 containerd[1480]: time="2025-02-13T20:09:34.832017819Z" level=info msg="StartContainer for \"e79e32b31a57b0ab2ccb2f47ef31e16082079983435c0230759612f0df40e4d2\"" Feb 13 20:09:34.842705 containerd[1480]: time="2025-02-13T20:09:34.842533678Z" level=info msg="CreateContainer within sandbox \"afea08c9b3da94e611f264719c7e235da29a7eb3374e2e0ee55d4f761d9c1f1f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e5fc71dffd6d2873debb36b9754476cd76c54193dfa56dd399a3be8a86dbaf34\"" Feb 13 20:09:34.847533 containerd[1480]: time="2025-02-13T20:09:34.847481207Z" level=info msg="StartContainer for \"e5fc71dffd6d2873debb36b9754476cd76c54193dfa56dd399a3be8a86dbaf34\"" Feb 13 20:09:34.869812 systemd[1]: Started cri-containerd-a374ca83a33052424b01772151a0cd12619e6b609faf2f7228f9aeff62674c6d.scope - libcontainer container 
a374ca83a33052424b01772151a0cd12619e6b609faf2f7228f9aeff62674c6d.
Feb 13 20:09:34.875340 systemd[1]: Started cri-containerd-e79e32b31a57b0ab2ccb2f47ef31e16082079983435c0230759612f0df40e4d2.scope - libcontainer container e79e32b31a57b0ab2ccb2f47ef31e16082079983435c0230759612f0df40e4d2.
Feb 13 20:09:34.890489 kubelet[2335]: E0213 20:09:34.890405 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.136.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-1-c-c4549fc0d2?timeout=10s\": dial tcp 78.47.136.246:6443: connect: connection refused" interval="1.6s"
Feb 13 20:09:34.905994 kubelet[2335]: W0213 20:09:34.905696 2335 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://78.47.136.246:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.136.246:6443: connect: connection refused
Feb 13 20:09:34.905994 kubelet[2335]: E0213 20:09:34.905768 2335 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://78.47.136.246:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 78.47.136.246:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:09:34.913946 systemd[1]: Started cri-containerd-e5fc71dffd6d2873debb36b9754476cd76c54193dfa56dd399a3be8a86dbaf34.scope - libcontainer container e5fc71dffd6d2873debb36b9754476cd76c54193dfa56dd399a3be8a86dbaf34.
Feb 13 20:09:34.938132 containerd[1480]: time="2025-02-13T20:09:34.937956811Z" level=info msg="StartContainer for \"e79e32b31a57b0ab2ccb2f47ef31e16082079983435c0230759612f0df40e4d2\" returns successfully"
Feb 13 20:09:34.966226 containerd[1480]: time="2025-02-13T20:09:34.966070542Z" level=info msg="StartContainer for \"a374ca83a33052424b01772151a0cd12619e6b609faf2f7228f9aeff62674c6d\" returns successfully"
Feb 13 20:09:35.016915 containerd[1480]: time="2025-02-13T20:09:35.016782160Z" level=info msg="StartContainer for \"e5fc71dffd6d2873debb36b9754476cd76c54193dfa56dd399a3be8a86dbaf34\" returns successfully"
Feb 13 20:09:35.030696 kubelet[2335]: W0213 20:09:35.030228 2335 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://78.47.136.246:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.136.246:6443: connect: connection refused
Feb 13 20:09:35.030696 kubelet[2335]: E0213 20:09:35.030343 2335 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://78.47.136.246:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 78.47.136.246:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:09:35.083104 kubelet[2335]: I0213 20:09:35.082471 2335 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:35.084346 kubelet[2335]: E0213 20:09:35.083854 2335 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://78.47.136.246:6443/api/v1/nodes\": dial tcp 78.47.136.246:6443: connect: connection refused" node="ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:35.540743 kubelet[2335]: E0213 20:09:35.539866 2335 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-1-c-c4549fc0d2\" not found" node="ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:35.545201 kubelet[2335]: E0213 20:09:35.544941 2335 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-1-c-c4549fc0d2\" not found" node="ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:35.551154 kubelet[2335]: E0213 20:09:35.550877 2335 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-1-c-c4549fc0d2\" not found" node="ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:36.552906 kubelet[2335]: E0213 20:09:36.552680 2335 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-1-c-c4549fc0d2\" not found" node="ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:36.552906 kubelet[2335]: E0213 20:09:36.552775 2335 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-1-c-c4549fc0d2\" not found" node="ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:36.688159 kubelet[2335]: I0213 20:09:36.687738 2335 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:38.045249 kubelet[2335]: E0213 20:09:38.044937 2335 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-1-c-c4549fc0d2\" not found" node="ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:38.130951 kubelet[2335]: E0213 20:09:38.130877 2335 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-1-c-c4549fc0d2\" not found" node="ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:38.163995 kubelet[2335]: I0213 20:09:38.163701 2335 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:38.184057 kubelet[2335]: I0213 20:09:38.183923 2335 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:38.199114 kubelet[2335]: E0213 20:09:38.199028 2335 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-1-c-c4549fc0d2\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:38.199114 kubelet[2335]: I0213 20:09:38.199072 2335 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:38.202291 kubelet[2335]: E0213 20:09:38.202241 2335 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-1-c-c4549fc0d2\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:38.202291 kubelet[2335]: I0213 20:09:38.202281 2335 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:38.207708 kubelet[2335]: E0213 20:09:38.207651 2335 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-1-c-c4549fc0d2\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:38.211432 kubelet[2335]: I0213 20:09:38.211121 2335 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:38.216393 kubelet[2335]: E0213 20:09:38.216312 2335 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-1-c-c4549fc0d2\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:38.463692 kubelet[2335]: I0213 20:09:38.463065 2335 apiserver.go:52] "Watching apiserver"
Feb 13 20:09:38.484202 kubelet[2335]: I0213 20:09:38.483718 2335 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 20:09:40.749358 systemd[1]: Reloading requested from client PID 2610 ('systemctl') (unit session-7.scope)...
Feb 13 20:09:40.749481 systemd[1]: Reloading...
Feb 13 20:09:40.852689 zram_generator::config[2650]: No configuration found.
Feb 13 20:09:40.962423 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:09:41.056530 systemd[1]: Reloading finished in 306 ms.
Feb 13 20:09:41.108488 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:09:41.117309 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 20:09:41.117675 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:09:41.117746 systemd[1]: kubelet.service: Consumed 2.164s CPU time, 124.4M memory peak, 0B memory swap peak.
Feb 13 20:09:41.131237 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:09:41.257919 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:09:41.258298 (kubelet)[2695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 20:09:41.314011 kubelet[2695]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 20:09:41.314011 kubelet[2695]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Feb 13 20:09:41.314011 kubelet[2695]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 20:09:41.314011 kubelet[2695]: I0213 20:09:41.312515 2695 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 20:09:41.320835 kubelet[2695]: I0213 20:09:41.320797 2695 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Feb 13 20:09:41.321606 kubelet[2695]: I0213 20:09:41.321042 2695 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 20:09:41.321606 kubelet[2695]: I0213 20:09:41.321428 2695 server.go:954] "Client rotation is on, will bootstrap in background"
Feb 13 20:09:41.322945 kubelet[2695]: I0213 20:09:41.322916 2695 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 13 20:09:41.326602 kubelet[2695]: I0213 20:09:41.326562 2695 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 20:09:41.331932 kubelet[2695]: E0213 20:09:41.331844 2695 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 20:09:41.331932 kubelet[2695]: I0213 20:09:41.331922 2695 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 20:09:41.338549 kubelet[2695]: I0213 20:09:41.336452 2695 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 20:09:41.340435 kubelet[2695]: I0213 20:09:41.338903 2695 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 20:09:41.340435 kubelet[2695]: I0213 20:09:41.339064 2695 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-1-c-c4549fc0d2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 20:09:41.340435 kubelet[2695]: I0213 20:09:41.339371 2695 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 20:09:41.340435 kubelet[2695]: I0213 20:09:41.339383 2695 container_manager_linux.go:304] "Creating device plugin manager"
Feb 13 20:09:41.340693 kubelet[2695]: I0213 20:09:41.339444 2695 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 20:09:41.341367 kubelet[2695]: I0213 20:09:41.341014 2695 kubelet.go:446] "Attempting to sync node with API server"
Feb 13 20:09:41.341367 kubelet[2695]: I0213 20:09:41.341071 2695 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 20:09:41.341367 kubelet[2695]: I0213 20:09:41.341114 2695 kubelet.go:352] "Adding apiserver pod source"
Feb 13 20:09:41.341367 kubelet[2695]: I0213 20:09:41.341142 2695 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 20:09:41.347554 kubelet[2695]: I0213 20:09:41.345743 2695 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Feb 13 20:09:41.348218 kubelet[2695]: I0213 20:09:41.348197 2695 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 20:09:41.350555 kubelet[2695]: I0213 20:09:41.348796 2695 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Feb 13 20:09:41.350734 kubelet[2695]: I0213 20:09:41.350717 2695 server.go:1287] "Started kubelet"
Feb 13 20:09:41.351241 kubelet[2695]: I0213 20:09:41.351183 2695 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 20:09:41.351560 kubelet[2695]: I0213 20:09:41.351494 2695 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 20:09:41.353548 kubelet[2695]: I0213 20:09:41.351873 2695 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 20:09:41.353719 kubelet[2695]: I0213 20:09:41.353685 2695 server.go:490] "Adding debug handlers to kubelet server"
Feb 13 20:09:41.358047 kubelet[2695]: I0213 20:09:41.358020 2695 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 20:09:41.367563 kubelet[2695]: I0213 20:09:41.366161 2695 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 20:09:41.370838 kubelet[2695]: I0213 20:09:41.369242 2695 volume_manager.go:297] "Starting Kubelet Volume Manager"
Feb 13 20:09:41.371294 kubelet[2695]: E0213 20:09:41.371267 2695 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-1-c-c4549fc0d2\" not found"
Feb 13 20:09:41.374934 kubelet[2695]: I0213 20:09:41.371978 2695 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 20:09:41.374934 kubelet[2695]: I0213 20:09:41.372100 2695 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 20:09:41.393261 kubelet[2695]: I0213 20:09:41.393202 2695 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 20:09:41.394184 kubelet[2695]: I0213 20:09:41.394160 2695 factory.go:221] Registration of the systemd container factory successfully
Feb 13 20:09:41.394420 kubelet[2695]: I0213 20:09:41.394398 2695 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 20:09:41.397088 kubelet[2695]: I0213 20:09:41.397043 2695 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 20:09:41.397088 kubelet[2695]: I0213 20:09:41.397079 2695 status_manager.go:227] "Starting to sync pod status with apiserver"
Feb 13 20:09:41.397206 kubelet[2695]: I0213 20:09:41.397101 2695 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Feb 13 20:09:41.397206 kubelet[2695]: I0213 20:09:41.397110 2695 kubelet.go:2388] "Starting kubelet main sync loop"
Feb 13 20:09:41.397206 kubelet[2695]: E0213 20:09:41.397163 2695 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 20:09:41.401941 kubelet[2695]: I0213 20:09:41.401891 2695 factory.go:221] Registration of the containerd container factory successfully
Feb 13 20:09:41.463780 kubelet[2695]: I0213 20:09:41.463730 2695 cpu_manager.go:221] "Starting CPU manager" policy="none"
Feb 13 20:09:41.463780 kubelet[2695]: I0213 20:09:41.463758 2695 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Feb 13 20:09:41.463780 kubelet[2695]: I0213 20:09:41.463783 2695 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 20:09:41.463991 kubelet[2695]: I0213 20:09:41.463966 2695 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 20:09:41.463991 kubelet[2695]: I0213 20:09:41.463979 2695 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 13 20:09:41.464049 kubelet[2695]: I0213 20:09:41.463997 2695 policy_none.go:49] "None policy: Start"
Feb 13 20:09:41.464049 kubelet[2695]: I0213 20:09:41.464006 2695 memory_manager.go:186] "Starting memorymanager" policy="None"
Feb 13 20:09:41.464049 kubelet[2695]: I0213 20:09:41.464015 2695 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 20:09:41.464127 kubelet[2695]: I0213 20:09:41.464121 2695 state_mem.go:75] "Updated machine memory state"
Feb 13 20:09:41.470038 kubelet[2695]: I0213 20:09:41.470004 2695 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 20:09:41.470248 kubelet[2695]: I0213 20:09:41.470228 2695 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 20:09:41.470291 kubelet[2695]: I0213 20:09:41.470251 2695 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 20:09:41.471112 kubelet[2695]: I0213 20:09:41.471018 2695 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 20:09:41.473253 kubelet[2695]: E0213 20:09:41.473216 2695 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Feb 13 20:09:41.498388 kubelet[2695]: I0213 20:09:41.498303 2695 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:41.500850 kubelet[2695]: I0213 20:09:41.500812 2695 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:41.501246 kubelet[2695]: I0213 20:09:41.501214 2695 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:41.576753 kubelet[2695]: I0213 20:09:41.574508 2695 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:41.586298 kubelet[2695]: I0213 20:09:41.586248 2695 kubelet_node_status.go:125] "Node was previously registered" node="ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:41.586457 kubelet[2695]: I0213 20:09:41.586373 2695 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:41.673743 kubelet[2695]: I0213 20:09:41.673035 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/677bd5166676674372e8dde0aec11596-ca-certs\") pod \"kube-controller-manager-ci-4081-3-1-c-c4549fc0d2\" (UID: \"677bd5166676674372e8dde0aec11596\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:41.673743 kubelet[2695]: I0213 20:09:41.673168 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/677bd5166676674372e8dde0aec11596-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-1-c-c4549fc0d2\" (UID: \"677bd5166676674372e8dde0aec11596\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:41.673743 kubelet[2695]: I0213 20:09:41.673211 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9e751c01e356f7e42fa77a4055cdcd2e-ca-certs\") pod \"kube-apiserver-ci-4081-3-1-c-c4549fc0d2\" (UID: \"9e751c01e356f7e42fa77a4055cdcd2e\") " pod="kube-system/kube-apiserver-ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:41.673743 kubelet[2695]: I0213 20:09:41.673249 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9e751c01e356f7e42fa77a4055cdcd2e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-1-c-c4549fc0d2\" (UID: \"9e751c01e356f7e42fa77a4055cdcd2e\") " pod="kube-system/kube-apiserver-ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:41.673743 kubelet[2695]: I0213 20:09:41.673287 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e751c01e356f7e42fa77a4055cdcd2e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-1-c-c4549fc0d2\" (UID: \"9e751c01e356f7e42fa77a4055cdcd2e\") " pod="kube-system/kube-apiserver-ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:41.674183 kubelet[2695]: I0213 20:09:41.673372 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/677bd5166676674372e8dde0aec11596-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-1-c-c4549fc0d2\" (UID: \"677bd5166676674372e8dde0aec11596\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:41.674183 kubelet[2695]: I0213 20:09:41.673417 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/677bd5166676674372e8dde0aec11596-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-1-c-c4549fc0d2\" (UID: \"677bd5166676674372e8dde0aec11596\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:41.674183 kubelet[2695]: I0213 20:09:41.673524 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/677bd5166676674372e8dde0aec11596-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-1-c-c4549fc0d2\" (UID: \"677bd5166676674372e8dde0aec11596\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:41.674183 kubelet[2695]: I0213 20:09:41.673673 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f97fcf455b5d33e5424d7c07d958d2bb-kubeconfig\") pod \"kube-scheduler-ci-4081-3-1-c-c4549fc0d2\" (UID: \"f97fcf455b5d33e5424d7c07d958d2bb\") " pod="kube-system/kube-scheduler-ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:42.342994 kubelet[2695]: I0213 20:09:42.342839 2695 apiserver.go:52] "Watching apiserver"
Feb 13 20:09:42.373747 kubelet[2695]: I0213 20:09:42.373667 2695 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 20:09:42.440381 kubelet[2695]: I0213 20:09:42.440309 2695 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:42.457024 kubelet[2695]: E0213 20:09:42.456979 2695 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-1-c-c4549fc0d2\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-1-c-c4549fc0d2"
Feb 13 20:09:42.480872 kubelet[2695]: I0213 20:09:42.480677 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-1-c-c4549fc0d2" podStartSLOduration=1.4806572089999999 podStartE2EDuration="1.480657209s" podCreationTimestamp="2025-02-13 20:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:09:42.467941072 +0000 UTC m=+1.202701978" watchObservedRunningTime="2025-02-13 20:09:42.480657209 +0000 UTC m=+1.215418115"
Feb 13 20:09:42.495007 kubelet[2695]: I0213 20:09:42.494857 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-1-c-c4549fc0d2" podStartSLOduration=1.494834913 podStartE2EDuration="1.494834913s" podCreationTimestamp="2025-02-13 20:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:09:42.482262616 +0000 UTC m=+1.217023522" watchObservedRunningTime="2025-02-13 20:09:42.494834913 +0000 UTC m=+1.229595779"
Feb 13 20:09:42.513660 kubelet[2695]: I0213 20:09:42.513360 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-1-c-c4549fc0d2" podStartSLOduration=1.513302075 podStartE2EDuration="1.513302075s" podCreationTimestamp="2025-02-13 20:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:09:42.495288035 +0000 UTC m=+1.230048941" watchObservedRunningTime="2025-02-13 20:09:42.513302075 +0000 UTC m=+1.248062941"
Feb 13 20:09:46.312954 kubelet[2695]: I0213 20:09:46.312912 2695 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 20:09:46.313785 containerd[1480]: time="2025-02-13T20:09:46.313522288Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 20:09:46.314189 kubelet[2695]: I0213 20:09:46.314162 2695 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 20:09:46.995433 sudo[1867]: pam_unix(sudo:session): session closed for user root
Feb 13 20:09:47.157732 sshd[1864]: pam_unix(sshd:session): session closed for user core
Feb 13 20:09:47.164204 systemd-logind[1456]: Session 7 logged out. Waiting for processes to exit.
Feb 13 20:09:47.165025 systemd[1]: sshd@6-78.47.136.246:22-147.75.109.163:32978.service: Deactivated successfully.
Feb 13 20:09:47.168503 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 20:09:47.168922 systemd[1]: session-7.scope: Consumed 8.070s CPU time, 151.9M memory peak, 0B memory swap peak.
Feb 13 20:09:47.170800 systemd-logind[1456]: Removed session 7.
Feb 13 20:09:47.298564 systemd[1]: Created slice kubepods-besteffort-podb517af98_31d6_43c2_a3dc_03a28d5507af.slice - libcontainer container kubepods-besteffort-podb517af98_31d6_43c2_a3dc_03a28d5507af.slice.
Feb 13 20:09:47.313910 kubelet[2695]: I0213 20:09:47.313852 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b517af98-31d6-43c2-a3dc-03a28d5507af-lib-modules\") pod \"kube-proxy-prr9k\" (UID: \"b517af98-31d6-43c2-a3dc-03a28d5507af\") " pod="kube-system/kube-proxy-prr9k"
Feb 13 20:09:47.313910 kubelet[2695]: I0213 20:09:47.313904 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnwzd\" (UniqueName: \"kubernetes.io/projected/b517af98-31d6-43c2-a3dc-03a28d5507af-kube-api-access-mnwzd\") pod \"kube-proxy-prr9k\" (UID: \"b517af98-31d6-43c2-a3dc-03a28d5507af\") " pod="kube-system/kube-proxy-prr9k"
Feb 13 20:09:47.313910 kubelet[2695]: I0213 20:09:47.313928 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b517af98-31d6-43c2-a3dc-03a28d5507af-xtables-lock\") pod \"kube-proxy-prr9k\" (UID: \"b517af98-31d6-43c2-a3dc-03a28d5507af\") " pod="kube-system/kube-proxy-prr9k"
Feb 13 20:09:47.314413 kubelet[2695]: I0213 20:09:47.313944 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b517af98-31d6-43c2-a3dc-03a28d5507af-kube-proxy\") pod \"kube-proxy-prr9k\" (UID: \"b517af98-31d6-43c2-a3dc-03a28d5507af\") " pod="kube-system/kube-proxy-prr9k"
Feb 13 20:09:47.485802 systemd[1]: Created slice kubepods-besteffort-pod8118c094_f403_49b8_9955_6a4103b99349.slice - libcontainer container kubepods-besteffort-pod8118c094_f403_49b8_9955_6a4103b99349.slice.
Feb 13 20:09:47.515739 kubelet[2695]: I0213 20:09:47.515688 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8118c094-f403-49b8-9955-6a4103b99349-var-lib-calico\") pod \"tigera-operator-7d68577dc5-d52v6\" (UID: \"8118c094-f403-49b8-9955-6a4103b99349\") " pod="tigera-operator/tigera-operator-7d68577dc5-d52v6"
Feb 13 20:09:47.515739 kubelet[2695]: I0213 20:09:47.515743 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq5vq\" (UniqueName: \"kubernetes.io/projected/8118c094-f403-49b8-9955-6a4103b99349-kube-api-access-qq5vq\") pod \"tigera-operator-7d68577dc5-d52v6\" (UID: \"8118c094-f403-49b8-9955-6a4103b99349\") " pod="tigera-operator/tigera-operator-7d68577dc5-d52v6"
Feb 13 20:09:47.608004 containerd[1480]: time="2025-02-13T20:09:47.607857157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-prr9k,Uid:b517af98-31d6-43c2-a3dc-03a28d5507af,Namespace:kube-system,Attempt:0,}"
Feb 13 20:09:47.646581 containerd[1480]: time="2025-02-13T20:09:47.645978020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:09:47.646581 containerd[1480]: time="2025-02-13T20:09:47.646046861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:09:47.646581 containerd[1480]: time="2025-02-13T20:09:47.646058101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:09:47.646581 containerd[1480]: time="2025-02-13T20:09:47.646176861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:09:47.671785 systemd[1]: Started cri-containerd-7ff363265795ebd970b7c960060ea1606801b21312ac1d1b58d63876c0596d23.scope - libcontainer container 7ff363265795ebd970b7c960060ea1606801b21312ac1d1b58d63876c0596d23.
Feb 13 20:09:47.697380 containerd[1480]: time="2025-02-13T20:09:47.697129719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-prr9k,Uid:b517af98-31d6-43c2-a3dc-03a28d5507af,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ff363265795ebd970b7c960060ea1606801b21312ac1d1b58d63876c0596d23\""
Feb 13 20:09:47.701957 containerd[1480]: time="2025-02-13T20:09:47.701806786Z" level=info msg="CreateContainer within sandbox \"7ff363265795ebd970b7c960060ea1606801b21312ac1d1b58d63876c0596d23\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 20:09:47.723783 containerd[1480]: time="2025-02-13T20:09:47.723712834Z" level=info msg="CreateContainer within sandbox \"7ff363265795ebd970b7c960060ea1606801b21312ac1d1b58d63876c0596d23\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f7124d167c5b302ad7fea3eeac14f6f1f2d825e9b303c2ccdb7be05a6e36da41\""
Feb 13 20:09:47.724886 containerd[1480]: time="2025-02-13T20:09:47.724845081Z" level=info msg="StartContainer for \"f7124d167c5b302ad7fea3eeac14f6f1f2d825e9b303c2ccdb7be05a6e36da41\""
Feb 13 20:09:47.760881 systemd[1]: Started cri-containerd-f7124d167c5b302ad7fea3eeac14f6f1f2d825e9b303c2ccdb7be05a6e36da41.scope - libcontainer container f7124d167c5b302ad7fea3eeac14f6f1f2d825e9b303c2ccdb7be05a6e36da41.
Feb 13 20:09:47.793722 containerd[1480]: time="2025-02-13T20:09:47.793677003Z" level=info msg="StartContainer for \"f7124d167c5b302ad7fea3eeac14f6f1f2d825e9b303c2ccdb7be05a6e36da41\" returns successfully"
Feb 13 20:09:47.793888 containerd[1480]: time="2025-02-13T20:09:47.793832964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-d52v6,Uid:8118c094-f403-49b8-9955-6a4103b99349,Namespace:tigera-operator,Attempt:0,}"
Feb 13 20:09:47.824588 containerd[1480]: time="2025-02-13T20:09:47.824052541Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:09:47.824588 containerd[1480]: time="2025-02-13T20:09:47.824116581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:09:47.824588 containerd[1480]: time="2025-02-13T20:09:47.824133101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:09:47.824588 containerd[1480]: time="2025-02-13T20:09:47.824225222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:09:47.844798 systemd[1]: Started cri-containerd-12002a52c927eb5855dfee5bfc80e7142346be6f15b35f945c5d941daf9f4a0f.scope - libcontainer container 12002a52c927eb5855dfee5bfc80e7142346be6f15b35f945c5d941daf9f4a0f.
Feb 13 20:09:47.895369 containerd[1480]: time="2025-02-13T20:09:47.895139756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-d52v6,Uid:8118c094-f403-49b8-9955-6a4103b99349,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"12002a52c927eb5855dfee5bfc80e7142346be6f15b35f945c5d941daf9f4a0f\""
Feb 13 20:09:47.898258 containerd[1480]: time="2025-02-13T20:09:47.898067413Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Feb 13 20:09:48.474615 kubelet[2695]: I0213 20:09:48.474066 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-prr9k" podStartSLOduration=1.474036696 podStartE2EDuration="1.474036696s" podCreationTimestamp="2025-02-13 20:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:09:48.473675214 +0000 UTC m=+7.208436160" watchObservedRunningTime="2025-02-13 20:09:48.474036696 +0000 UTC m=+7.208797642"
Feb 13 20:09:51.620666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1249736489.mount: Deactivated successfully.
Feb 13 20:09:52.101560 containerd[1480]: time="2025-02-13T20:09:52.101479283Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:09:52.103664 containerd[1480]: time="2025-02-13T20:09:52.103253215Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160"
Feb 13 20:09:52.104644 containerd[1480]: time="2025-02-13T20:09:52.104606105Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:09:52.108647 containerd[1480]: time="2025-02-13T20:09:52.108525292Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:09:52.109903 containerd[1480]: time="2025-02-13T20:09:52.109850581Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 4.211731288s"
Feb 13 20:09:52.109903 containerd[1480]: time="2025-02-13T20:09:52.109894062Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\""
Feb 13 20:09:52.114099 containerd[1480]: time="2025-02-13T20:09:52.113820609Z" level=info msg="CreateContainer within sandbox \"12002a52c927eb5855dfee5bfc80e7142346be6f15b35f945c5d941daf9f4a0f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Feb 13 20:09:52.132399 containerd[1480]: time="2025-02-13T20:09:52.132346659Z" level=info msg="CreateContainer within sandbox \"12002a52c927eb5855dfee5bfc80e7142346be6f15b35f945c5d941daf9f4a0f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7fe121f8b762d078d375d10b437e2986c43467342909962745af479378ce24d4\""
Feb 13 20:09:52.134279 containerd[1480]: time="2025-02-13T20:09:52.134241272Z" level=info msg="StartContainer for \"7fe121f8b762d078d375d10b437e2986c43467342909962745af479378ce24d4\""
Feb 13 20:09:52.159778 systemd[1]: Started cri-containerd-7fe121f8b762d078d375d10b437e2986c43467342909962745af479378ce24d4.scope - libcontainer container 7fe121f8b762d078d375d10b437e2986c43467342909962745af479378ce24d4.
Feb 13 20:09:52.189168 containerd[1480]: time="2025-02-13T20:09:52.189022776Z" level=info msg="StartContainer for \"7fe121f8b762d078d375d10b437e2986c43467342909962745af479378ce24d4\" returns successfully"
Feb 13 20:09:53.954225 kubelet[2695]: I0213 20:09:53.954138 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-d52v6" podStartSLOduration=2.740189112 podStartE2EDuration="6.954119895s" podCreationTimestamp="2025-02-13 20:09:47 +0000 UTC" firstStartedPulling="2025-02-13 20:09:47.89751821 +0000 UTC m=+6.632279116" lastFinishedPulling="2025-02-13 20:09:52.111448993 +0000 UTC m=+10.846209899" observedRunningTime="2025-02-13 20:09:52.486218417 +0000 UTC m=+11.220979403" watchObservedRunningTime="2025-02-13 20:09:53.954119895 +0000 UTC m=+12.688880801"
Feb 13 20:09:57.279063 kubelet[2695]: W0213 20:09:57.278728 2695 reflector.go:569] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-4081-3-1-c-c4549fc0d2" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-1-c-c4549fc0d2' and this object
Feb 13 20:09:57.279063 kubelet[2695]: E0213 20:09:57.278785 2695 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ci-4081-3-1-c-c4549fc0d2\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-1-c-c4549fc0d2' and this object" logger="UnhandledError"
Feb 13 20:09:57.279063 kubelet[2695]: I0213 20:09:57.279006 2695 status_manager.go:890] "Failed to get status for pod" podUID="f615c347-0a72-490c-b342-ed90acaf3568" pod="calico-system/calico-typha-776f9f5d-r2f55" err="pods \"calico-typha-776f9f5d-r2f55\" is forbidden: User \"system:node:ci-4081-3-1-c-c4549fc0d2\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-1-c-c4549fc0d2' and this object"
Feb 13 20:09:57.279568 kubelet[2695]: W0213 20:09:57.279214 2695 reflector.go:569] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-4081-3-1-c-c4549fc0d2" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-1-c-c4549fc0d2' and this object
Feb 13 20:09:57.279568 kubelet[2695]: E0213 20:09:57.279238 2695 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:ci-4081-3-1-c-c4549fc0d2\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-1-c-c4549fc0d2' and this object" logger="UnhandledError"
Feb 13 20:09:57.279776 kubelet[2695]: W0213 20:09:57.279759 2695 reflector.go:569] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-1-c-c4549fc0d2" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-1-c-c4549fc0d2' and this object
Feb 13 20:09:57.279809 kubelet[2695]: E0213 20:09:57.279780 2695 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081-3-1-c-c4549fc0d2\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-1-c-c4549fc0d2' and this object" logger="UnhandledError"
Feb 13 20:09:57.284962 systemd[1]: Created slice kubepods-besteffort-podf615c347_0a72_490c_b342_ed90acaf3568.slice - libcontainer container kubepods-besteffort-podf615c347_0a72_490c_b342_ed90acaf3568.slice.
Feb 13 20:09:57.388419 kubelet[2695]: I0213 20:09:57.388312 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f615c347-0a72-490c-b342-ed90acaf3568-typha-certs\") pod \"calico-typha-776f9f5d-r2f55\" (UID: \"f615c347-0a72-490c-b342-ed90acaf3568\") " pod="calico-system/calico-typha-776f9f5d-r2f55"
Feb 13 20:09:57.389060 kubelet[2695]: I0213 20:09:57.388865 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f615c347-0a72-490c-b342-ed90acaf3568-tigera-ca-bundle\") pod \"calico-typha-776f9f5d-r2f55\" (UID: \"f615c347-0a72-490c-b342-ed90acaf3568\") " pod="calico-system/calico-typha-776f9f5d-r2f55"
Feb 13 20:09:57.389060 kubelet[2695]: I0213 20:09:57.388924 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvw9j\" (UniqueName: \"kubernetes.io/projected/f615c347-0a72-490c-b342-ed90acaf3568-kube-api-access-hvw9j\") pod \"calico-typha-776f9f5d-r2f55\" (UID: \"f615c347-0a72-490c-b342-ed90acaf3568\") " pod="calico-system/calico-typha-776f9f5d-r2f55"
Feb 13 20:09:57.559612 systemd[1]: Created slice kubepods-besteffort-pod9ef0a551_86fb_476d_9ffd_2a0897905df4.slice - libcontainer container kubepods-besteffort-pod9ef0a551_86fb_476d_9ffd_2a0897905df4.slice.
Feb 13 20:09:57.592646 kubelet[2695]: I0213 20:09:57.592598 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9ef0a551-86fb-476d-9ffd-2a0897905df4-var-run-calico\") pod \"calico-node-8gmvh\" (UID: \"9ef0a551-86fb-476d-9ffd-2a0897905df4\") " pod="calico-system/calico-node-8gmvh" Feb 13 20:09:57.592777 kubelet[2695]: I0213 20:09:57.592688 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8h6d\" (UniqueName: \"kubernetes.io/projected/9ef0a551-86fb-476d-9ffd-2a0897905df4-kube-api-access-d8h6d\") pod \"calico-node-8gmvh\" (UID: \"9ef0a551-86fb-476d-9ffd-2a0897905df4\") " pod="calico-system/calico-node-8gmvh" Feb 13 20:09:57.592777 kubelet[2695]: I0213 20:09:57.592717 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ef0a551-86fb-476d-9ffd-2a0897905df4-xtables-lock\") pod \"calico-node-8gmvh\" (UID: \"9ef0a551-86fb-476d-9ffd-2a0897905df4\") " pod="calico-system/calico-node-8gmvh" Feb 13 20:09:57.592777 kubelet[2695]: I0213 20:09:57.592759 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9ef0a551-86fb-476d-9ffd-2a0897905df4-policysync\") pod \"calico-node-8gmvh\" (UID: \"9ef0a551-86fb-476d-9ffd-2a0897905df4\") " pod="calico-system/calico-node-8gmvh" Feb 13 20:09:57.592877 kubelet[2695]: I0213 20:09:57.592778 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ef0a551-86fb-476d-9ffd-2a0897905df4-tigera-ca-bundle\") pod \"calico-node-8gmvh\" (UID: \"9ef0a551-86fb-476d-9ffd-2a0897905df4\") " pod="calico-system/calico-node-8gmvh" Feb 13 20:09:57.592877 kubelet[2695]: I0213 20:09:57.592816 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ef0a551-86fb-476d-9ffd-2a0897905df4-lib-modules\") pod \"calico-node-8gmvh\" (UID: \"9ef0a551-86fb-476d-9ffd-2a0897905df4\") " pod="calico-system/calico-node-8gmvh" Feb 13 20:09:57.592877 kubelet[2695]: I0213 20:09:57.592834 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9ef0a551-86fb-476d-9ffd-2a0897905df4-cni-net-dir\") pod \"calico-node-8gmvh\" (UID: \"9ef0a551-86fb-476d-9ffd-2a0897905df4\") " pod="calico-system/calico-node-8gmvh" Feb 13 20:09:57.592946 kubelet[2695]: I0213 20:09:57.592854 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9ef0a551-86fb-476d-9ffd-2a0897905df4-cni-log-dir\") pod \"calico-node-8gmvh\" (UID: \"9ef0a551-86fb-476d-9ffd-2a0897905df4\") " pod="calico-system/calico-node-8gmvh" Feb 13 20:09:57.592946 kubelet[2695]: I0213 20:09:57.592909 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9ef0a551-86fb-476d-9ffd-2a0897905df4-var-lib-calico\") pod \"calico-node-8gmvh\" (UID: \"9ef0a551-86fb-476d-9ffd-2a0897905df4\") " pod="calico-system/calico-node-8gmvh" Feb 13 20:09:57.592946 kubelet[2695]: I0213 20:09:57.592924 2695 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9ef0a551-86fb-476d-9ffd-2a0897905df4-cni-bin-dir\") pod \"calico-node-8gmvh\" (UID: \"9ef0a551-86fb-476d-9ffd-2a0897905df4\") " pod="calico-system/calico-node-8gmvh" Feb 13 20:09:57.593004 kubelet[2695]: I0213 20:09:57.592965 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9ef0a551-86fb-476d-9ffd-2a0897905df4-flexvol-driver-host\") pod \"calico-node-8gmvh\" (UID: \"9ef0a551-86fb-476d-9ffd-2a0897905df4\") " pod="calico-system/calico-node-8gmvh" Feb 13 20:09:57.593004 kubelet[2695]: I0213 20:09:57.592982 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9ef0a551-86fb-476d-9ffd-2a0897905df4-node-certs\") pod \"calico-node-8gmvh\" (UID: \"9ef0a551-86fb-476d-9ffd-2a0897905df4\") " pod="calico-system/calico-node-8gmvh" Feb 13 20:09:57.696191 kubelet[2695]: E0213 20:09:57.696157 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:57.696596 kubelet[2695]: W0213 20:09:57.696417 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:57.696596 kubelet[2695]: E0213 20:09:57.696473 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:57.697235 kubelet[2695]: E0213 20:09:57.696838 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:57.697235 kubelet[2695]: W0213 20:09:57.696852 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:57.697235 kubelet[2695]: E0213 20:09:57.696867 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:57.697624 kubelet[2695]: E0213 20:09:57.697608 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:57.699723 kubelet[2695]: W0213 20:09:57.699695 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:57.699834 kubelet[2695]: E0213 20:09:57.699817 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:09:57.700227 kubelet[2695]: E0213 20:09:57.700210 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:57.703192 kubelet[2695]: W0213 20:09:57.703159 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:57.703654 kubelet[2695]: E0213 20:09:57.703501 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:57.704669 kubelet[2695]: E0213 20:09:57.704239 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:57.704669 kubelet[2695]: W0213 20:09:57.704263 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:57.704669 kubelet[2695]: E0213 20:09:57.704432 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:57.706139 kubelet[2695]: E0213 20:09:57.705983 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:57.706139 kubelet[2695]: W0213 20:09:57.706001 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:57.706800 kubelet[2695]: E0213 20:09:57.706280 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:09:57.706800 kubelet[2695]: E0213 20:09:57.707320 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:57.706800 kubelet[2695]: W0213 20:09:57.707338 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:57.708128 kubelet[2695]: E0213 20:09:57.708038 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:57.708128 kubelet[2695]: W0213 20:09:57.708061 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:57.708320 kubelet[2695]: E0213 20:09:57.708255 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:57.708320 kubelet[2695]: W0213 20:09:57.708265 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:57.708320 kubelet[2695]: E0213 20:09:57.708280 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:57.711033 kubelet[2695]: E0213 20:09:57.710912 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:57.711033 kubelet[2695]: E0213 20:09:57.710993 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:57.711713 kubelet[2695]: E0213 20:09:57.711683 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:57.711713 kubelet[2695]: W0213 20:09:57.711708 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:57.711799 kubelet[2695]: E0213 20:09:57.711735 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:57.713206 kubelet[2695]: E0213 20:09:57.713165 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:57.713206 kubelet[2695]: W0213 20:09:57.713193 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:57.713334 kubelet[2695]: E0213 20:09:57.713217 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:09:57.714276 kubelet[2695]: E0213 20:09:57.713876 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:57.714276 kubelet[2695]: W0213 20:09:57.713896 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:57.714276 kubelet[2695]: E0213 20:09:57.713911 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:57.715772 kubelet[2695]: E0213 20:09:57.715114 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:57.715772 kubelet[2695]: W0213 20:09:57.715133 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:57.715772 kubelet[2695]: E0213 20:09:57.715158 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:57.761634 kubelet[2695]: E0213 20:09:57.761492 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dfbrs" podUID="e49c3cb5-faf1-41f3-bccd-16c39f19a201" Feb 13 20:09:57.774715 kubelet[2695]: E0213 20:09:57.774679 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:57.774715 kubelet[2695]: W0213 20:09:57.774704 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:57.775017 kubelet[2695]: E0213 20:09:57.774728 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:57.776642 kubelet[2695]: E0213 20:09:57.775472 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:57.776747 kubelet[2695]: W0213 20:09:57.776634 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:57.776747 kubelet[2695]: E0213 20:09:57.776710 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Feb 13 20:09:57.798241 kubelet[2695]: I0213 20:09:57.797680 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e49c3cb5-faf1-41f3-bccd-16c39f19a201-registration-dir\") pod \"csi-node-driver-dfbrs\" (UID: \"e49c3cb5-faf1-41f3-bccd-16c39f19a201\") " pod="calico-system/csi-node-driver-dfbrs" Feb 13 20:09:57.800669 kubelet[2695]: I0213 20:09:57.800644 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tctzt\" (UniqueName: \"kubernetes.io/projected/e49c3cb5-faf1-41f3-bccd-16c39f19a201-kube-api-access-tctzt\") pod \"csi-node-driver-dfbrs\" (UID: \"e49c3cb5-faf1-41f3-bccd-16c39f19a201\") " pod="calico-system/csi-node-driver-dfbrs" Feb 13 20:09:57.801332 kubelet[2695]: I0213 20:09:57.801190 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e49c3cb5-faf1-41f3-bccd-16c39f19a201-varrun\") pod \"csi-node-driver-dfbrs\" (UID: \"e49c3cb5-faf1-41f3-bccd-16c39f19a201\") " pod="calico-system/csi-node-driver-dfbrs" Feb 13 20:09:57.801619 kubelet[2695]: I0213 20:09:57.801491 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e49c3cb5-faf1-41f3-bccd-16c39f19a201-kubelet-dir\") pod \"csi-node-driver-dfbrs\" (UID: \"e49c3cb5-faf1-41f3-bccd-16c39f19a201\") " pod="calico-system/csi-node-driver-dfbrs" Feb 13 20:09:57.804323 kubelet[2695]: I0213 20:09:57.804233 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e49c3cb5-faf1-41f3-bccd-16c39f19a201-socket-dir\") pod \"csi-node-driver-dfbrs\" (UID: \"e49c3cb5-faf1-41f3-bccd-16c39f19a201\") " pod="calico-system/csi-node-driver-dfbrs"
Feb 13 20:09:58.491695 kubelet[2695]: E0213 20:09:58.490790 2695 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 13 20:09:58.491695 kubelet[2695]: E0213 20:09:58.490904 2695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f615c347-0a72-490c-b342-ed90acaf3568-tigera-ca-bundle podName:f615c347-0a72-490c-b342-ed90acaf3568 nodeName:}" failed. No retries permitted until 2025-02-13 20:09:58.990878218 +0000 UTC m=+17.725639124 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/f615c347-0a72-490c-b342-ed90acaf3568-tigera-ca-bundle") pod "calico-typha-776f9f5d-r2f55" (UID: "f615c347-0a72-490c-b342-ed90acaf3568") : failed to sync configmap cache: timed out waiting for the condition Feb 13 20:09:58.492871 kubelet[2695]: E0213 20:09:58.492176 2695 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Feb 13 20:09:58.492871 kubelet[2695]: E0213 20:09:58.492244 2695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f615c347-0a72-490c-b342-ed90acaf3568-typha-certs podName:f615c347-0a72-490c-b342-ed90acaf3568 nodeName:}" failed. No retries permitted until 2025-02-13 20:09:58.992227109 +0000 UTC m=+17.726987975 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/f615c347-0a72-490c-b342-ed90acaf3568-typha-certs") pod "calico-typha-776f9f5d-r2f55" (UID: "f615c347-0a72-490c-b342-ed90acaf3568") : failed to sync secret cache: timed out waiting for the condition Feb 13 20:09:58.500138 kubelet[2695]: E0213 20:09:58.499960 2695 projected.go:288] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 13 20:09:58.500138 kubelet[2695]: E0213 20:09:58.500124 2695 projected.go:194] Error preparing data for projected volume kube-api-access-hvw9j for pod calico-system/calico-typha-776f9f5d-r2f55: failed to sync configmap cache: timed out waiting for the condition Feb 13 20:09:58.500306 kubelet[2695]: E0213 20:09:58.500236 2695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f615c347-0a72-490c-b342-ed90acaf3568-kube-api-access-hvw9j podName:f615c347-0a72-490c-b342-ed90acaf3568 nodeName:}" failed. No retries permitted until 2025-02-13 20:09:59.000212934 +0000 UTC m=+17.734973840 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hvw9j" (UniqueName: "kubernetes.io/projected/f615c347-0a72-490c-b342-ed90acaf3568-kube-api-access-hvw9j") pod "calico-typha-776f9f5d-r2f55" (UID: "f615c347-0a72-490c-b342-ed90acaf3568") : failed to sync configmap cache: timed out waiting for the condition
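Note: these MountVolume.SetUp failures are a startup race, not a persistent fault: the kubelet's configmap/secret informer caches have not finished syncing (m=+17 puts this about 17 seconds after kubelet start), so each failed mount operation is parked and retried with a growing delay, starting at the 500ms shown in "durationBeforeRetry". A small Go sketch of that style of exponential backoff (the doubling factor and cap are illustrative assumptions, not the kubelet's exact constants):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	delay := 500 * time.Millisecond  // initial backoff seen in the log
    	const maxDelay = 2 * time.Minute // assumed cap, for illustration
    	for attempt := 1; attempt <= 5; attempt++ {
    		fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, delay)
    		delay *= 2 // back off exponentially after each failure
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }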
Feb 13 20:09:58.768178 containerd[1480]: time="2025-02-13T20:09:58.767114754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8gmvh,Uid:9ef0a551-86fb-476d-9ffd-2a0897905df4,Namespace:calico-system,Attempt:0,}" Feb 13 20:09:58.810145 containerd[1480]: time="2025-02-13T20:09:58.809219938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:09:58.810145 containerd[1480]: time="2025-02-13T20:09:58.809934184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:09:58.810145 containerd[1480]: time="2025-02-13T20:09:58.809951504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:58.813725 containerd[1480]: time="2025-02-13T20:09:58.810072905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:58.831921 systemd[1]: run-containerd-runc-k8s.io-a58b941953c6005d059db410a8b2ea1b15c4dad001ee352b85cba28b27b7de25-runc.tLWEwt.mount: Deactivated successfully. Feb 13 20:09:58.839949 systemd[1]: Started cri-containerd-a58b941953c6005d059db410a8b2ea1b15c4dad001ee352b85cba28b27b7de25.scope - libcontainer container a58b941953c6005d059db410a8b2ea1b15c4dad001ee352b85cba28b27b7de25. Feb 13 20:09:58.883761 containerd[1480]: time="2025-02-13T20:09:58.883718947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8gmvh,Uid:9ef0a551-86fb-476d-9ffd-2a0897905df4,Namespace:calico-system,Attempt:0,} returns sandbox id \"a58b941953c6005d059db410a8b2ea1b15c4dad001ee352b85cba28b27b7de25\"" Feb 13 20:09:58.886864 containerd[1480]: time="2025-02-13T20:09:58.886810452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
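Note: the containerd entries above trace the CRI flow for the calico-node pod: the kubelet asks the runtime to create a pod sandbox, the runc v2 shim loads its plugins, systemd places the container in a cri-containerd-<id>.scope unit, the sandbox ID is returned, and the first image pull begins. A condensed Go sketch of the two CRI calls involved, using the k8s.io/cri-api gRPC client (the socket path is an assumption and error handling is trimmed; this illustrates the protocol, it is not kubelet code):

    package main

    import (
    	"context"
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Assumed containerd CRI socket path; adjust for the host.
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	img := runtimeapi.NewImageServiceClient(conn)
    	ctx := context.Background()

    	// RunPodSandbox: mirrors the PodSandboxMetadata fields in the log entry.
    	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
    		Config: &runtimeapi.PodSandboxConfig{
    			Metadata: &runtimeapi.PodSandboxMetadata{
    				Name:      "calico-node-8gmvh",
    				Uid:       "9ef0a551-86fb-476d-9ffd-2a0897905df4",
    				Namespace: "calico-system",
    				Attempt:   0,
    			},
    		},
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Println("sandbox id:", sb.PodSandboxId)

    	// PullImage: the image the kubelet requests next in the log.
    	_, err = img.PullImage(ctx, &runtimeapi.PullImageRequest{
    		Image: &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1"},
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    }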
Feb 13 20:09:59.055576 kubelet[2695]: E0213 20:09:59.055013 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:59.055841 kubelet[2695]: W0213 20:09:59.055019 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:59.055841 kubelet[2695]: E0213 20:09:59.055027 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:09:59.061916 kubelet[2695]: E0213 20:09:59.060781 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:59.061916 kubelet[2695]: W0213 20:09:59.061765 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:59.061916 kubelet[2695]: E0213 20:09:59.061797 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:59.064375 kubelet[2695]: E0213 20:09:59.064337 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:59.064375 kubelet[2695]: W0213 20:09:59.064365 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:59.064582 kubelet[2695]: E0213 20:09:59.064385 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:59.092183 containerd[1480]: time="2025-02-13T20:09:59.091796422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-776f9f5d-r2f55,Uid:f615c347-0a72-490c-b342-ed90acaf3568,Namespace:calico-system,Attempt:0,}" Feb 13 20:09:59.115496 containerd[1480]: time="2025-02-13T20:09:59.115172977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:09:59.115496 containerd[1480]: time="2025-02-13T20:09:59.115255378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:09:59.115496 containerd[1480]: time="2025-02-13T20:09:59.115279658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:59.115496 containerd[1480]: time="2025-02-13T20:09:59.115401059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:59.138855 systemd[1]: Started cri-containerd-10ac8154aca2946e728416461820cc63143a5aef7de93a51751f32052b1dea42.scope - libcontainer container 10ac8154aca2946e728416461820cc63143a5aef7de93a51751f32052b1dea42. Feb 13 20:09:59.175442 containerd[1480]: time="2025-02-13T20:09:59.175226998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-776f9f5d-r2f55,Uid:f615c347-0a72-490c-b342-ed90acaf3568,Namespace:calico-system,Attempt:0,} returns sandbox id \"10ac8154aca2946e728416461820cc63143a5aef7de93a51751f32052b1dea42\"" Feb 13 20:09:59.397946 kubelet[2695]: E0213 20:09:59.397676 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dfbrs" podUID="e49c3cb5-faf1-41f3-bccd-16c39f19a201" Feb 13 20:10:00.586400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2134085356.mount: Deactivated successfully. 
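The repeated driver-call.go / plugins.go triplet above is one failure reported three ways: the FlexVolume prober execs the driver binary and JSON-decodes its stdout, and a missing binary yields empty output, which the decoder rejects as "unexpected end of JSON input". A minimal sketch of that mechanism (not kubelet's actual source; the driverStatus shape is an assumption):

// Sketch only: exec the FlexVolume driver and decode its stdout,
// reproducing the error seen above when the binary does not exist.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus stands in for the driver's JSON reply; the real field
// set is assumed here for illustration.
type driverStatus struct {
	Status string `json:"status"`
}

func main() {
	// Path taken from the log; on this node the file does not exist.
	out, execErr := exec.Command(
		"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
		"init",
	).Output()

	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// out is empty, so this prints: unexpected end of JSON input
		fmt.Printf("%v (exec error: %v)\n", err, execErr)
	}
}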
Feb 13 20:10:00.690936 containerd[1480]: time="2025-02-13T20:10:00.690033112Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:10:00.691832 containerd[1480]: time="2025-02-13T20:10:00.691794207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603"
Feb 13 20:10:00.694071 containerd[1480]: time="2025-02-13T20:10:00.694006066Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:10:00.697291 containerd[1480]: time="2025-02-13T20:10:00.697237574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:10:00.698348 containerd[1480]: time="2025-02-13T20:10:00.698196422Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.811232929s"
Feb 13 20:10:00.698348 containerd[1480]: time="2025-02-13T20:10:00.698244182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\""
Feb 13 20:10:00.700024 containerd[1480]: time="2025-02-13T20:10:00.699864316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Feb 13 20:10:00.701904 containerd[1480]: time="2025-02-13T20:10:00.701808813Z" level=info msg="CreateContainer within sandbox \"a58b941953c6005d059db410a8b2ea1b15c4dad001ee352b85cba28b27b7de25\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Feb 13 20:10:00.722589 containerd[1480]: time="2025-02-13T20:10:00.722510709Z" level=info msg="CreateContainer within sandbox \"a58b941953c6005d059db410a8b2ea1b15c4dad001ee352b85cba28b27b7de25\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4f3965305757da543129f715dd4b08ad6b949a84c644c4fa38ea8581de7c5209\""
Feb 13 20:10:00.724869 containerd[1480]: time="2025-02-13T20:10:00.723417317Z" level=info msg="StartContainer for \"4f3965305757da543129f715dd4b08ad6b949a84c644c4fa38ea8581de7c5209\""
Feb 13 20:10:00.762772 systemd[1]: Started cri-containerd-4f3965305757da543129f715dd4b08ad6b949a84c644c4fa38ea8581de7c5209.scope - libcontainer container 4f3965305757da543129f715dd4b08ad6b949a84c644c4fa38ea8581de7c5209.
Feb 13 20:10:00.816216 containerd[1480]: time="2025-02-13T20:10:00.816163346Z" level=info msg="StartContainer for \"4f3965305757da543129f715dd4b08ad6b949a84c644c4fa38ea8581de7c5209\" returns successfully"
Feb 13 20:10:00.834508 systemd[1]: cri-containerd-4f3965305757da543129f715dd4b08ad6b949a84c644c4fa38ea8581de7c5209.scope: Deactivated successfully.
Feb 13 20:10:00.875130 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f3965305757da543129f715dd4b08ad6b949a84c644c4fa38ea8581de7c5209-rootfs.mount: Deactivated successfully.
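The "in 1.811232929s" figure in the Pulled entry above can be cross-checked against the two containerd timestamps (PullImage at 20:09:58.886810452Z, Pulled logged at 20:10:00.698196422Z): the difference lands within about 150µs of the reported duration, the small gap being the time between emitting the PullImage line and the pull timer starting. A small sketch of the arithmetic:

// Sketch: difference of the two log timestamps vs. the reported pull time.
package main

import (
	"fmt"
	"time"
)

func main() {
	start, _ := time.Parse(time.RFC3339Nano, "2025-02-13T20:09:58.886810452Z")
	end, _ := time.Parse(time.RFC3339Nano, "2025-02-13T20:10:00.698196422Z")
	fmt.Println(end.Sub(start)) // 1.81138597s, vs. reported 1.811232929s
}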
Feb 13 20:10:00.981215 containerd[1480]: time="2025-02-13T20:10:00.981120310Z" level=info msg="shim disconnected" id=4f3965305757da543129f715dd4b08ad6b949a84c644c4fa38ea8581de7c5209 namespace=k8s.io
Feb 13 20:10:00.981215 containerd[1480]: time="2025-02-13T20:10:00.981207831Z" level=warning msg="cleaning up after shim disconnected" id=4f3965305757da543129f715dd4b08ad6b949a84c644c4fa38ea8581de7c5209 namespace=k8s.io
Feb 13 20:10:00.981602 containerd[1480]: time="2025-02-13T20:10:00.981226831Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:10:01.399792 kubelet[2695]: E0213 20:10:01.398071 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dfbrs" podUID="e49c3cb5-faf1-41f3-bccd-16c39f19a201"
Feb 13 20:10:03.398051 kubelet[2695]: E0213 20:10:03.398004 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dfbrs" podUID="e49c3cb5-faf1-41f3-bccd-16c39f19a201"
Feb 13 20:10:03.446330 containerd[1480]: time="2025-02-13T20:10:03.446239387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:10:03.447582 containerd[1480]: time="2025-02-13T20:10:03.447490798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=27861516"
Feb 13 20:10:03.449381 containerd[1480]: time="2025-02-13T20:10:03.449060172Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:10:03.452267 containerd[1480]: time="2025-02-13T20:10:03.452200481Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:10:03.453474 containerd[1480]: time="2025-02-13T20:10:03.453416612Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.753502655s"
Feb 13 20:10:03.453474 containerd[1480]: time="2025-02-13T20:10:03.453475012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\""
Feb 13 20:10:03.454834 containerd[1480]: time="2025-02-13T20:10:03.454801144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Feb 13 20:10:03.472709 containerd[1480]: time="2025-02-13T20:10:03.472663664Z" level=info msg="CreateContainer within sandbox \"10ac8154aca2946e728416461820cc63143a5aef7de93a51751f32052b1dea42\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Feb 13 20:10:03.491082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2729662147.mount: Deactivated successfully.
Feb 13 20:10:03.495430 containerd[1480]: time="2025-02-13T20:10:03.495232387Z" level=info msg="CreateContainer within sandbox \"10ac8154aca2946e728416461820cc63143a5aef7de93a51751f32052b1dea42\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"71608d7ae76ad6dcf1559ab3f2450dbc373ea0bcc28bb296ebdc8b4d46acf507\""
Feb 13 20:10:03.497031 containerd[1480]: time="2025-02-13T20:10:03.496964203Z" level=info msg="StartContainer for \"71608d7ae76ad6dcf1559ab3f2450dbc373ea0bcc28bb296ebdc8b4d46acf507\""
Feb 13 20:10:03.536825 systemd[1]: Started cri-containerd-71608d7ae76ad6dcf1559ab3f2450dbc373ea0bcc28bb296ebdc8b4d46acf507.scope - libcontainer container 71608d7ae76ad6dcf1559ab3f2450dbc373ea0bcc28bb296ebdc8b4d46acf507.
Feb 13 20:10:03.579974 containerd[1480]: time="2025-02-13T20:10:03.579906348Z" level=info msg="StartContainer for \"71608d7ae76ad6dcf1559ab3f2450dbc373ea0bcc28bb296ebdc8b4d46acf507\" returns successfully"
Feb 13 20:10:05.399158 kubelet[2695]: E0213 20:10:05.398520 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dfbrs" podUID="e49c3cb5-faf1-41f3-bccd-16c39f19a201"
Feb 13 20:10:05.513698 kubelet[2695]: I0213 20:10:05.513155 2695 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 20:10:07.399638 kubelet[2695]: E0213 20:10:07.398513 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dfbrs" podUID="e49c3cb5-faf1-41f3-bccd-16c39f19a201"
Feb 13 20:10:08.735737 containerd[1480]: time="2025-02-13T20:10:08.735529344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:10:08.737412 containerd[1480]: time="2025-02-13T20:10:08.737139440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123"
Feb 13 20:10:08.739423 containerd[1480]: time="2025-02-13T20:10:08.739024498Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:10:08.743574 containerd[1480]: time="2025-02-13T20:10:08.743478301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:10:08.744526 containerd[1480]: time="2025-02-13T20:10:08.744328069Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 5.288668677s"
Feb 13 20:10:08.744526 containerd[1480]: time="2025-02-13T20:10:08.744375190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\""
Feb 13 20:10:08.748577 containerd[1480]: time="2025-02-13T20:10:08.748329268Z" level=info msg="CreateContainer within sandbox \"a58b941953c6005d059db410a8b2ea1b15c4dad001ee352b85cba28b27b7de25\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 20:10:08.766777 containerd[1480]: time="2025-02-13T20:10:08.766702006Z" level=info msg="CreateContainer within sandbox \"a58b941953c6005d059db410a8b2ea1b15c4dad001ee352b85cba28b27b7de25\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"97a4b3e6137ec30b38328cb491cd6cd61e344125c6425556e4b9683356ef9673\""
Feb 13 20:10:08.771106 containerd[1480]: time="2025-02-13T20:10:08.770217320Z" level=info msg="StartContainer for \"97a4b3e6137ec30b38328cb491cd6cd61e344125c6425556e4b9683356ef9673\""
Feb 13 20:10:08.813697 systemd[1]: run-containerd-runc-k8s.io-97a4b3e6137ec30b38328cb491cd6cd61e344125c6425556e4b9683356ef9673-runc.nwdXap.mount: Deactivated successfully.
Feb 13 20:10:08.825919 systemd[1]: Started cri-containerd-97a4b3e6137ec30b38328cb491cd6cd61e344125c6425556e4b9683356ef9673.scope - libcontainer container 97a4b3e6137ec30b38328cb491cd6cd61e344125c6425556e4b9683356ef9673.
Feb 13 20:10:08.867165 containerd[1480]: time="2025-02-13T20:10:08.866970577Z" level=info msg="StartContainer for \"97a4b3e6137ec30b38328cb491cd6cd61e344125c6425556e4b9683356ef9673\" returns successfully"
Feb 13 20:10:09.368845 containerd[1480]: time="2025-02-13T20:10:09.368788882Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 20:10:09.371888 systemd[1]: cri-containerd-97a4b3e6137ec30b38328cb491cd6cd61e344125c6425556e4b9683356ef9673.scope: Deactivated successfully.
Feb 13 20:10:09.399191 kubelet[2695]: E0213 20:10:09.399105 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dfbrs" podUID="e49c3cb5-faf1-41f3-bccd-16c39f19a201"
Feb 13 20:10:09.471163 kubelet[2695]: I0213 20:10:09.470965 2695 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Feb 13 20:10:09.512002 kubelet[2695]: I0213 20:10:09.511828 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-776f9f5d-r2f55" podStartSLOduration=8.23358751 podStartE2EDuration="12.510879276s" podCreationTimestamp="2025-02-13 20:09:57 +0000 UTC" firstStartedPulling="2025-02-13 20:09:59.177308336 +0000 UTC m=+17.912069242" lastFinishedPulling="2025-02-13 20:10:03.454600102 +0000 UTC m=+22.189361008" observedRunningTime="2025-02-13 20:10:04.531049693 +0000 UTC m=+23.265810599" watchObservedRunningTime="2025-02-13 20:10:09.510879276 +0000 UTC m=+28.245640142"
Feb 13 20:10:09.537992 systemd[1]: Created slice kubepods-burstable-podc052e617_ee7a_4d95_8541_47323a0ca995.slice - libcontainer container kubepods-burstable-podc052e617_ee7a_4d95_8541_47323a0ca995.slice.
Feb 13 20:10:09.550071 systemd[1]: Created slice kubepods-besteffort-podec425091_4bab_4f95_b458_e02d7376e8e9.slice - libcontainer container kubepods-besteffort-podec425091_4bab_4f95_b458_e02d7376e8e9.slice.
Feb 13 20:10:09.562491 systemd[1]: Created slice kubepods-besteffort-podd22c4ac8_b3bd_4eb9_80af_676921861f03.slice - libcontainer container kubepods-besteffort-podd22c4ac8_b3bd_4eb9_80af_676921861f03.slice.
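The "failed to reload cni configuration after receiving fs change event" entry above shows containerd noticing a write in /etc/cni/net.d (install-cni dropped calico-kubeconfig first) and rescanning for network configs before any *.conflist exists. A rough sketch of that watch-and-rescan pattern, assuming an inotify-style watcher (github.com/fsnotify/fsnotify here) and a *.conflist glob; this is an illustration, not containerd's actual reload code:

// Sketch: watch the CNI config dir and rescan on writes; a write to a
// non-config file (like calico-kubeconfig) still triggers a rescan that
// finds no network config.
package main

import (
	"fmt"
	"log"
	"path/filepath"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for ev := range w.Events {
		if ev.Op&fsnotify.Write == 0 {
			continue
		}
		confs, _ := filepath.Glob("/etc/cni/net.d/*.conflist")
		if len(confs) == 0 {
			fmt.Printf("fs change event(WRITE %q): no network config found\n", ev.Name)
			continue
		}
		fmt.Printf("reloaded CNI config: %v\n", confs)
	}
}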
Feb 13 20:10:09.573059 containerd[1480]: time="2025-02-13T20:10:09.571824034Z" level=info msg="shim disconnected" id=97a4b3e6137ec30b38328cb491cd6cd61e344125c6425556e4b9683356ef9673 namespace=k8s.io
Feb 13 20:10:09.573059 containerd[1480]: time="2025-02-13T20:10:09.571881234Z" level=warning msg="cleaning up after shim disconnected" id=97a4b3e6137ec30b38328cb491cd6cd61e344125c6425556e4b9683356ef9673 namespace=k8s.io
Feb 13 20:10:09.573059 containerd[1480]: time="2025-02-13T20:10:09.571891514Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:10:09.576096 systemd[1]: Created slice kubepods-besteffort-pod8ccd9614_bdb8_4f2a_8ecd_86b9a3d3d437.slice - libcontainer container kubepods-besteffort-pod8ccd9614_bdb8_4f2a_8ecd_86b9a3d3d437.slice.
Feb 13 20:10:09.592802 systemd[1]: Created slice kubepods-burstable-podecd3ea8a_4017_49b0_914a_222a63032a3d.slice - libcontainer container kubepods-burstable-podecd3ea8a_4017_49b0_914a_222a63032a3d.slice.
Feb 13 20:10:09.635651 kubelet[2695]: I0213 20:10:09.634933 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ssxn\" (UniqueName: \"kubernetes.io/projected/8ccd9614-bdb8-4f2a-8ecd-86b9a3d3d437-kube-api-access-5ssxn\") pod \"calico-kube-controllers-5d6457cb66-sszpn\" (UID: \"8ccd9614-bdb8-4f2a-8ecd-86b9a3d3d437\") " pod="calico-system/calico-kube-controllers-5d6457cb66-sszpn"
Feb 13 20:10:09.635651 kubelet[2695]: I0213 20:10:09.634982 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxd7g\" (UniqueName: \"kubernetes.io/projected/d22c4ac8-b3bd-4eb9-80af-676921861f03-kube-api-access-hxd7g\") pod \"calico-apiserver-559fcc6975-znb7c\" (UID: \"d22c4ac8-b3bd-4eb9-80af-676921861f03\") " pod="calico-apiserver/calico-apiserver-559fcc6975-znb7c"
Feb 13 20:10:09.635651 kubelet[2695]: I0213 20:10:09.635002 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ec425091-4bab-4f95-b458-e02d7376e8e9-calico-apiserver-certs\") pod \"calico-apiserver-559fcc6975-jdpl4\" (UID: \"ec425091-4bab-4f95-b458-e02d7376e8e9\") " pod="calico-apiserver/calico-apiserver-559fcc6975-jdpl4"
Feb 13 20:10:09.635651 kubelet[2695]: I0213 20:10:09.635019 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqtfr\" (UniqueName: \"kubernetes.io/projected/ecd3ea8a-4017-49b0-914a-222a63032a3d-kube-api-access-bqtfr\") pod \"coredns-668d6bf9bc-zd9bb\" (UID: \"ecd3ea8a-4017-49b0-914a-222a63032a3d\") " pod="kube-system/coredns-668d6bf9bc-zd9bb"
Feb 13 20:10:09.635651 kubelet[2695]: I0213 20:10:09.635051 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5txg\" (UniqueName: \"kubernetes.io/projected/c052e617-ee7a-4d95-8541-47323a0ca995-kube-api-access-q5txg\") pod \"coredns-668d6bf9bc-zppg6\" (UID: \"c052e617-ee7a-4d95-8541-47323a0ca995\") " pod="kube-system/coredns-668d6bf9bc-zppg6"
Feb 13 20:10:09.635910 kubelet[2695]: I0213 20:10:09.635072 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecd3ea8a-4017-49b0-914a-222a63032a3d-config-volume\") pod \"coredns-668d6bf9bc-zd9bb\" (UID: \"ecd3ea8a-4017-49b0-914a-222a63032a3d\") " pod="kube-system/coredns-668d6bf9bc-zd9bb"
Feb 13 20:10:09.635910 kubelet[2695]: I0213 20:10:09.635091 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qt92\" (UniqueName: \"kubernetes.io/projected/ec425091-4bab-4f95-b458-e02d7376e8e9-kube-api-access-9qt92\") pod \"calico-apiserver-559fcc6975-jdpl4\" (UID: \"ec425091-4bab-4f95-b458-e02d7376e8e9\") " pod="calico-apiserver/calico-apiserver-559fcc6975-jdpl4"
Feb 13 20:10:09.635910 kubelet[2695]: I0213 20:10:09.635113 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d22c4ac8-b3bd-4eb9-80af-676921861f03-calico-apiserver-certs\") pod \"calico-apiserver-559fcc6975-znb7c\" (UID: \"d22c4ac8-b3bd-4eb9-80af-676921861f03\") " pod="calico-apiserver/calico-apiserver-559fcc6975-znb7c"
Feb 13 20:10:09.635910 kubelet[2695]: I0213 20:10:09.635129 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ccd9614-bdb8-4f2a-8ecd-86b9a3d3d437-tigera-ca-bundle\") pod \"calico-kube-controllers-5d6457cb66-sszpn\" (UID: \"8ccd9614-bdb8-4f2a-8ecd-86b9a3d3d437\") " pod="calico-system/calico-kube-controllers-5d6457cb66-sszpn"
Feb 13 20:10:09.635910 kubelet[2695]: I0213 20:10:09.635159 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c052e617-ee7a-4d95-8541-47323a0ca995-config-volume\") pod \"coredns-668d6bf9bc-zppg6\" (UID: \"c052e617-ee7a-4d95-8541-47323a0ca995\") " pod="kube-system/coredns-668d6bf9bc-zppg6"
Feb 13 20:10:09.775847 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97a4b3e6137ec30b38328cb491cd6cd61e344125c6425556e4b9683356ef9673-rootfs.mount: Deactivated successfully.
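With volumes attached, kubelet next asks containerd to create the pod sandboxes; each RunPodSandbox entry below corresponds to one CRI RuntimeService.RunPodSandbox RPC. A minimal client-side sketch of that call (not kubelet's code), assuming containerd's default socket path and the k8s.io/cri-api v1 types, with metadata copied from the coredns-668d6bf9bc-zppg6 entry below:

// Sketch: issue RunPodSandbox directly against containerd's CRI endpoint.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "coredns-668d6bf9bc-zppg6",
				Uid:       "c052e617-ee7a-4d95-8541-47323a0ca995",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		// With no CNI config installed this fails much like the log below:
		// rpc error: code = Unknown desc = failed to setup network for sandbox ...
		log.Fatal(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId)
}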
Feb 13 20:10:09.847006 containerd[1480]: time="2025-02-13T20:10:09.846944052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zppg6,Uid:c052e617-ee7a-4d95-8541-47323a0ca995,Namespace:kube-system,Attempt:0,}" Feb 13 20:10:09.857618 containerd[1480]: time="2025-02-13T20:10:09.857216353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-559fcc6975-jdpl4,Uid:ec425091-4bab-4f95-b458-e02d7376e8e9,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:10:09.872760 containerd[1480]: time="2025-02-13T20:10:09.872140540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-559fcc6975-znb7c,Uid:d22c4ac8-b3bd-4eb9-80af-676921861f03,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:10:09.887983 containerd[1480]: time="2025-02-13T20:10:09.886358639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d6457cb66-sszpn,Uid:8ccd9614-bdb8-4f2a-8ecd-86b9a3d3d437,Namespace:calico-system,Attempt:0,}" Feb 13 20:10:09.898758 containerd[1480]: time="2025-02-13T20:10:09.898162435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zd9bb,Uid:ecd3ea8a-4017-49b0-914a-222a63032a3d,Namespace:kube-system,Attempt:0,}" Feb 13 20:10:10.028247 containerd[1480]: time="2025-02-13T20:10:10.028193474Z" level=error msg="Failed to destroy network for sandbox \"fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:10.031439 containerd[1480]: time="2025-02-13T20:10:10.031362585Z" level=error msg="encountered an error cleaning up failed sandbox \"fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:10.031677 containerd[1480]: time="2025-02-13T20:10:10.031468186Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-559fcc6975-jdpl4,Uid:ec425091-4bab-4f95-b458-e02d7376e8e9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:10.031741 kubelet[2695]: E0213 20:10:10.031683 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:10.031784 kubelet[2695]: E0213 20:10:10.031768 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-559fcc6975-jdpl4" Feb 13 20:10:10.031807 kubelet[2695]: E0213 20:10:10.031788 2695 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-559fcc6975-jdpl4" Feb 13 20:10:10.032006 kubelet[2695]: E0213 20:10:10.031829 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-559fcc6975-jdpl4_calico-apiserver(ec425091-4bab-4f95-b458-e02d7376e8e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-559fcc6975-jdpl4_calico-apiserver(ec425091-4bab-4f95-b458-e02d7376e8e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-559fcc6975-jdpl4" podUID="ec425091-4bab-4f95-b458-e02d7376e8e9" Feb 13 20:10:10.058743 containerd[1480]: time="2025-02-13T20:10:10.058683696Z" level=error msg="Failed to destroy network for sandbox \"14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:10.059262 containerd[1480]: time="2025-02-13T20:10:10.059012900Z" level=error msg="encountered an error cleaning up failed sandbox \"14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:10.059262 containerd[1480]: time="2025-02-13T20:10:10.059074940Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zppg6,Uid:c052e617-ee7a-4d95-8541-47323a0ca995,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:10.059347 kubelet[2695]: E0213 20:10:10.059297 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:10.059396 kubelet[2695]: E0213 20:10:10.059361 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zppg6" Feb 13 20:10:10.059396 kubelet[2695]: E0213 20:10:10.059381 2695 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zppg6" Feb 13 20:10:10.061183 kubelet[2695]: E0213 20:10:10.059493 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-zppg6_kube-system(c052e617-ee7a-4d95-8541-47323a0ca995)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-zppg6_kube-system(c052e617-ee7a-4d95-8541-47323a0ca995)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zppg6" podUID="c052e617-ee7a-4d95-8541-47323a0ca995" Feb 13 20:10:10.062373 containerd[1480]: time="2025-02-13T20:10:10.062326733Z" level=error msg="Failed to destroy network for sandbox \"f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:10.064156 containerd[1480]: time="2025-02-13T20:10:10.064102630Z" level=error msg="encountered an error cleaning up failed sandbox \"f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:10.064266 containerd[1480]: time="2025-02-13T20:10:10.064183911Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zd9bb,Uid:ecd3ea8a-4017-49b0-914a-222a63032a3d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:10.064485 kubelet[2695]: E0213 20:10:10.064448 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:10.064608 kubelet[2695]: E0213 20:10:10.064504 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zd9bb" Feb 13 20:10:10.064608 kubelet[2695]: E0213 20:10:10.064522 2695 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zd9bb" Feb 13 20:10:10.064608 kubelet[2695]: E0213 20:10:10.064584 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-zd9bb_kube-system(ecd3ea8a-4017-49b0-914a-222a63032a3d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-zd9bb_kube-system(ecd3ea8a-4017-49b0-914a-222a63032a3d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zd9bb" podUID="ecd3ea8a-4017-49b0-914a-222a63032a3d" Feb 13 20:10:10.087790 containerd[1480]: time="2025-02-13T20:10:10.087584103Z" level=error msg="Failed to destroy network for sandbox \"cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:10.087790 containerd[1480]: time="2025-02-13T20:10:10.087660464Z" level=error msg="Failed to destroy network for sandbox \"2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:10.089807 containerd[1480]: time="2025-02-13T20:10:10.089738245Z" level=error msg="encountered an error cleaning up failed sandbox \"cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:10.089901 containerd[1480]: time="2025-02-13T20:10:10.089826286Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-559fcc6975-znb7c,Uid:d22c4ac8-b3bd-4eb9-80af-676921861f03,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:10.090157 containerd[1480]: time="2025-02-13T20:10:10.090090368Z" level=error msg="encountered an error cleaning up failed sandbox 
\"2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:10.090299 containerd[1480]: time="2025-02-13T20:10:10.090151649Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d6457cb66-sszpn,Uid:8ccd9614-bdb8-4f2a-8ecd-86b9a3d3d437,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:10.091805 kubelet[2695]: E0213 20:10:10.090476 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:10.091805 kubelet[2695]: E0213 20:10:10.090584 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-559fcc6975-znb7c" Feb 13 20:10:10.091805 kubelet[2695]: E0213 20:10:10.090611 2695 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-559fcc6975-znb7c" Feb 13 20:10:10.091999 kubelet[2695]: E0213 20:10:10.090666 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-559fcc6975-znb7c_calico-apiserver(d22c4ac8-b3bd-4eb9-80af-676921861f03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-559fcc6975-znb7c_calico-apiserver(d22c4ac8-b3bd-4eb9-80af-676921861f03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-559fcc6975-znb7c" podUID="d22c4ac8-b3bd-4eb9-80af-676921861f03" Feb 13 20:10:10.091999 kubelet[2695]: E0213 20:10:10.091651 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Feb 13 20:10:10.091999 kubelet[2695]: E0213 20:10:10.091725 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d6457cb66-sszpn" Feb 13 20:10:10.092115 kubelet[2695]: E0213 20:10:10.091743 2695 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d6457cb66-sszpn" Feb 13 20:10:10.092115 kubelet[2695]: E0213 20:10:10.091803 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d6457cb66-sszpn_calico-system(8ccd9614-bdb8-4f2a-8ecd-86b9a3d3d437)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d6457cb66-sszpn_calico-system(8ccd9614-bdb8-4f2a-8ecd-86b9a3d3d437)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d6457cb66-sszpn" podUID="8ccd9614-bdb8-4f2a-8ecd-86b9a3d3d437" Feb 13 20:10:10.546830 kubelet[2695]: I0213 20:10:10.545926 2695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" Feb 13 20:10:10.548354 containerd[1480]: time="2025-02-13T20:10:10.547794794Z" level=info msg="StopPodSandbox for \"cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63\"" Feb 13 20:10:10.548354 containerd[1480]: time="2025-02-13T20:10:10.548049317Z" level=info msg="Ensure that sandbox cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63 in task-service has been cleanup successfully" Feb 13 20:10:10.556339 containerd[1480]: time="2025-02-13T20:10:10.556300519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 20:10:10.556591 kubelet[2695]: I0213 20:10:10.556504 2695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" Feb 13 20:10:10.560803 containerd[1480]: time="2025-02-13T20:10:10.559736393Z" level=info msg="StopPodSandbox for \"f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931\"" Feb 13 20:10:10.560803 containerd[1480]: time="2025-02-13T20:10:10.559982635Z" level=info msg="Ensure that sandbox f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931 in task-service has been cleanup successfully" Feb 13 20:10:10.560966 kubelet[2695]: I0213 20:10:10.560311 2695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" Feb 13 20:10:10.561222 
containerd[1480]: time="2025-02-13T20:10:10.561193367Z" level=info msg="StopPodSandbox for \"2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1\"" Feb 13 20:10:10.564324 containerd[1480]: time="2025-02-13T20:10:10.563757473Z" level=info msg="Ensure that sandbox 2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1 in task-service has been cleanup successfully" Feb 13 20:10:10.572204 kubelet[2695]: I0213 20:10:10.570840 2695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" Feb 13 20:10:10.572928 containerd[1480]: time="2025-02-13T20:10:10.572399398Z" level=info msg="StopPodSandbox for \"fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf\"" Feb 13 20:10:10.572928 containerd[1480]: time="2025-02-13T20:10:10.572850283Z" level=info msg="Ensure that sandbox fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf in task-service has been cleanup successfully" Feb 13 20:10:10.579123 kubelet[2695]: I0213 20:10:10.579076 2695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" Feb 13 20:10:10.580613 containerd[1480]: time="2025-02-13T20:10:10.580469559Z" level=info msg="StopPodSandbox for \"14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee\"" Feb 13 20:10:10.581293 containerd[1480]: time="2025-02-13T20:10:10.580678481Z" level=info msg="Ensure that sandbox 14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee in task-service has been cleanup successfully" Feb 13 20:10:10.650142 containerd[1480]: time="2025-02-13T20:10:10.650048770Z" level=error msg="StopPodSandbox for \"2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1\" failed" error="failed to destroy network for sandbox \"2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:10.664244 kubelet[2695]: E0213 20:10:10.661886 2695 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" Feb 13 20:10:10.664244 kubelet[2695]: E0213 20:10:10.661960 2695 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1"} Feb 13 20:10:10.664244 kubelet[2695]: E0213 20:10:10.662036 2695 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8ccd9614-bdb8-4f2a-8ecd-86b9a3d3d437\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:10:10.664244 kubelet[2695]: E0213 20:10:10.662058 2695 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"KillPodSandbox\" for \"8ccd9614-bdb8-4f2a-8ecd-86b9a3d3d437\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d6457cb66-sszpn" podUID="8ccd9614-bdb8-4f2a-8ecd-86b9a3d3d437" Feb 13 20:10:10.666635 containerd[1480]: time="2025-02-13T20:10:10.666578534Z" level=error msg="StopPodSandbox for \"fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf\" failed" error="failed to destroy network for sandbox \"fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:10.667792 kubelet[2695]: E0213 20:10:10.667520 2695 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" Feb 13 20:10:10.667990 kubelet[2695]: E0213 20:10:10.667955 2695 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf"} Feb 13 20:10:10.668122 kubelet[2695]: E0213 20:10:10.668105 2695 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ec425091-4bab-4f95-b458-e02d7376e8e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:10:10.668698 kubelet[2695]: E0213 20:10:10.668556 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ec425091-4bab-4f95-b458-e02d7376e8e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-559fcc6975-jdpl4" podUID="ec425091-4bab-4f95-b458-e02d7376e8e9" Feb 13 20:10:10.678394 containerd[1480]: time="2025-02-13T20:10:10.677733765Z" level=error msg="StopPodSandbox for \"cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63\" failed" error="failed to destroy network for sandbox \"cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:10.678582 kubelet[2695]: E0213 
20:10:10.678229 2695 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" Feb 13 20:10:10.678582 kubelet[2695]: E0213 20:10:10.678287 2695 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63"} Feb 13 20:10:10.678582 kubelet[2695]: E0213 20:10:10.678321 2695 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d22c4ac8-b3bd-4eb9-80af-676921861f03\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:10:10.678582 kubelet[2695]: E0213 20:10:10.678343 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d22c4ac8-b3bd-4eb9-80af-676921861f03\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-559fcc6975-znb7c" podUID="d22c4ac8-b3bd-4eb9-80af-676921861f03" Feb 13 20:10:10.684490 containerd[1480]: time="2025-02-13T20:10:10.683199139Z" level=error msg="StopPodSandbox for \"14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee\" failed" error="failed to destroy network for sandbox \"14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:10.684677 kubelet[2695]: E0213 20:10:10.683475 2695 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" Feb 13 20:10:10.684677 kubelet[2695]: E0213 20:10:10.683528 2695 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee"} Feb 13 20:10:10.684677 kubelet[2695]: E0213 20:10:10.683587 2695 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c052e617-ee7a-4d95-8541-47323a0ca995\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:10:10.684677 kubelet[2695]: E0213 20:10:10.683613 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c052e617-ee7a-4d95-8541-47323a0ca995\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zppg6" podUID="c052e617-ee7a-4d95-8541-47323a0ca995" Feb 13 20:10:10.688057 containerd[1480]: time="2025-02-13T20:10:10.687992426Z" level=error msg="StopPodSandbox for \"f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931\" failed" error="failed to destroy network for sandbox \"f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:10.688469 kubelet[2695]: E0213 20:10:10.688388 2695 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" Feb 13 20:10:10.688559 kubelet[2695]: E0213 20:10:10.688487 2695 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931"} Feb 13 20:10:10.688705 kubelet[2695]: E0213 20:10:10.688561 2695 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ecd3ea8a-4017-49b0-914a-222a63032a3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:10:10.688705 kubelet[2695]: E0213 20:10:10.688596 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ecd3ea8a-4017-49b0-914a-222a63032a3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zd9bb" podUID="ecd3ea8a-4017-49b0-914a-222a63032a3d" Feb 13 20:10:10.766114 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee-shm.mount: Deactivated successfully. 
Feb 13 20:10:11.408603 systemd[1]: Created slice kubepods-besteffort-pode49c3cb5_faf1_41f3_bccd_16c39f19a201.slice - libcontainer container kubepods-besteffort-pode49c3cb5_faf1_41f3_bccd_16c39f19a201.slice. Feb 13 20:10:11.412143 containerd[1480]: time="2025-02-13T20:10:11.411595782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dfbrs,Uid:e49c3cb5-faf1-41f3-bccd-16c39f19a201,Namespace:calico-system,Attempt:0,}" Feb 13 20:10:11.496598 containerd[1480]: time="2025-02-13T20:10:11.496513395Z" level=error msg="Failed to destroy network for sandbox \"d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:11.498164 containerd[1480]: time="2025-02-13T20:10:11.498085611Z" level=error msg="encountered an error cleaning up failed sandbox \"d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:11.498343 containerd[1480]: time="2025-02-13T20:10:11.498236772Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dfbrs,Uid:e49c3cb5-faf1-41f3-bccd-16c39f19a201,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:11.500201 kubelet[2695]: E0213 20:10:11.499881 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:11.500201 kubelet[2695]: E0213 20:10:11.500059 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dfbrs" Feb 13 20:10:11.500821 kubelet[2695]: E0213 20:10:11.500532 2695 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dfbrs" Feb 13 20:10:11.502265 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc-shm.mount: Deactivated successfully. 
Feb 13 20:10:11.503640 kubelet[2695]: E0213 20:10:11.501513 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dfbrs_calico-system(e49c3cb5-faf1-41f3-bccd-16c39f19a201)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dfbrs_calico-system(e49c3cb5-faf1-41f3-bccd-16c39f19a201)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dfbrs" podUID="e49c3cb5-faf1-41f3-bccd-16c39f19a201" Feb 13 20:10:11.583570 kubelet[2695]: I0213 20:10:11.583489 2695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" Feb 13 20:10:11.585650 containerd[1480]: time="2025-02-13T20:10:11.584975284Z" level=info msg="StopPodSandbox for \"d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc\"" Feb 13 20:10:11.587970 containerd[1480]: time="2025-02-13T20:10:11.587508150Z" level=info msg="Ensure that sandbox d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc in task-service has been cleanup successfully" Feb 13 20:10:11.620667 containerd[1480]: time="2025-02-13T20:10:11.620609402Z" level=error msg="StopPodSandbox for \"d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc\" failed" error="failed to destroy network for sandbox \"d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:11.621027 kubelet[2695]: E0213 20:10:11.620814 2695 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" Feb 13 20:10:11.621027 kubelet[2695]: E0213 20:10:11.620860 2695 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc"} Feb 13 20:10:11.621027 kubelet[2695]: E0213 20:10:11.620892 2695 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e49c3cb5-faf1-41f3-bccd-16c39f19a201\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:10:11.621027 kubelet[2695]: E0213 20:10:11.620915 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e49c3cb5-faf1-41f3-bccd-16c39f19a201\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dfbrs" podUID="e49c3cb5-faf1-41f3-bccd-16c39f19a201" Feb 13 20:10:17.863207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount417334735.mount: Deactivated successfully. Feb 13 20:10:17.905817 containerd[1480]: time="2025-02-13T20:10:17.905660042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:17.908566 containerd[1480]: time="2025-02-13T20:10:17.907439461Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:17.908566 containerd[1480]: time="2025-02-13T20:10:17.907580783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Feb 13 20:10:17.910114 containerd[1480]: time="2025-02-13T20:10:17.910039169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:17.912848 containerd[1480]: time="2025-02-13T20:10:17.912283073Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 7.355589031s" Feb 13 20:10:17.912848 containerd[1480]: time="2025-02-13T20:10:17.912357714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Feb 13 20:10:17.929217 containerd[1480]: time="2025-02-13T20:10:17.929168694Z" level=info msg="CreateContainer within sandbox \"a58b941953c6005d059db410a8b2ea1b15c4dad001ee352b85cba28b27b7de25\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 20:10:17.948298 containerd[1480]: time="2025-02-13T20:10:17.947620091Z" level=info msg="CreateContainer within sandbox \"a58b941953c6005d059db410a8b2ea1b15c4dad001ee352b85cba28b27b7de25\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a60b54125c27dc87e866fccd6c07684983ab83e5708ec0e24b63eaf4e1e42d44\"" Feb 13 20:10:17.949701 containerd[1480]: time="2025-02-13T20:10:17.949658153Z" level=info msg="StartContainer for \"a60b54125c27dc87e866fccd6c07684983ab83e5708ec0e24b63eaf4e1e42d44\"" Feb 13 20:10:17.986803 systemd[1]: Started cri-containerd-a60b54125c27dc87e866fccd6c07684983ab83e5708ec0e24b63eaf4e1e42d44.scope - libcontainer container a60b54125c27dc87e866fccd6c07684983ab83e5708ec0e24b63eaf4e1e42d44. Feb 13 20:10:18.029361 containerd[1480]: time="2025-02-13T20:10:18.029312167Z" level=info msg="StartContainer for \"a60b54125c27dc87e866fccd6c07684983ab83e5708ec0e24b63eaf4e1e42d44\" returns successfully" Feb 13 20:10:18.147930 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 20:10:18.148094 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Feb 13 20:10:18.640934 kubelet[2695]: I0213 20:10:18.640080 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8gmvh" podStartSLOduration=2.611306701 podStartE2EDuration="21.640063353s" podCreationTimestamp="2025-02-13 20:09:57 +0000 UTC" firstStartedPulling="2025-02-13 20:09:58.885380601 +0000 UTC m=+17.620141507" lastFinishedPulling="2025-02-13 20:10:17.914137253 +0000 UTC m=+36.648898159" observedRunningTime="2025-02-13 20:10:18.638080972 +0000 UTC m=+37.372841878" watchObservedRunningTime="2025-02-13 20:10:18.640063353 +0000 UTC m=+37.374824219" Feb 13 20:10:19.649393 systemd[1]: run-containerd-runc-k8s.io-a60b54125c27dc87e866fccd6c07684983ab83e5708ec0e24b63eaf4e1e42d44-runc.8BDvqL.mount: Deactivated successfully. Feb 13 20:10:21.402432 containerd[1480]: time="2025-02-13T20:10:21.401812117Z" level=info msg="StopPodSandbox for \"fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf\"" Feb 13 20:10:21.558000 containerd[1480]: 2025-02-13 20:10:21.483 [INFO][4031] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" Feb 13 20:10:21.558000 containerd[1480]: 2025-02-13 20:10:21.487 [INFO][4031] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" iface="eth0" netns="/var/run/netns/cni-4ad39348-1051-a59e-2920-c6473ea517d3" Feb 13 20:10:21.558000 containerd[1480]: 2025-02-13 20:10:21.489 [INFO][4031] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" iface="eth0" netns="/var/run/netns/cni-4ad39348-1051-a59e-2920-c6473ea517d3" Feb 13 20:10:21.558000 containerd[1480]: 2025-02-13 20:10:21.489 [INFO][4031] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" iface="eth0" netns="/var/run/netns/cni-4ad39348-1051-a59e-2920-c6473ea517d3" Feb 13 20:10:21.558000 containerd[1480]: 2025-02-13 20:10:21.489 [INFO][4031] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" Feb 13 20:10:21.558000 containerd[1480]: 2025-02-13 20:10:21.489 [INFO][4031] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" Feb 13 20:10:21.558000 containerd[1480]: 2025-02-13 20:10:21.535 [INFO][4037] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" HandleID="k8s-pod-network.fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--jdpl4-eth0" Feb 13 20:10:21.558000 containerd[1480]: 2025-02-13 20:10:21.535 [INFO][4037] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:21.558000 containerd[1480]: 2025-02-13 20:10:21.535 [INFO][4037] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:21.558000 containerd[1480]: 2025-02-13 20:10:21.551 [WARNING][4037] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" HandleID="k8s-pod-network.fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--jdpl4-eth0" Feb 13 20:10:21.558000 containerd[1480]: 2025-02-13 20:10:21.551 [INFO][4037] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" HandleID="k8s-pod-network.fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--jdpl4-eth0" Feb 13 20:10:21.558000 containerd[1480]: 2025-02-13 20:10:21.554 [INFO][4037] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:21.558000 containerd[1480]: 2025-02-13 20:10:21.556 [INFO][4031] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" Feb 13 20:10:21.558510 containerd[1480]: time="2025-02-13T20:10:21.558319726Z" level=info msg="TearDown network for sandbox \"fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf\" successfully" Feb 13 20:10:21.558510 containerd[1480]: time="2025-02-13T20:10:21.558351687Z" level=info msg="StopPodSandbox for \"fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf\" returns successfully" Feb 13 20:10:21.562820 systemd[1]: run-netns-cni\x2d4ad39348\x2d1051\x2da59e\x2d2920\x2dc6473ea517d3.mount: Deactivated successfully. Feb 13 20:10:21.569150 containerd[1480]: time="2025-02-13T20:10:21.567994513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-559fcc6975-jdpl4,Uid:ec425091-4bab-4f95-b458-e02d7376e8e9,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:10:21.752251 systemd-networkd[1370]: cali0ab9f708336: Link UP Feb 13 20:10:21.753181 systemd-networkd[1370]: cali0ab9f708336: Gained carrier Feb 13 20:10:21.772517 containerd[1480]: 2025-02-13 20:10:21.625 [INFO][4045] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:10:21.772517 containerd[1480]: 2025-02-13 20:10:21.644 [INFO][4045] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--jdpl4-eth0 calico-apiserver-559fcc6975- calico-apiserver ec425091-4bab-4f95-b458-e02d7376e8e9 750 0 2025-02-13 20:09:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:559fcc6975 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-1-c-c4549fc0d2 calico-apiserver-559fcc6975-jdpl4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0ab9f708336 [] []}} ContainerID="09f3b52748a960f9994ad8328814fcd117e89f2ee6f509c753ed33652520e9e9" Namespace="calico-apiserver" Pod="calico-apiserver-559fcc6975-jdpl4" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--jdpl4-" Feb 13 20:10:21.772517 containerd[1480]: 2025-02-13 20:10:21.644 [INFO][4045] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="09f3b52748a960f9994ad8328814fcd117e89f2ee6f509c753ed33652520e9e9" Namespace="calico-apiserver" Pod="calico-apiserver-559fcc6975-jdpl4" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--jdpl4-eth0" Feb 13 20:10:21.772517 
containerd[1480]: 2025-02-13 20:10:21.682 [INFO][4058] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="09f3b52748a960f9994ad8328814fcd117e89f2ee6f509c753ed33652520e9e9" HandleID="k8s-pod-network.09f3b52748a960f9994ad8328814fcd117e89f2ee6f509c753ed33652520e9e9" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--jdpl4-eth0" Feb 13 20:10:21.772517 containerd[1480]: 2025-02-13 20:10:21.699 [INFO][4058] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="09f3b52748a960f9994ad8328814fcd117e89f2ee6f509c753ed33652520e9e9" HandleID="k8s-pod-network.09f3b52748a960f9994ad8328814fcd117e89f2ee6f509c753ed33652520e9e9" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--jdpl4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000331170), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-1-c-c4549fc0d2", "pod":"calico-apiserver-559fcc6975-jdpl4", "timestamp":"2025-02-13 20:10:21.682199335 +0000 UTC"}, Hostname:"ci-4081-3-1-c-c4549fc0d2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:10:21.772517 containerd[1480]: 2025-02-13 20:10:21.699 [INFO][4058] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:21.772517 containerd[1480]: 2025-02-13 20:10:21.699 [INFO][4058] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:21.772517 containerd[1480]: 2025-02-13 20:10:21.699 [INFO][4058] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-1-c-c4549fc0d2' Feb 13 20:10:21.772517 containerd[1480]: 2025-02-13 20:10:21.703 [INFO][4058] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.09f3b52748a960f9994ad8328814fcd117e89f2ee6f509c753ed33652520e9e9" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:21.772517 containerd[1480]: 2025-02-13 20:10:21.710 [INFO][4058] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:21.772517 containerd[1480]: 2025-02-13 20:10:21.718 [INFO][4058] ipam/ipam.go 489: Trying affinity for 192.168.17.128/26 host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:21.772517 containerd[1480]: 2025-02-13 20:10:21.720 [INFO][4058] ipam/ipam.go 155: Attempting to load block cidr=192.168.17.128/26 host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:21.772517 containerd[1480]: 2025-02-13 20:10:21.724 [INFO][4058] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.17.128/26 host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:21.772517 containerd[1480]: 2025-02-13 20:10:21.724 [INFO][4058] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.17.128/26 handle="k8s-pod-network.09f3b52748a960f9994ad8328814fcd117e89f2ee6f509c753ed33652520e9e9" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:21.772517 containerd[1480]: 2025-02-13 20:10:21.727 [INFO][4058] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.09f3b52748a960f9994ad8328814fcd117e89f2ee6f509c753ed33652520e9e9 Feb 13 20:10:21.772517 containerd[1480]: 2025-02-13 20:10:21.733 [INFO][4058] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.17.128/26 handle="k8s-pod-network.09f3b52748a960f9994ad8328814fcd117e89f2ee6f509c753ed33652520e9e9" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:21.772517 containerd[1480]: 2025-02-13 20:10:21.741 [INFO][4058] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.17.129/26] block=192.168.17.128/26 handle="k8s-pod-network.09f3b52748a960f9994ad8328814fcd117e89f2ee6f509c753ed33652520e9e9" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:21.772517 containerd[1480]: 2025-02-13 20:10:21.741 [INFO][4058] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.17.129/26] handle="k8s-pod-network.09f3b52748a960f9994ad8328814fcd117e89f2ee6f509c753ed33652520e9e9" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:21.772517 containerd[1480]: 2025-02-13 20:10:21.741 [INFO][4058] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:21.772517 containerd[1480]: 2025-02-13 20:10:21.741 [INFO][4058] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.129/26] IPv6=[] ContainerID="09f3b52748a960f9994ad8328814fcd117e89f2ee6f509c753ed33652520e9e9" HandleID="k8s-pod-network.09f3b52748a960f9994ad8328814fcd117e89f2ee6f509c753ed33652520e9e9" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--jdpl4-eth0" Feb 13 20:10:21.773870 containerd[1480]: 2025-02-13 20:10:21.744 [INFO][4045] cni-plugin/k8s.go 386: Populated endpoint ContainerID="09f3b52748a960f9994ad8328814fcd117e89f2ee6f509c753ed33652520e9e9" Namespace="calico-apiserver" Pod="calico-apiserver-559fcc6975-jdpl4" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--jdpl4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--jdpl4-eth0", GenerateName:"calico-apiserver-559fcc6975-", Namespace:"calico-apiserver", SelfLink:"", UID:"ec425091-4bab-4f95-b458-e02d7376e8e9", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"559fcc6975", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-c-c4549fc0d2", ContainerID:"", Pod:"calico-apiserver-559fcc6975-jdpl4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ab9f708336", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:21.773870 containerd[1480]: 2025-02-13 20:10:21.745 [INFO][4045] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.17.129/32] ContainerID="09f3b52748a960f9994ad8328814fcd117e89f2ee6f509c753ed33652520e9e9" Namespace="calico-apiserver" Pod="calico-apiserver-559fcc6975-jdpl4" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--jdpl4-eth0" Feb 13 20:10:21.773870 containerd[1480]: 2025-02-13 20:10:21.745 [INFO][4045] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ab9f708336 ContainerID="09f3b52748a960f9994ad8328814fcd117e89f2ee6f509c753ed33652520e9e9" Namespace="calico-apiserver" 
Pod="calico-apiserver-559fcc6975-jdpl4" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--jdpl4-eth0" Feb 13 20:10:21.773870 containerd[1480]: 2025-02-13 20:10:21.753 [INFO][4045] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09f3b52748a960f9994ad8328814fcd117e89f2ee6f509c753ed33652520e9e9" Namespace="calico-apiserver" Pod="calico-apiserver-559fcc6975-jdpl4" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--jdpl4-eth0" Feb 13 20:10:21.773870 containerd[1480]: 2025-02-13 20:10:21.753 [INFO][4045] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="09f3b52748a960f9994ad8328814fcd117e89f2ee6f509c753ed33652520e9e9" Namespace="calico-apiserver" Pod="calico-apiserver-559fcc6975-jdpl4" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--jdpl4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--jdpl4-eth0", GenerateName:"calico-apiserver-559fcc6975-", Namespace:"calico-apiserver", SelfLink:"", UID:"ec425091-4bab-4f95-b458-e02d7376e8e9", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"559fcc6975", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-c-c4549fc0d2", ContainerID:"09f3b52748a960f9994ad8328814fcd117e89f2ee6f509c753ed33652520e9e9", Pod:"calico-apiserver-559fcc6975-jdpl4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ab9f708336", MAC:"d2:6b:fb:7c:79:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:21.773870 containerd[1480]: 2025-02-13 20:10:21.769 [INFO][4045] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="09f3b52748a960f9994ad8328814fcd117e89f2ee6f509c753ed33652520e9e9" Namespace="calico-apiserver" Pod="calico-apiserver-559fcc6975-jdpl4" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--jdpl4-eth0" Feb 13 20:10:21.800580 containerd[1480]: time="2025-02-13T20:10:21.799825315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:10:21.800580 containerd[1480]: time="2025-02-13T20:10:21.799897516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:10:21.800580 containerd[1480]: time="2025-02-13T20:10:21.799914036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:21.800580 containerd[1480]: time="2025-02-13T20:10:21.800037998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:21.826906 systemd[1]: Started cri-containerd-09f3b52748a960f9994ad8328814fcd117e89f2ee6f509c753ed33652520e9e9.scope - libcontainer container 09f3b52748a960f9994ad8328814fcd117e89f2ee6f509c753ed33652520e9e9. Feb 13 20:10:21.877008 containerd[1480]: time="2025-02-13T20:10:21.876768206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-559fcc6975-jdpl4,Uid:ec425091-4bab-4f95-b458-e02d7376e8e9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"09f3b52748a960f9994ad8328814fcd117e89f2ee6f509c753ed33652520e9e9\"" Feb 13 20:10:21.882563 containerd[1480]: time="2025-02-13T20:10:21.882493429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:10:22.399991 kubelet[2695]: I0213 20:10:22.399090 2695 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:10:22.400426 containerd[1480]: time="2025-02-13T20:10:22.400395426Z" level=info msg="StopPodSandbox for \"14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee\"" Feb 13 20:10:22.546658 containerd[1480]: 2025-02-13 20:10:22.497 [INFO][4148] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" Feb 13 20:10:22.546658 containerd[1480]: 2025-02-13 20:10:22.497 [INFO][4148] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" iface="eth0" netns="/var/run/netns/cni-fce0e32b-a37a-6cd6-7617-cb706e4dd310" Feb 13 20:10:22.546658 containerd[1480]: 2025-02-13 20:10:22.498 [INFO][4148] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" iface="eth0" netns="/var/run/netns/cni-fce0e32b-a37a-6cd6-7617-cb706e4dd310" Feb 13 20:10:22.546658 containerd[1480]: 2025-02-13 20:10:22.498 [INFO][4148] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" iface="eth0" netns="/var/run/netns/cni-fce0e32b-a37a-6cd6-7617-cb706e4dd310" Feb 13 20:10:22.546658 containerd[1480]: 2025-02-13 20:10:22.498 [INFO][4148] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" Feb 13 20:10:22.546658 containerd[1480]: 2025-02-13 20:10:22.498 [INFO][4148] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" Feb 13 20:10:22.546658 containerd[1480]: 2025-02-13 20:10:22.522 [INFO][4156] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" HandleID="k8s-pod-network.14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zppg6-eth0" Feb 13 20:10:22.546658 containerd[1480]: 2025-02-13 20:10:22.523 [INFO][4156] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:22.546658 containerd[1480]: 2025-02-13 20:10:22.523 [INFO][4156] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:10:22.546658 containerd[1480]: 2025-02-13 20:10:22.538 [WARNING][4156] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" HandleID="k8s-pod-network.14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zppg6-eth0" Feb 13 20:10:22.546658 containerd[1480]: 2025-02-13 20:10:22.539 [INFO][4156] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" HandleID="k8s-pod-network.14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zppg6-eth0" Feb 13 20:10:22.546658 containerd[1480]: 2025-02-13 20:10:22.543 [INFO][4156] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:22.546658 containerd[1480]: 2025-02-13 20:10:22.544 [INFO][4148] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" Feb 13 20:10:22.549064 containerd[1480]: time="2025-02-13T20:10:22.546895218Z" level=info msg="TearDown network for sandbox \"14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee\" successfully" Feb 13 20:10:22.549064 containerd[1480]: time="2025-02-13T20:10:22.546941778Z" level=info msg="StopPodSandbox for \"14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee\" returns successfully" Feb 13 20:10:22.549064 containerd[1480]: time="2025-02-13T20:10:22.548122071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zppg6,Uid:c052e617-ee7a-4d95-8541-47323a0ca995,Namespace:kube-system,Attempt:1,}" Feb 13 20:10:22.552721 systemd[1]: run-netns-cni\x2dfce0e32b\x2da37a\x2d6cd6\x2d7617\x2dcb706e4dd310.mount: Deactivated successfully. 
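With calico-node up, the retried StopPodSandbox calls now complete: the plugin enters the sandbox's network namespace, finds the veth already gone, and releases the IP by its handle. The WARNING "Asked to release address but it doesn't exist. Ignoring" is benign in this sequence, since the original sandbox setup failed before any IPAM allocation was made. The cni-* namespaces that systemd unmounts afterwards can be listed on the host with:

    ip netns list    # shows remaining cni-<uuid> namespaces, if any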
Feb 13 20:10:22.830265 systemd-networkd[1370]: cali66063bd2ff7: Link UP Feb 13 20:10:22.832621 systemd-networkd[1370]: cali66063bd2ff7: Gained carrier Feb 13 20:10:22.857632 containerd[1480]: 2025-02-13 20:10:22.602 [INFO][4167] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:10:22.857632 containerd[1480]: 2025-02-13 20:10:22.627 [INFO][4167] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zppg6-eth0 coredns-668d6bf9bc- kube-system c052e617-ee7a-4d95-8541-47323a0ca995 765 0 2025-02-13 20:09:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-1-c-c4549fc0d2 coredns-668d6bf9bc-zppg6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali66063bd2ff7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="48de3b79b5329a65a49806f142196e6d63248e658c565222983406880d90c93e" Namespace="kube-system" Pod="coredns-668d6bf9bc-zppg6" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zppg6-" Feb 13 20:10:22.857632 containerd[1480]: 2025-02-13 20:10:22.627 [INFO][4167] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="48de3b79b5329a65a49806f142196e6d63248e658c565222983406880d90c93e" Namespace="kube-system" Pod="coredns-668d6bf9bc-zppg6" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zppg6-eth0" Feb 13 20:10:22.857632 containerd[1480]: 2025-02-13 20:10:22.668 [INFO][4176] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48de3b79b5329a65a49806f142196e6d63248e658c565222983406880d90c93e" HandleID="k8s-pod-network.48de3b79b5329a65a49806f142196e6d63248e658c565222983406880d90c93e" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zppg6-eth0" Feb 13 20:10:22.857632 containerd[1480]: 2025-02-13 20:10:22.684 [INFO][4176] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="48de3b79b5329a65a49806f142196e6d63248e658c565222983406880d90c93e" HandleID="k8s-pod-network.48de3b79b5329a65a49806f142196e6d63248e658c565222983406880d90c93e" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zppg6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cd70), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-1-c-c4549fc0d2", "pod":"coredns-668d6bf9bc-zppg6", "timestamp":"2025-02-13 20:10:22.668088687 +0000 UTC"}, Hostname:"ci-4081-3-1-c-c4549fc0d2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:10:22.857632 containerd[1480]: 2025-02-13 20:10:22.684 [INFO][4176] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:22.857632 containerd[1480]: 2025-02-13 20:10:22.685 [INFO][4176] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:10:22.857632 containerd[1480]: 2025-02-13 20:10:22.685 [INFO][4176] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-1-c-c4549fc0d2' Feb 13 20:10:22.857632 containerd[1480]: 2025-02-13 20:10:22.688 [INFO][4176] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.48de3b79b5329a65a49806f142196e6d63248e658c565222983406880d90c93e" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:22.857632 containerd[1480]: 2025-02-13 20:10:22.783 [INFO][4176] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:22.857632 containerd[1480]: 2025-02-13 20:10:22.792 [INFO][4176] ipam/ipam.go 489: Trying affinity for 192.168.17.128/26 host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:22.857632 containerd[1480]: 2025-02-13 20:10:22.795 [INFO][4176] ipam/ipam.go 155: Attempting to load block cidr=192.168.17.128/26 host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:22.857632 containerd[1480]: 2025-02-13 20:10:22.801 [INFO][4176] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.17.128/26 host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:22.857632 containerd[1480]: 2025-02-13 20:10:22.801 [INFO][4176] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.17.128/26 handle="k8s-pod-network.48de3b79b5329a65a49806f142196e6d63248e658c565222983406880d90c93e" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:22.857632 containerd[1480]: 2025-02-13 20:10:22.804 [INFO][4176] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.48de3b79b5329a65a49806f142196e6d63248e658c565222983406880d90c93e Feb 13 20:10:22.857632 containerd[1480]: 2025-02-13 20:10:22.812 [INFO][4176] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.17.128/26 handle="k8s-pod-network.48de3b79b5329a65a49806f142196e6d63248e658c565222983406880d90c93e" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:22.857632 containerd[1480]: 2025-02-13 20:10:22.822 [INFO][4176] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.17.130/26] block=192.168.17.128/26 handle="k8s-pod-network.48de3b79b5329a65a49806f142196e6d63248e658c565222983406880d90c93e" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:22.857632 containerd[1480]: 2025-02-13 20:10:22.822 [INFO][4176] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.17.130/26] handle="k8s-pod-network.48de3b79b5329a65a49806f142196e6d63248e658c565222983406880d90c93e" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:22.857632 containerd[1480]: 2025-02-13 20:10:22.822 [INFO][4176] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:10:22.857632 containerd[1480]: 2025-02-13 20:10:22.822 [INFO][4176] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.130/26] IPv6=[] ContainerID="48de3b79b5329a65a49806f142196e6d63248e658c565222983406880d90c93e" HandleID="k8s-pod-network.48de3b79b5329a65a49806f142196e6d63248e658c565222983406880d90c93e" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zppg6-eth0" Feb 13 20:10:22.858503 containerd[1480]: 2025-02-13 20:10:22.824 [INFO][4167] cni-plugin/k8s.go 386: Populated endpoint ContainerID="48de3b79b5329a65a49806f142196e6d63248e658c565222983406880d90c93e" Namespace="kube-system" Pod="coredns-668d6bf9bc-zppg6" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zppg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zppg6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c052e617-ee7a-4d95-8541-47323a0ca995", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-c-c4549fc0d2", ContainerID:"", Pod:"coredns-668d6bf9bc-zppg6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66063bd2ff7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:22.858503 containerd[1480]: 2025-02-13 20:10:22.825 [INFO][4167] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.17.130/32] ContainerID="48de3b79b5329a65a49806f142196e6d63248e658c565222983406880d90c93e" Namespace="kube-system" Pod="coredns-668d6bf9bc-zppg6" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zppg6-eth0" Feb 13 20:10:22.858503 containerd[1480]: 2025-02-13 20:10:22.825 [INFO][4167] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66063bd2ff7 ContainerID="48de3b79b5329a65a49806f142196e6d63248e658c565222983406880d90c93e" Namespace="kube-system" Pod="coredns-668d6bf9bc-zppg6" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zppg6-eth0" Feb 13 20:10:22.858503 containerd[1480]: 2025-02-13 20:10:22.832 [INFO][4167] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="48de3b79b5329a65a49806f142196e6d63248e658c565222983406880d90c93e" Namespace="kube-system" Pod="coredns-668d6bf9bc-zppg6" 
WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zppg6-eth0" Feb 13 20:10:22.858503 containerd[1480]: 2025-02-13 20:10:22.832 [INFO][4167] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="48de3b79b5329a65a49806f142196e6d63248e658c565222983406880d90c93e" Namespace="kube-system" Pod="coredns-668d6bf9bc-zppg6" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zppg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zppg6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c052e617-ee7a-4d95-8541-47323a0ca995", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-c-c4549fc0d2", ContainerID:"48de3b79b5329a65a49806f142196e6d63248e658c565222983406880d90c93e", Pod:"coredns-668d6bf9bc-zppg6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66063bd2ff7", MAC:"3a:f6:f0:15:97:69", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:22.858503 containerd[1480]: 2025-02-13 20:10:22.853 [INFO][4167] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="48de3b79b5329a65a49806f142196e6d63248e658c565222983406880d90c93e" Namespace="kube-system" Pod="coredns-668d6bf9bc-zppg6" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zppg6-eth0" Feb 13 20:10:22.881579 containerd[1480]: time="2025-02-13T20:10:22.880849096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:10:22.881579 containerd[1480]: time="2025-02-13T20:10:22.880916977Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:10:22.881579 containerd[1480]: time="2025-02-13T20:10:22.880937937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:22.881579 containerd[1480]: time="2025-02-13T20:10:22.881041499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:22.912950 systemd[1]: Started cri-containerd-48de3b79b5329a65a49806f142196e6d63248e658c565222983406880d90c93e.scope - libcontainer container 48de3b79b5329a65a49806f142196e6d63248e658c565222983406880d90c93e. Feb 13 20:10:22.939292 systemd-networkd[1370]: cali0ab9f708336: Gained IPv6LL Feb 13 20:10:22.956950 containerd[1480]: time="2025-02-13T20:10:22.956881463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zppg6,Uid:c052e617-ee7a-4d95-8541-47323a0ca995,Namespace:kube-system,Attempt:1,} returns sandbox id \"48de3b79b5329a65a49806f142196e6d63248e658c565222983406880d90c93e\"" Feb 13 20:10:22.961916 containerd[1480]: time="2025-02-13T20:10:22.961842678Z" level=info msg="CreateContainer within sandbox \"48de3b79b5329a65a49806f142196e6d63248e658c565222983406880d90c93e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:10:22.990758 containerd[1480]: time="2025-02-13T20:10:22.990671399Z" level=info msg="CreateContainer within sandbox \"48de3b79b5329a65a49806f142196e6d63248e658c565222983406880d90c93e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e354f50ffcccba68046c266104d69b02b0de9459cd03c7be4f2120ec0212b4ab\"" Feb 13 20:10:22.991224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1521225747.mount: Deactivated successfully. Feb 13 20:10:22.993420 containerd[1480]: time="2025-02-13T20:10:22.992871064Z" level=info msg="StartContainer for \"e354f50ffcccba68046c266104d69b02b0de9459cd03c7be4f2120ec0212b4ab\"" Feb 13 20:10:23.040078 systemd[1]: Started cri-containerd-e354f50ffcccba68046c266104d69b02b0de9459cd03c7be4f2120ec0212b4ab.scope - libcontainer container e354f50ffcccba68046c266104d69b02b0de9459cd03c7be4f2120ec0212b4ab. Feb 13 20:10:23.103742 containerd[1480]: time="2025-02-13T20:10:23.103615185Z" level=info msg="StartContainer for \"e354f50ffcccba68046c266104d69b02b0de9459cd03c7be4f2120ec0212b4ab\" returns successfully" Feb 13 20:10:23.382587 kernel: bpftool[4306]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 20:10:23.401401 containerd[1480]: time="2025-02-13T20:10:23.400836639Z" level=info msg="StopPodSandbox for \"f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931\"" Feb 13 20:10:23.538905 containerd[1480]: 2025-02-13 20:10:23.485 [INFO][4329] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" Feb 13 20:10:23.538905 containerd[1480]: 2025-02-13 20:10:23.485 [INFO][4329] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" iface="eth0" netns="/var/run/netns/cni-20da5b2f-9318-6bd1-fc14-c68d655e486c" Feb 13 20:10:23.538905 containerd[1480]: 2025-02-13 20:10:23.486 [INFO][4329] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" iface="eth0" netns="/var/run/netns/cni-20da5b2f-9318-6bd1-fc14-c68d655e486c" Feb 13 20:10:23.538905 containerd[1480]: 2025-02-13 20:10:23.486 [INFO][4329] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" iface="eth0" netns="/var/run/netns/cni-20da5b2f-9318-6bd1-fc14-c68d655e486c" Feb 13 20:10:23.538905 containerd[1480]: 2025-02-13 20:10:23.486 [INFO][4329] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" Feb 13 20:10:23.538905 containerd[1480]: 2025-02-13 20:10:23.486 [INFO][4329] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" Feb 13 20:10:23.538905 containerd[1480]: 2025-02-13 20:10:23.517 [INFO][4345] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" HandleID="k8s-pod-network.f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zd9bb-eth0" Feb 13 20:10:23.538905 containerd[1480]: 2025-02-13 20:10:23.518 [INFO][4345] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:23.538905 containerd[1480]: 2025-02-13 20:10:23.518 [INFO][4345] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:23.538905 containerd[1480]: 2025-02-13 20:10:23.529 [WARNING][4345] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" HandleID="k8s-pod-network.f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zd9bb-eth0" Feb 13 20:10:23.538905 containerd[1480]: 2025-02-13 20:10:23.530 [INFO][4345] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" HandleID="k8s-pod-network.f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zd9bb-eth0" Feb 13 20:10:23.538905 containerd[1480]: 2025-02-13 20:10:23.532 [INFO][4345] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:23.538905 containerd[1480]: 2025-02-13 20:10:23.535 [INFO][4329] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" Feb 13 20:10:23.539707 containerd[1480]: time="2025-02-13T20:10:23.539627396Z" level=info msg="TearDown network for sandbox \"f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931\" successfully" Feb 13 20:10:23.539707 containerd[1480]: time="2025-02-13T20:10:23.539678997Z" level=info msg="StopPodSandbox for \"f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931\" returns successfully" Feb 13 20:10:23.541423 containerd[1480]: time="2025-02-13T20:10:23.541227694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zd9bb,Uid:ecd3ea8a-4017-49b0-914a-222a63032a3d,Namespace:kube-system,Attempt:1,}" Feb 13 20:10:23.664282 kubelet[2695]: I0213 20:10:23.663917 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zppg6" podStartSLOduration=36.66389667 podStartE2EDuration="36.66389667s" podCreationTimestamp="2025-02-13 20:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:10:23.663289303 +0000 UTC m=+42.398050209" watchObservedRunningTime="2025-02-13 20:10:23.66389667 +0000 UTC m=+42.398657576" Feb 13 20:10:23.727801 systemd-networkd[1370]: vxlan.calico: Link UP Feb 13 20:10:23.727808 systemd-networkd[1370]: vxlan.calico: Gained carrier Feb 13 20:10:23.793196 systemd-networkd[1370]: cali8146fb1f975: Link UP Feb 13 20:10:23.794185 systemd-networkd[1370]: cali8146fb1f975: Gained carrier Feb 13 20:10:23.818749 containerd[1480]: 2025-02-13 20:10:23.634 [INFO][4353] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zd9bb-eth0 coredns-668d6bf9bc- kube-system ecd3ea8a-4017-49b0-914a-222a63032a3d 774 0 2025-02-13 20:09:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-1-c-c4549fc0d2 coredns-668d6bf9bc-zd9bb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8146fb1f975 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="30ce93a2a8e9c02063cf4cb4b823d2610eec9cf941beea8030185d6861a53209" Namespace="kube-system" Pod="coredns-668d6bf9bc-zd9bb" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zd9bb-" Feb 13 20:10:23.818749 containerd[1480]: 2025-02-13 20:10:23.635 [INFO][4353] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="30ce93a2a8e9c02063cf4cb4b823d2610eec9cf941beea8030185d6861a53209" Namespace="kube-system" Pod="coredns-668d6bf9bc-zd9bb" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zd9bb-eth0" Feb 13 20:10:23.818749 containerd[1480]: 2025-02-13 20:10:23.686 [INFO][4378] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="30ce93a2a8e9c02063cf4cb4b823d2610eec9cf941beea8030185d6861a53209" HandleID="k8s-pod-network.30ce93a2a8e9c02063cf4cb4b823d2610eec9cf941beea8030185d6861a53209" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zd9bb-eth0" Feb 13 20:10:23.818749 containerd[1480]: 2025-02-13 20:10:23.710 [INFO][4378] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="30ce93a2a8e9c02063cf4cb4b823d2610eec9cf941beea8030185d6861a53209" 
HandleID="k8s-pod-network.30ce93a2a8e9c02063cf4cb4b823d2610eec9cf941beea8030185d6861a53209" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zd9bb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002616d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-1-c-c4549fc0d2", "pod":"coredns-668d6bf9bc-zd9bb", "timestamp":"2025-02-13 20:10:23.686927808 +0000 UTC"}, Hostname:"ci-4081-3-1-c-c4549fc0d2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:10:23.818749 containerd[1480]: 2025-02-13 20:10:23.710 [INFO][4378] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:23.818749 containerd[1480]: 2025-02-13 20:10:23.710 [INFO][4378] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:23.818749 containerd[1480]: 2025-02-13 20:10:23.710 [INFO][4378] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-1-c-c4549fc0d2' Feb 13 20:10:23.818749 containerd[1480]: 2025-02-13 20:10:23.714 [INFO][4378] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.30ce93a2a8e9c02063cf4cb4b823d2610eec9cf941beea8030185d6861a53209" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:23.818749 containerd[1480]: 2025-02-13 20:10:23.732 [INFO][4378] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:23.818749 containerd[1480]: 2025-02-13 20:10:23.746 [INFO][4378] ipam/ipam.go 489: Trying affinity for 192.168.17.128/26 host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:23.818749 containerd[1480]: 2025-02-13 20:10:23.755 [INFO][4378] ipam/ipam.go 155: Attempting to load block cidr=192.168.17.128/26 host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:23.818749 containerd[1480]: 2025-02-13 20:10:23.758 [INFO][4378] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.17.128/26 host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:23.818749 containerd[1480]: 2025-02-13 20:10:23.759 [INFO][4378] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.17.128/26 handle="k8s-pod-network.30ce93a2a8e9c02063cf4cb4b823d2610eec9cf941beea8030185d6861a53209" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:23.818749 containerd[1480]: 2025-02-13 20:10:23.761 [INFO][4378] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.30ce93a2a8e9c02063cf4cb4b823d2610eec9cf941beea8030185d6861a53209 Feb 13 20:10:23.818749 containerd[1480]: 2025-02-13 20:10:23.766 [INFO][4378] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.17.128/26 handle="k8s-pod-network.30ce93a2a8e9c02063cf4cb4b823d2610eec9cf941beea8030185d6861a53209" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:23.818749 containerd[1480]: 2025-02-13 20:10:23.779 [INFO][4378] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.17.131/26] block=192.168.17.128/26 handle="k8s-pod-network.30ce93a2a8e9c02063cf4cb4b823d2610eec9cf941beea8030185d6861a53209" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:23.818749 containerd[1480]: 2025-02-13 20:10:23.780 [INFO][4378] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.17.131/26] handle="k8s-pod-network.30ce93a2a8e9c02063cf4cb4b823d2610eec9cf941beea8030185d6861a53209" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:23.818749 containerd[1480]: 2025-02-13 20:10:23.780 [INFO][4378] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:10:23.818749 containerd[1480]: 2025-02-13 20:10:23.780 [INFO][4378] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.131/26] IPv6=[] ContainerID="30ce93a2a8e9c02063cf4cb4b823d2610eec9cf941beea8030185d6861a53209" HandleID="k8s-pod-network.30ce93a2a8e9c02063cf4cb4b823d2610eec9cf941beea8030185d6861a53209" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zd9bb-eth0" Feb 13 20:10:23.820921 containerd[1480]: 2025-02-13 20:10:23.785 [INFO][4353] cni-plugin/k8s.go 386: Populated endpoint ContainerID="30ce93a2a8e9c02063cf4cb4b823d2610eec9cf941beea8030185d6861a53209" Namespace="kube-system" Pod="coredns-668d6bf9bc-zd9bb" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zd9bb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zd9bb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ecd3ea8a-4017-49b0-914a-222a63032a3d", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-c-c4549fc0d2", ContainerID:"", Pod:"coredns-668d6bf9bc-zd9bb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8146fb1f975", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:23.820921 containerd[1480]: 2025-02-13 20:10:23.785 [INFO][4353] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.17.131/32] ContainerID="30ce93a2a8e9c02063cf4cb4b823d2610eec9cf941beea8030185d6861a53209" Namespace="kube-system" Pod="coredns-668d6bf9bc-zd9bb" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zd9bb-eth0" Feb 13 20:10:23.820921 containerd[1480]: 2025-02-13 20:10:23.785 [INFO][4353] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8146fb1f975 ContainerID="30ce93a2a8e9c02063cf4cb4b823d2610eec9cf941beea8030185d6861a53209" Namespace="kube-system" Pod="coredns-668d6bf9bc-zd9bb" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zd9bb-eth0" Feb 13 20:10:23.820921 containerd[1480]: 2025-02-13 20:10:23.792 [INFO][4353] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="30ce93a2a8e9c02063cf4cb4b823d2610eec9cf941beea8030185d6861a53209" Namespace="kube-system" Pod="coredns-668d6bf9bc-zd9bb" 
WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zd9bb-eth0" Feb 13 20:10:23.820921 containerd[1480]: 2025-02-13 20:10:23.794 [INFO][4353] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="30ce93a2a8e9c02063cf4cb4b823d2610eec9cf941beea8030185d6861a53209" Namespace="kube-system" Pod="coredns-668d6bf9bc-zd9bb" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zd9bb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zd9bb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ecd3ea8a-4017-49b0-914a-222a63032a3d", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-c-c4549fc0d2", ContainerID:"30ce93a2a8e9c02063cf4cb4b823d2610eec9cf941beea8030185d6861a53209", Pod:"coredns-668d6bf9bc-zd9bb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8146fb1f975", MAC:"02:2e:27:ea:ca:d5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:23.820921 containerd[1480]: 2025-02-13 20:10:23.813 [INFO][4353] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="30ce93a2a8e9c02063cf4cb4b823d2610eec9cf941beea8030185d6861a53209" Namespace="kube-system" Pod="coredns-668d6bf9bc-zd9bb" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zd9bb-eth0" Feb 13 20:10:23.848144 containerd[1480]: time="2025-02-13T20:10:23.847821813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:10:23.848144 containerd[1480]: time="2025-02-13T20:10:23.847893414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:10:23.848144 containerd[1480]: time="2025-02-13T20:10:23.847911814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:23.848144 containerd[1480]: time="2025-02-13T20:10:23.848029815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:23.873755 systemd[1]: Started cri-containerd-30ce93a2a8e9c02063cf4cb4b823d2610eec9cf941beea8030185d6861a53209.scope - libcontainer container 30ce93a2a8e9c02063cf4cb4b823d2610eec9cf941beea8030185d6861a53209. Feb 13 20:10:23.891554 systemd[1]: run-netns-cni\x2d20da5b2f\x2d9318\x2d6bd1\x2dfc14\x2dc68d655e486c.mount: Deactivated successfully. Feb 13 20:10:23.924828 containerd[1480]: time="2025-02-13T20:10:23.923986427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zd9bb,Uid:ecd3ea8a-4017-49b0-914a-222a63032a3d,Namespace:kube-system,Attempt:1,} returns sandbox id \"30ce93a2a8e9c02063cf4cb4b823d2610eec9cf941beea8030185d6861a53209\"" Feb 13 20:10:23.931626 containerd[1480]: time="2025-02-13T20:10:23.930828824Z" level=info msg="CreateContainer within sandbox \"30ce93a2a8e9c02063cf4cb4b823d2610eec9cf941beea8030185d6861a53209\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:10:23.951527 containerd[1480]: time="2025-02-13T20:10:23.951457296Z" level=info msg="CreateContainer within sandbox \"30ce93a2a8e9c02063cf4cb4b823d2610eec9cf941beea8030185d6861a53209\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2c0af03a2d3bd5f9c8c76d50c278ebaa711d08bf3ad428f92f5f1ffc73c6ef22\"" Feb 13 20:10:23.953982 containerd[1480]: time="2025-02-13T20:10:23.953946803Z" level=info msg="StartContainer for \"2c0af03a2d3bd5f9c8c76d50c278ebaa711d08bf3ad428f92f5f1ffc73c6ef22\"" Feb 13 20:10:24.010963 systemd[1]: Started cri-containerd-2c0af03a2d3bd5f9c8c76d50c278ebaa711d08bf3ad428f92f5f1ffc73c6ef22.scope - libcontainer container 2c0af03a2d3bd5f9c8c76d50c278ebaa711d08bf3ad428f92f5f1ffc73c6ef22. Feb 13 20:10:24.050689 containerd[1480]: time="2025-02-13T20:10:24.050641812Z" level=info msg="StartContainer for \"2c0af03a2d3bd5f9c8c76d50c278ebaa711d08bf3ad428f92f5f1ffc73c6ef22\" returns successfully" Feb 13 20:10:24.089691 systemd-networkd[1370]: cali66063bd2ff7: Gained IPv6LL Feb 13 20:10:24.399149 containerd[1480]: time="2025-02-13T20:10:24.399103828Z" level=info msg="StopPodSandbox for \"2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1\"" Feb 13 20:10:24.504602 containerd[1480]: 2025-02-13 20:10:24.463 [INFO][4549] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" Feb 13 20:10:24.504602 containerd[1480]: 2025-02-13 20:10:24.463 [INFO][4549] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" iface="eth0" netns="/var/run/netns/cni-f7b51cbc-0d5f-1c6d-7040-6aa3ea796862" Feb 13 20:10:24.504602 containerd[1480]: 2025-02-13 20:10:24.464 [INFO][4549] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" iface="eth0" netns="/var/run/netns/cni-f7b51cbc-0d5f-1c6d-7040-6aa3ea796862" Feb 13 20:10:24.504602 containerd[1480]: 2025-02-13 20:10:24.464 [INFO][4549] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" iface="eth0" netns="/var/run/netns/cni-f7b51cbc-0d5f-1c6d-7040-6aa3ea796862" Feb 13 20:10:24.504602 containerd[1480]: 2025-02-13 20:10:24.464 [INFO][4549] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" Feb 13 20:10:24.504602 containerd[1480]: 2025-02-13 20:10:24.464 [INFO][4549] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" Feb 13 20:10:24.504602 containerd[1480]: 2025-02-13 20:10:24.488 [INFO][4555] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" HandleID="k8s-pod-network.2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--kube--controllers--5d6457cb66--sszpn-eth0" Feb 13 20:10:24.504602 containerd[1480]: 2025-02-13 20:10:24.488 [INFO][4555] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:24.504602 containerd[1480]: 2025-02-13 20:10:24.488 [INFO][4555] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:24.504602 containerd[1480]: 2025-02-13 20:10:24.498 [WARNING][4555] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" HandleID="k8s-pod-network.2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--kube--controllers--5d6457cb66--sszpn-eth0" Feb 13 20:10:24.504602 containerd[1480]: 2025-02-13 20:10:24.499 [INFO][4555] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" HandleID="k8s-pod-network.2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--kube--controllers--5d6457cb66--sszpn-eth0" Feb 13 20:10:24.504602 containerd[1480]: 2025-02-13 20:10:24.501 [INFO][4555] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:24.504602 containerd[1480]: 2025-02-13 20:10:24.503 [INFO][4549] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" Feb 13 20:10:24.505125 containerd[1480]: time="2025-02-13T20:10:24.505088785Z" level=info msg="TearDown network for sandbox \"2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1\" successfully" Feb 13 20:10:24.505125 containerd[1480]: time="2025-02-13T20:10:24.505125426Z" level=info msg="StopPodSandbox for \"2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1\" returns successfully" Feb 13 20:10:24.506137 containerd[1480]: time="2025-02-13T20:10:24.506107157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d6457cb66-sszpn,Uid:8ccd9614-bdb8-4f2a-8ecd-86b9a3d3d437,Namespace:calico-system,Attempt:1,}" Feb 13 20:10:24.701764 kubelet[2695]: I0213 20:10:24.699839 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zd9bb" podStartSLOduration=37.699818825 podStartE2EDuration="37.699818825s" podCreationTimestamp="2025-02-13 20:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:10:24.673051442 +0000 UTC m=+43.407812348" watchObservedRunningTime="2025-02-13 20:10:24.699818825 +0000 UTC m=+43.434579731" Feb 13 20:10:24.745817 systemd-networkd[1370]: cali5d1e52d4fbb: Link UP Feb 13 20:10:24.746039 systemd-networkd[1370]: cali5d1e52d4fbb: Gained carrier Feb 13 20:10:24.766055 containerd[1480]: 2025-02-13 20:10:24.573 [INFO][4561] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--1--c--c4549fc0d2-k8s-calico--kube--controllers--5d6457cb66--sszpn-eth0 calico-kube-controllers-5d6457cb66- calico-system 8ccd9614-bdb8-4f2a-8ecd-86b9a3d3d437 787 0 2025-02-13 20:09:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d6457cb66 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-1-c-c4549fc0d2 calico-kube-controllers-5d6457cb66-sszpn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5d1e52d4fbb [] []}} ContainerID="4e22afe887d88646c9e50880d03deb23898eee26b449ee1edcd0886974d311c9" Namespace="calico-system" Pod="calico-kube-controllers-5d6457cb66-sszpn" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-calico--kube--controllers--5d6457cb66--sszpn-" Feb 13 20:10:24.766055 containerd[1480]: 2025-02-13 20:10:24.573 [INFO][4561] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4e22afe887d88646c9e50880d03deb23898eee26b449ee1edcd0886974d311c9" Namespace="calico-system" Pod="calico-kube-controllers-5d6457cb66-sszpn" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-calico--kube--controllers--5d6457cb66--sszpn-eth0" Feb 13 20:10:24.766055 containerd[1480]: 2025-02-13 20:10:24.625 [INFO][4572] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e22afe887d88646c9e50880d03deb23898eee26b449ee1edcd0886974d311c9" HandleID="k8s-pod-network.4e22afe887d88646c9e50880d03deb23898eee26b449ee1edcd0886974d311c9" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--kube--controllers--5d6457cb66--sszpn-eth0" Feb 13 20:10:24.766055 containerd[1480]: 2025-02-13 20:10:24.645 [INFO][4572] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="4e22afe887d88646c9e50880d03deb23898eee26b449ee1edcd0886974d311c9" HandleID="k8s-pod-network.4e22afe887d88646c9e50880d03deb23898eee26b449ee1edcd0886974d311c9" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--kube--controllers--5d6457cb66--sszpn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316ad0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-1-c-c4549fc0d2", "pod":"calico-kube-controllers-5d6457cb66-sszpn", "timestamp":"2025-02-13 20:10:24.625484305 +0000 UTC"}, Hostname:"ci-4081-3-1-c-c4549fc0d2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:10:24.766055 containerd[1480]: 2025-02-13 20:10:24.645 [INFO][4572] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:24.766055 containerd[1480]: 2025-02-13 20:10:24.645 [INFO][4572] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:24.766055 containerd[1480]: 2025-02-13 20:10:24.646 [INFO][4572] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-1-c-c4549fc0d2' Feb 13 20:10:24.766055 containerd[1480]: 2025-02-13 20:10:24.651 [INFO][4572] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4e22afe887d88646c9e50880d03deb23898eee26b449ee1edcd0886974d311c9" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:24.766055 containerd[1480]: 2025-02-13 20:10:24.661 [INFO][4572] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:24.766055 containerd[1480]: 2025-02-13 20:10:24.673 [INFO][4572] ipam/ipam.go 489: Trying affinity for 192.168.17.128/26 host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:24.766055 containerd[1480]: 2025-02-13 20:10:24.685 [INFO][4572] ipam/ipam.go 155: Attempting to load block cidr=192.168.17.128/26 host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:24.766055 containerd[1480]: 2025-02-13 20:10:24.693 [INFO][4572] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.17.128/26 host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:24.766055 containerd[1480]: 2025-02-13 20:10:24.693 [INFO][4572] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.17.128/26 handle="k8s-pod-network.4e22afe887d88646c9e50880d03deb23898eee26b449ee1edcd0886974d311c9" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:24.766055 containerd[1480]: 2025-02-13 20:10:24.696 [INFO][4572] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4e22afe887d88646c9e50880d03deb23898eee26b449ee1edcd0886974d311c9 Feb 13 20:10:24.766055 containerd[1480]: 2025-02-13 20:10:24.712 [INFO][4572] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.17.128/26 handle="k8s-pod-network.4e22afe887d88646c9e50880d03deb23898eee26b449ee1edcd0886974d311c9" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:24.766055 containerd[1480]: 2025-02-13 20:10:24.731 [INFO][4572] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.17.132/26] block=192.168.17.128/26 handle="k8s-pod-network.4e22afe887d88646c9e50880d03deb23898eee26b449ee1edcd0886974d311c9" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:24.766055 containerd[1480]: 2025-02-13 20:10:24.731 [INFO][4572] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.17.132/26] handle="k8s-pod-network.4e22afe887d88646c9e50880d03deb23898eee26b449ee1edcd0886974d311c9" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:24.766055 containerd[1480]: 
2025-02-13 20:10:24.731 [INFO][4572] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:24.766055 containerd[1480]: 2025-02-13 20:10:24.731 [INFO][4572] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.132/26] IPv6=[] ContainerID="4e22afe887d88646c9e50880d03deb23898eee26b449ee1edcd0886974d311c9" HandleID="k8s-pod-network.4e22afe887d88646c9e50880d03deb23898eee26b449ee1edcd0886974d311c9" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--kube--controllers--5d6457cb66--sszpn-eth0" Feb 13 20:10:24.766769 containerd[1480]: 2025-02-13 20:10:24.736 [INFO][4561] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4e22afe887d88646c9e50880d03deb23898eee26b449ee1edcd0886974d311c9" Namespace="calico-system" Pod="calico-kube-controllers-5d6457cb66-sszpn" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-calico--kube--controllers--5d6457cb66--sszpn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--c--c4549fc0d2-k8s-calico--kube--controllers--5d6457cb66--sszpn-eth0", GenerateName:"calico-kube-controllers-5d6457cb66-", Namespace:"calico-system", SelfLink:"", UID:"8ccd9614-bdb8-4f2a-8ecd-86b9a3d3d437", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d6457cb66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-c-c4549fc0d2", ContainerID:"", Pod:"calico-kube-controllers-5d6457cb66-sszpn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5d1e52d4fbb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:24.766769 containerd[1480]: 2025-02-13 20:10:24.736 [INFO][4561] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.17.132/32] ContainerID="4e22afe887d88646c9e50880d03deb23898eee26b449ee1edcd0886974d311c9" Namespace="calico-system" Pod="calico-kube-controllers-5d6457cb66-sszpn" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-calico--kube--controllers--5d6457cb66--sszpn-eth0" Feb 13 20:10:24.766769 containerd[1480]: 2025-02-13 20:10:24.736 [INFO][4561] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d1e52d4fbb ContainerID="4e22afe887d88646c9e50880d03deb23898eee26b449ee1edcd0886974d311c9" Namespace="calico-system" Pod="calico-kube-controllers-5d6457cb66-sszpn" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-calico--kube--controllers--5d6457cb66--sszpn-eth0" Feb 13 20:10:24.766769 containerd[1480]: 2025-02-13 20:10:24.744 [INFO][4561] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e22afe887d88646c9e50880d03deb23898eee26b449ee1edcd0886974d311c9" Namespace="calico-system" Pod="calico-kube-controllers-5d6457cb66-sszpn" 
WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-calico--kube--controllers--5d6457cb66--sszpn-eth0" Feb 13 20:10:24.766769 containerd[1480]: 2025-02-13 20:10:24.745 [INFO][4561] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4e22afe887d88646c9e50880d03deb23898eee26b449ee1edcd0886974d311c9" Namespace="calico-system" Pod="calico-kube-controllers-5d6457cb66-sszpn" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-calico--kube--controllers--5d6457cb66--sszpn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--c--c4549fc0d2-k8s-calico--kube--controllers--5d6457cb66--sszpn-eth0", GenerateName:"calico-kube-controllers-5d6457cb66-", Namespace:"calico-system", SelfLink:"", UID:"8ccd9614-bdb8-4f2a-8ecd-86b9a3d3d437", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d6457cb66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-c-c4549fc0d2", ContainerID:"4e22afe887d88646c9e50880d03deb23898eee26b449ee1edcd0886974d311c9", Pod:"calico-kube-controllers-5d6457cb66-sszpn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5d1e52d4fbb", MAC:"4e:50:bd:5b:67:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:24.766769 containerd[1480]: 2025-02-13 20:10:24.762 [INFO][4561] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4e22afe887d88646c9e50880d03deb23898eee26b449ee1edcd0886974d311c9" Namespace="calico-system" Pod="calico-kube-controllers-5d6457cb66-sszpn" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-calico--kube--controllers--5d6457cb66--sszpn-eth0" Feb 13 20:10:24.797879 containerd[1480]: time="2025-02-13T20:10:24.797622489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:10:24.797879 containerd[1480]: time="2025-02-13T20:10:24.797694050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:10:24.797879 containerd[1480]: time="2025-02-13T20:10:24.797729891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:24.798167 containerd[1480]: time="2025-02-13T20:10:24.798086135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:24.820046 systemd[1]: Started cri-containerd-4e22afe887d88646c9e50880d03deb23898eee26b449ee1edcd0886974d311c9.scope - libcontainer container 4e22afe887d88646c9e50880d03deb23898eee26b449ee1edcd0886974d311c9. Feb 13 20:10:24.862246 containerd[1480]: time="2025-02-13T20:10:24.862129378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d6457cb66-sszpn,Uid:8ccd9614-bdb8-4f2a-8ecd-86b9a3d3d437,Namespace:calico-system,Attempt:1,} returns sandbox id \"4e22afe887d88646c9e50880d03deb23898eee26b449ee1edcd0886974d311c9\"" Feb 13 20:10:24.889663 systemd[1]: run-netns-cni\x2df7b51cbc\x2d0d5f\x2d1c6d\x2d7040\x2d6aa3ea796862.mount: Deactivated successfully. Feb 13 20:10:25.245269 systemd-networkd[1370]: vxlan.calico: Gained IPv6LL Feb 13 20:10:25.400777 containerd[1480]: time="2025-02-13T20:10:25.400170326Z" level=info msg="StopPodSandbox for \"d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc\"" Feb 13 20:10:25.400777 containerd[1480]: time="2025-02-13T20:10:25.400626211Z" level=info msg="StopPodSandbox for \"cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63\"" Feb 13 20:10:25.549707 containerd[1480]: 2025-02-13 20:10:25.488 [INFO][4663] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" Feb 13 20:10:25.549707 containerd[1480]: 2025-02-13 20:10:25.488 [INFO][4663] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" iface="eth0" netns="/var/run/netns/cni-209de9fe-c217-9fdd-3d5f-f5ef4b102936" Feb 13 20:10:25.549707 containerd[1480]: 2025-02-13 20:10:25.489 [INFO][4663] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" iface="eth0" netns="/var/run/netns/cni-209de9fe-c217-9fdd-3d5f-f5ef4b102936" Feb 13 20:10:25.549707 containerd[1480]: 2025-02-13 20:10:25.489 [INFO][4663] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" iface="eth0" netns="/var/run/netns/cni-209de9fe-c217-9fdd-3d5f-f5ef4b102936" Feb 13 20:10:25.549707 containerd[1480]: 2025-02-13 20:10:25.489 [INFO][4663] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" Feb 13 20:10:25.549707 containerd[1480]: 2025-02-13 20:10:25.489 [INFO][4663] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" Feb 13 20:10:25.549707 containerd[1480]: 2025-02-13 20:10:25.527 [INFO][4682] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" HandleID="k8s-pod-network.d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-csi--node--driver--dfbrs-eth0" Feb 13 20:10:25.549707 containerd[1480]: 2025-02-13 20:10:25.527 [INFO][4682] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:25.549707 containerd[1480]: 2025-02-13 20:10:25.527 [INFO][4682] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:25.549707 containerd[1480]: 2025-02-13 20:10:25.538 [WARNING][4682] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" HandleID="k8s-pod-network.d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-csi--node--driver--dfbrs-eth0" Feb 13 20:10:25.549707 containerd[1480]: 2025-02-13 20:10:25.538 [INFO][4682] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" HandleID="k8s-pod-network.d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-csi--node--driver--dfbrs-eth0" Feb 13 20:10:25.549707 containerd[1480]: 2025-02-13 20:10:25.541 [INFO][4682] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:25.549707 containerd[1480]: 2025-02-13 20:10:25.547 [INFO][4663] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" Feb 13 20:10:25.553116 containerd[1480]: time="2025-02-13T20:10:25.552846622Z" level=info msg="TearDown network for sandbox \"d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc\" successfully" Feb 13 20:10:25.553116 containerd[1480]: time="2025-02-13T20:10:25.552918543Z" level=info msg="StopPodSandbox for \"d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc\" returns successfully" Feb 13 20:10:25.556025 systemd[1]: run-netns-cni\x2d209de9fe\x2dc217\x2d9fdd\x2d3d5f\x2df5ef4b102936.mount: Deactivated successfully. Feb 13 20:10:25.558188 containerd[1480]: time="2025-02-13T20:10:25.556770147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dfbrs,Uid:e49c3cb5-faf1-41f3-bccd-16c39f19a201,Namespace:calico-system,Attempt:1,}" Feb 13 20:10:25.570593 containerd[1480]: 2025-02-13 20:10:25.480 [INFO][4670] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" Feb 13 20:10:25.570593 containerd[1480]: 2025-02-13 20:10:25.481 [INFO][4670] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" iface="eth0" netns="/var/run/netns/cni-0bd8ea07-7843-d08a-e5b5-f7179a4e9f9c" Feb 13 20:10:25.570593 containerd[1480]: 2025-02-13 20:10:25.481 [INFO][4670] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" iface="eth0" netns="/var/run/netns/cni-0bd8ea07-7843-d08a-e5b5-f7179a4e9f9c" Feb 13 20:10:25.570593 containerd[1480]: 2025-02-13 20:10:25.482 [INFO][4670] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" iface="eth0" netns="/var/run/netns/cni-0bd8ea07-7843-d08a-e5b5-f7179a4e9f9c" Feb 13 20:10:25.570593 containerd[1480]: 2025-02-13 20:10:25.482 [INFO][4670] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" Feb 13 20:10:25.570593 containerd[1480]: 2025-02-13 20:10:25.482 [INFO][4670] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" Feb 13 20:10:25.570593 containerd[1480]: 2025-02-13 20:10:25.530 [INFO][4678] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" HandleID="k8s-pod-network.cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--znb7c-eth0" Feb 13 20:10:25.570593 containerd[1480]: 2025-02-13 20:10:25.530 [INFO][4678] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:25.570593 containerd[1480]: 2025-02-13 20:10:25.542 [INFO][4678] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:25.570593 containerd[1480]: 2025-02-13 20:10:25.563 [WARNING][4678] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" HandleID="k8s-pod-network.cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--znb7c-eth0" Feb 13 20:10:25.570593 containerd[1480]: 2025-02-13 20:10:25.563 [INFO][4678] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" HandleID="k8s-pod-network.cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--znb7c-eth0" Feb 13 20:10:25.570593 containerd[1480]: 2025-02-13 20:10:25.566 [INFO][4678] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:25.570593 containerd[1480]: 2025-02-13 20:10:25.568 [INFO][4670] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" Feb 13 20:10:25.571585 containerd[1480]: time="2025-02-13T20:10:25.571025989Z" level=info msg="TearDown network for sandbox \"cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63\" successfully" Feb 13 20:10:25.571585 containerd[1480]: time="2025-02-13T20:10:25.571065229Z" level=info msg="StopPodSandbox for \"cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63\" returns successfully" Feb 13 20:10:25.573644 containerd[1480]: time="2025-02-13T20:10:25.573361015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-559fcc6975-znb7c,Uid:d22c4ac8-b3bd-4eb9-80af-676921861f03,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:10:25.574385 systemd[1]: run-netns-cni\x2d0bd8ea07\x2d7843\x2dd08a\x2de5b5\x2df7179a4e9f9c.mount: Deactivated successfully. 
Feb 13 20:10:25.753732 systemd-networkd[1370]: cali8146fb1f975: Gained IPv6LL Feb 13 20:10:25.831187 systemd-networkd[1370]: calic4cb1e5dd92: Link UP Feb 13 20:10:25.832043 systemd-networkd[1370]: calic4cb1e5dd92: Gained carrier Feb 13 20:10:25.853873 containerd[1480]: 2025-02-13 20:10:25.668 [INFO][4692] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--1--c--c4549fc0d2-k8s-csi--node--driver--dfbrs-eth0 csi-node-driver- calico-system e49c3cb5-faf1-41f3-bccd-16c39f19a201 804 0 2025-02-13 20:09:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-1-c-c4549fc0d2 csi-node-driver-dfbrs eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic4cb1e5dd92 [] []}} ContainerID="2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f" Namespace="calico-system" Pod="csi-node-driver-dfbrs" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-csi--node--driver--dfbrs-" Feb 13 20:10:25.853873 containerd[1480]: 2025-02-13 20:10:25.669 [INFO][4692] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f" Namespace="calico-system" Pod="csi-node-driver-dfbrs" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-csi--node--driver--dfbrs-eth0" Feb 13 20:10:25.853873 containerd[1480]: 2025-02-13 20:10:25.748 [INFO][4716] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f" HandleID="k8s-pod-network.2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-csi--node--driver--dfbrs-eth0" Feb 13 20:10:25.853873 containerd[1480]: 2025-02-13 20:10:25.770 [INFO][4716] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f" HandleID="k8s-pod-network.2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-csi--node--driver--dfbrs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004df60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-1-c-c4549fc0d2", "pod":"csi-node-driver-dfbrs", "timestamp":"2025-02-13 20:10:25.748466926 +0000 UTC"}, Hostname:"ci-4081-3-1-c-c4549fc0d2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:10:25.853873 containerd[1480]: 2025-02-13 20:10:25.771 [INFO][4716] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:25.853873 containerd[1480]: 2025-02-13 20:10:25.771 [INFO][4716] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
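Every IPAM episode in this log is bracketed by the same "About to acquire / Acquired / Released host-wide IPAM lock" triple, so concurrent CNI ADDs on the node serialize instead of racing for the same block; the assignment steps for csi-node-driver-dfbrs continue below. A sketch of that pattern using a flock(2)-style file lock — the lock path is an assumption for illustration, and Calico's actual locking mechanism may differ:

```go
// Host-wide lock sketch: an exclusive flock on a well-known path
// serializes IPAM work across independent CNI plugin processes.
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func withHostWideLock(path string, fn func() error) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()
	// Blocks until every other holder has released the lock.
	if err := unix.Flock(int(f.Fd()), unix.LOCK_EX); err != nil {
		return err
	}
	defer unix.Flock(int(f.Fd()), unix.LOCK_UN)
	return fn()
}

func main() {
	err := withHostWideLock("/tmp/ipam.lock", func() error { // path is illustrative
		fmt.Println("Acquired host-wide IPAM lock.") // assign addresses here
		return nil
	})
	fmt.Println("Released host-wide IPAM lock.", err)
}
```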
Feb 13 20:10:25.853873 containerd[1480]: 2025-02-13 20:10:25.771 [INFO][4716] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-1-c-c4549fc0d2' Feb 13 20:10:25.853873 containerd[1480]: 2025-02-13 20:10:25.775 [INFO][4716] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:25.853873 containerd[1480]: 2025-02-13 20:10:25.784 [INFO][4716] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:25.853873 containerd[1480]: 2025-02-13 20:10:25.791 [INFO][4716] ipam/ipam.go 489: Trying affinity for 192.168.17.128/26 host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:25.853873 containerd[1480]: 2025-02-13 20:10:25.794 [INFO][4716] ipam/ipam.go 155: Attempting to load block cidr=192.168.17.128/26 host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:25.853873 containerd[1480]: 2025-02-13 20:10:25.799 [INFO][4716] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.17.128/26 host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:25.853873 containerd[1480]: 2025-02-13 20:10:25.800 [INFO][4716] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.17.128/26 handle="k8s-pod-network.2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:25.853873 containerd[1480]: 2025-02-13 20:10:25.802 [INFO][4716] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f Feb 13 20:10:25.853873 containerd[1480]: 2025-02-13 20:10:25.809 [INFO][4716] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.17.128/26 handle="k8s-pod-network.2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:25.853873 containerd[1480]: 2025-02-13 20:10:25.822 [INFO][4716] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.17.133/26] block=192.168.17.128/26 handle="k8s-pod-network.2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:25.853873 containerd[1480]: 2025-02-13 20:10:25.822 [INFO][4716] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.17.133/26] handle="k8s-pod-network.2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:25.853873 containerd[1480]: 2025-02-13 20:10:25.822 [INFO][4716] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
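"Writing block in order to claim IPs" above is the commit point: the claim of 192.168.17.133/26 only holds once the mutated block is persisted, and a competing writer would force a re-read and retry. A generic optimistic-concurrency sketch of that commit — the revision-checked store here is an assumption for illustration, not Calico's datastore API:

```go
// Optimistic write-to-claim sketch: read the block at revision R, mutate,
// write back conditional on R; on conflict, re-read and retry.
package main

import (
	"errors"
	"fmt"
	"sync"
)

type store struct {
	mu  sync.Mutex
	rev int
	val []string // free IPs remaining in the block
}

var errConflict = errors.New("revision conflict")

func (s *store) read() (int, []string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.rev, append([]string(nil), s.val...)
}

func (s *store) writeIfRev(rev int, val []string) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if rev != s.rev {
		return errConflict
	}
	s.rev++
	s.val = val
	return nil
}

func claimOne(s *store) (string, error) {
	for {
		rev, free := s.read() // "Attempting to load block"
		if len(free) == 0 {
			return "", errors.New("block full")
		}
		ip := free[0]
		// "Writing block in order to claim IPs"
		if err := s.writeIfRev(rev, free[1:]); errors.Is(err, errConflict) {
			continue // another assigner won; retry with fresh state
		} else if err != nil {
			return "", err
		}
		return ip, nil // "Successfully claimed IPs"
	}
}

func main() {
	s := &store{val: []string{"192.168.17.133/26", "192.168.17.134/26"}}
	fmt.Println(claimOne(s))
}
```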
Feb 13 20:10:25.853873 containerd[1480]: 2025-02-13 20:10:25.823 [INFO][4716] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.133/26] IPv6=[] ContainerID="2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f" HandleID="k8s-pod-network.2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-csi--node--driver--dfbrs-eth0" Feb 13 20:10:25.855275 containerd[1480]: 2025-02-13 20:10:25.825 [INFO][4692] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f" Namespace="calico-system" Pod="csi-node-driver-dfbrs" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-csi--node--driver--dfbrs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--c--c4549fc0d2-k8s-csi--node--driver--dfbrs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e49c3cb5-faf1-41f3-bccd-16c39f19a201", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-c-c4549fc0d2", ContainerID:"", Pod:"csi-node-driver-dfbrs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic4cb1e5dd92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:25.855275 containerd[1480]: 2025-02-13 20:10:25.826 [INFO][4692] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.17.133/32] ContainerID="2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f" Namespace="calico-system" Pod="csi-node-driver-dfbrs" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-csi--node--driver--dfbrs-eth0" Feb 13 20:10:25.855275 containerd[1480]: 2025-02-13 20:10:25.826 [INFO][4692] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4cb1e5dd92 ContainerID="2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f" Namespace="calico-system" Pod="csi-node-driver-dfbrs" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-csi--node--driver--dfbrs-eth0" Feb 13 20:10:25.855275 containerd[1480]: 2025-02-13 20:10:25.832 [INFO][4692] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f" Namespace="calico-system" Pod="csi-node-driver-dfbrs" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-csi--node--driver--dfbrs-eth0" Feb 13 20:10:25.855275 containerd[1480]: 2025-02-13 20:10:25.834 [INFO][4692] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f" Namespace="calico-system" Pod="csi-node-driver-dfbrs" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-csi--node--driver--dfbrs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--c--c4549fc0d2-k8s-csi--node--driver--dfbrs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e49c3cb5-faf1-41f3-bccd-16c39f19a201", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-c-c4549fc0d2", ContainerID:"2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f", Pod:"csi-node-driver-dfbrs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic4cb1e5dd92", MAC:"be:81:87:a4:9a:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:25.855275 containerd[1480]: 2025-02-13 20:10:25.850 [INFO][4692] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f" Namespace="calico-system" Pod="csi-node-driver-dfbrs" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-csi--node--driver--dfbrs-eth0" Feb 13 20:10:25.890601 containerd[1480]: time="2025-02-13T20:10:25.889180607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:10:25.894163 containerd[1480]: time="2025-02-13T20:10:25.892630486Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:10:25.894163 containerd[1480]: time="2025-02-13T20:10:25.892743847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:25.894163 containerd[1480]: time="2025-02-13T20:10:25.893013650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:25.951878 systemd[1]: Started cri-containerd-2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f.scope - libcontainer container 2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f. 
Feb 13 20:10:26.021816 systemd-networkd[1370]: cali9361c3da6b8: Link UP Feb 13 20:10:26.023062 systemd-networkd[1370]: cali9361c3da6b8: Gained carrier Feb 13 20:10:26.055339 containerd[1480]: 2025-02-13 20:10:25.669 [INFO][4703] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--znb7c-eth0 calico-apiserver-559fcc6975- calico-apiserver d22c4ac8-b3bd-4eb9-80af-676921861f03 803 0 2025-02-13 20:09:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:559fcc6975 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-1-c-c4549fc0d2 calico-apiserver-559fcc6975-znb7c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9361c3da6b8 [] []}} ContainerID="04836565d462825d50be9518ae85e4bbe3bdc97f7b25af2937537a79deb4aefa" Namespace="calico-apiserver" Pod="calico-apiserver-559fcc6975-znb7c" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--znb7c-" Feb 13 20:10:26.055339 containerd[1480]: 2025-02-13 20:10:25.669 [INFO][4703] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="04836565d462825d50be9518ae85e4bbe3bdc97f7b25af2937537a79deb4aefa" Namespace="calico-apiserver" Pod="calico-apiserver-559fcc6975-znb7c" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--znb7c-eth0" Feb 13 20:10:26.055339 containerd[1480]: 2025-02-13 20:10:25.756 [INFO][4715] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04836565d462825d50be9518ae85e4bbe3bdc97f7b25af2937537a79deb4aefa" HandleID="k8s-pod-network.04836565d462825d50be9518ae85e4bbe3bdc97f7b25af2937537a79deb4aefa" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--znb7c-eth0" Feb 13 20:10:26.055339 containerd[1480]: 2025-02-13 20:10:25.778 [INFO][4715] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="04836565d462825d50be9518ae85e4bbe3bdc97f7b25af2937537a79deb4aefa" HandleID="k8s-pod-network.04836565d462825d50be9518ae85e4bbe3bdc97f7b25af2937537a79deb4aefa" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--znb7c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000399a10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-1-c-c4549fc0d2", "pod":"calico-apiserver-559fcc6975-znb7c", "timestamp":"2025-02-13 20:10:25.756278855 +0000 UTC"}, Hostname:"ci-4081-3-1-c-c4549fc0d2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:10:26.055339 containerd[1480]: 2025-02-13 20:10:25.778 [INFO][4715] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:26.055339 containerd[1480]: 2025-02-13 20:10:25.823 [INFO][4715] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:10:26.055339 containerd[1480]: 2025-02-13 20:10:25.823 [INFO][4715] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-1-c-c4549fc0d2' Feb 13 20:10:26.055339 containerd[1480]: 2025-02-13 20:10:25.889 [INFO][4715] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.04836565d462825d50be9518ae85e4bbe3bdc97f7b25af2937537a79deb4aefa" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:26.055339 containerd[1480]: 2025-02-13 20:10:25.918 [INFO][4715] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:26.055339 containerd[1480]: 2025-02-13 20:10:25.944 [INFO][4715] ipam/ipam.go 489: Trying affinity for 192.168.17.128/26 host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:26.055339 containerd[1480]: 2025-02-13 20:10:25.955 [INFO][4715] ipam/ipam.go 155: Attempting to load block cidr=192.168.17.128/26 host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:26.055339 containerd[1480]: 2025-02-13 20:10:25.961 [INFO][4715] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.17.128/26 host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:26.055339 containerd[1480]: 2025-02-13 20:10:25.962 [INFO][4715] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.17.128/26 handle="k8s-pod-network.04836565d462825d50be9518ae85e4bbe3bdc97f7b25af2937537a79deb4aefa" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:26.055339 containerd[1480]: 2025-02-13 20:10:25.976 [INFO][4715] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.04836565d462825d50be9518ae85e4bbe3bdc97f7b25af2937537a79deb4aefa Feb 13 20:10:26.055339 containerd[1480]: 2025-02-13 20:10:25.998 [INFO][4715] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.17.128/26 handle="k8s-pod-network.04836565d462825d50be9518ae85e4bbe3bdc97f7b25af2937537a79deb4aefa" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:26.055339 containerd[1480]: 2025-02-13 20:10:26.013 [INFO][4715] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.17.134/26] block=192.168.17.128/26 handle="k8s-pod-network.04836565d462825d50be9518ae85e4bbe3bdc97f7b25af2937537a79deb4aefa" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:26.055339 containerd[1480]: 2025-02-13 20:10:26.013 [INFO][4715] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.17.134/26] handle="k8s-pod-network.04836565d462825d50be9518ae85e4bbe3bdc97f7b25af2937537a79deb4aefa" host="ci-4081-3-1-c-c4549fc0d2" Feb 13 20:10:26.055339 containerd[1480]: 2025-02-13 20:10:26.014 [INFO][4715] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
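
Between "Attempting to load block" and "Writing block in order to claim IPs", the allocator confirms the node's affinity, picks a free ordinal inside the /26, and persists the block before the address is handed out. A deliberately simplified toy of that idea; a real Calico block also tracks handles, attributes, and compare-and-swap datastore writes, none of which are modeled here:

    package main

    import (
        "errors"
        "fmt"
        "net"
    )

    // block is a toy stand-in for an IPAM block: a base CIDR plus a
    // used flag per ordinal (a /26 has 64 ordinals).
    type block struct {
        cidr *net.IPNet
        used [64]bool
    }

    // assign claims the lowest free ordinal and returns the resulting IP,
    // mimicking the "Attempting to assign 1 addresses from block" step.
    func (b *block) assign() (net.IP, error) {
        for ord := range b.used {
            if b.used[ord] {
                continue
            }
            b.used[ord] = true // would be persisted ("Writing block") before use
            ip := make(net.IP, len(b.cidr.IP.To4()))
            copy(ip, b.cidr.IP.To4())
            ip[3] += byte(ord)
            return ip, nil
        }
        return nil, errors.New("block exhausted")
    }

    func main() {
        _, cidr, _ := net.ParseCIDR("192.168.17.128/26")
        b := &block{cidr: cidr}
        for i := 0; i < 6; i++ { // ordinals 0-5 (.128-.133) already taken on this node
            b.used[i] = true
        }
        ip, _ := b.assign()
        fmt.Println(ip) // 192.168.17.134
    }

With the first six ordinals taken, the next claim is 192.168.17.134, matching the "Successfully claimed IPs" line above.
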
Feb 13 20:10:26.055339 containerd[1480]: 2025-02-13 20:10:26.014 [INFO][4715] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.134/26] IPv6=[] ContainerID="04836565d462825d50be9518ae85e4bbe3bdc97f7b25af2937537a79deb4aefa" HandleID="k8s-pod-network.04836565d462825d50be9518ae85e4bbe3bdc97f7b25af2937537a79deb4aefa" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--znb7c-eth0" Feb 13 20:10:26.056072 containerd[1480]: 2025-02-13 20:10:26.016 [INFO][4703] cni-plugin/k8s.go 386: Populated endpoint ContainerID="04836565d462825d50be9518ae85e4bbe3bdc97f7b25af2937537a79deb4aefa" Namespace="calico-apiserver" Pod="calico-apiserver-559fcc6975-znb7c" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--znb7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--znb7c-eth0", GenerateName:"calico-apiserver-559fcc6975-", Namespace:"calico-apiserver", SelfLink:"", UID:"d22c4ac8-b3bd-4eb9-80af-676921861f03", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"559fcc6975", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-c-c4549fc0d2", ContainerID:"", Pod:"calico-apiserver-559fcc6975-znb7c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9361c3da6b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:26.056072 containerd[1480]: 2025-02-13 20:10:26.016 [INFO][4703] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.17.134/32] ContainerID="04836565d462825d50be9518ae85e4bbe3bdc97f7b25af2937537a79deb4aefa" Namespace="calico-apiserver" Pod="calico-apiserver-559fcc6975-znb7c" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--znb7c-eth0" Feb 13 20:10:26.056072 containerd[1480]: 2025-02-13 20:10:26.016 [INFO][4703] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9361c3da6b8 ContainerID="04836565d462825d50be9518ae85e4bbe3bdc97f7b25af2937537a79deb4aefa" Namespace="calico-apiserver" Pod="calico-apiserver-559fcc6975-znb7c" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--znb7c-eth0" Feb 13 20:10:26.056072 containerd[1480]: 2025-02-13 20:10:26.023 [INFO][4703] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="04836565d462825d50be9518ae85e4bbe3bdc97f7b25af2937537a79deb4aefa" Namespace="calico-apiserver" Pod="calico-apiserver-559fcc6975-znb7c" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--znb7c-eth0" Feb 13 20:10:26.056072 containerd[1480]: 2025-02-13 20:10:26.025 [INFO][4703] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="04836565d462825d50be9518ae85e4bbe3bdc97f7b25af2937537a79deb4aefa" Namespace="calico-apiserver" Pod="calico-apiserver-559fcc6975-znb7c" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--znb7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--znb7c-eth0", GenerateName:"calico-apiserver-559fcc6975-", Namespace:"calico-apiserver", SelfLink:"", UID:"d22c4ac8-b3bd-4eb9-80af-676921861f03", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"559fcc6975", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-c-c4549fc0d2", ContainerID:"04836565d462825d50be9518ae85e4bbe3bdc97f7b25af2937537a79deb4aefa", Pod:"calico-apiserver-559fcc6975-znb7c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9361c3da6b8", MAC:"9e:d0:41:4a:66:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:26.056072 containerd[1480]: 2025-02-13 20:10:26.049 [INFO][4703] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="04836565d462825d50be9518ae85e4bbe3bdc97f7b25af2937537a79deb4aefa" Namespace="calico-apiserver" Pod="calico-apiserver-559fcc6975-znb7c" WorkloadEndpoint="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--znb7c-eth0" Feb 13 20:10:26.077937 containerd[1480]: time="2025-02-13T20:10:26.077470593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dfbrs,Uid:e49c3cb5-faf1-41f3-bccd-16c39f19a201,Namespace:calico-system,Attempt:1,} returns sandbox id \"2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f\"" Feb 13 20:10:26.098363 containerd[1480]: time="2025-02-13T20:10:26.098119150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:10:26.098493 containerd[1480]: time="2025-02-13T20:10:26.098277512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:10:26.098493 containerd[1480]: time="2025-02-13T20:10:26.098293472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:26.099966 containerd[1480]: time="2025-02-13T20:10:26.098531754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:26.130918 systemd[1]: Started cri-containerd-04836565d462825d50be9518ae85e4bbe3bdc97f7b25af2937537a79deb4aefa.scope - libcontainer container 04836565d462825d50be9518ae85e4bbe3bdc97f7b25af2937537a79deb4aefa. Feb 13 20:10:26.170652 containerd[1480]: time="2025-02-13T20:10:26.170397297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-559fcc6975-znb7c,Uid:d22c4ac8-b3bd-4eb9-80af-676921861f03,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"04836565d462825d50be9518ae85e4bbe3bdc97f7b25af2937537a79deb4aefa\"" Feb 13 20:10:26.649959 systemd-networkd[1370]: cali5d1e52d4fbb: Gained IPv6LL Feb 13 20:10:27.418006 systemd-networkd[1370]: cali9361c3da6b8: Gained IPv6LL Feb 13 20:10:27.481897 systemd-networkd[1370]: calic4cb1e5dd92: Gained IPv6LL Feb 13 20:10:27.549417 containerd[1480]: time="2025-02-13T20:10:27.549340839Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:27.551049 containerd[1480]: time="2025-02-13T20:10:27.551009618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Feb 13 20:10:27.554491 containerd[1480]: time="2025-02-13T20:10:27.554419097Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:27.560814 containerd[1480]: time="2025-02-13T20:10:27.560201484Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:27.561640 containerd[1480]: time="2025-02-13T20:10:27.561501219Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 5.677593373s" Feb 13 20:10:27.561748 containerd[1480]: time="2025-02-13T20:10:27.561639060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 20:10:27.574696 containerd[1480]: time="2025-02-13T20:10:27.574631490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 20:10:27.600390 containerd[1480]: time="2025-02-13T20:10:27.600330666Z" level=info msg="CreateContainer within sandbox \"09f3b52748a960f9994ad8328814fcd117e89f2ee6f509c753ed33652520e9e9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:10:27.630403 containerd[1480]: time="2025-02-13T20:10:27.630311691Z" level=info msg="CreateContainer within sandbox \"09f3b52748a960f9994ad8328814fcd117e89f2ee6f509c753ed33652520e9e9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"28b0ff7d0bb89800c4919f75d16cdc3971b9534f69d75dabcdd855d5f6c7d1b4\"" Feb 13 20:10:27.631789 containerd[1480]: time="2025-02-13T20:10:27.631699347Z" level=info msg="StartContainer for \"28b0ff7d0bb89800c4919f75d16cdc3971b9534f69d75dabcdd855d5f6c7d1b4\"" Feb 13 20:10:27.688020 systemd[1]: Started 
cri-containerd-28b0ff7d0bb89800c4919f75d16cdc3971b9534f69d75dabcdd855d5f6c7d1b4.scope - libcontainer container 28b0ff7d0bb89800c4919f75d16cdc3971b9534f69d75dabcdd855d5f6c7d1b4. Feb 13 20:10:27.737680 containerd[1480]: time="2025-02-13T20:10:27.737461685Z" level=info msg="StartContainer for \"28b0ff7d0bb89800c4919f75d16cdc3971b9534f69d75dabcdd855d5f6c7d1b4\" returns successfully" Feb 13 20:10:28.738870 kubelet[2695]: I0213 20:10:28.738724 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-559fcc6975-jdpl4" podStartSLOduration=27.045773278 podStartE2EDuration="32.738705067s" podCreationTimestamp="2025-02-13 20:09:56 +0000 UTC" firstStartedPulling="2025-02-13 20:10:21.881322856 +0000 UTC m=+40.616083762" lastFinishedPulling="2025-02-13 20:10:27.574254565 +0000 UTC m=+46.309015551" observedRunningTime="2025-02-13 20:10:28.732467155 +0000 UTC m=+47.467228061" watchObservedRunningTime="2025-02-13 20:10:28.738705067 +0000 UTC m=+47.473465933" Feb 13 20:10:30.531463 containerd[1480]: time="2025-02-13T20:10:30.531334974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:30.532916 containerd[1480]: time="2025-02-13T20:10:30.532424306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Feb 13 20:10:30.533896 containerd[1480]: time="2025-02-13T20:10:30.533699041Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:30.537611 containerd[1480]: time="2025-02-13T20:10:30.537486206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:30.539174 containerd[1480]: time="2025-02-13T20:10:30.538504858Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 2.963564405s" Feb 13 20:10:30.539174 containerd[1480]: time="2025-02-13T20:10:30.538574459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Feb 13 20:10:30.542330 containerd[1480]: time="2025-02-13T20:10:30.540287639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 20:10:30.581458 containerd[1480]: time="2025-02-13T20:10:30.581396080Z" level=info msg="CreateContainer within sandbox \"4e22afe887d88646c9e50880d03deb23898eee26b449ee1edcd0886974d311c9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 20:10:30.598934 containerd[1480]: time="2025-02-13T20:10:30.598848005Z" level=info msg="CreateContainer within sandbox \"4e22afe887d88646c9e50880d03deb23898eee26b449ee1edcd0886974d311c9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"839b0a3be2df9ece0cf6e728f774c1aef82d22fbf5a7b4b969c6810befb87586\"" Feb 13 20:10:30.600604 containerd[1480]: 
time="2025-02-13T20:10:30.599942938Z" level=info msg="StartContainer for \"839b0a3be2df9ece0cf6e728f774c1aef82d22fbf5a7b4b969c6810befb87586\"" Feb 13 20:10:30.645738 systemd[1]: Started cri-containerd-839b0a3be2df9ece0cf6e728f774c1aef82d22fbf5a7b4b969c6810befb87586.scope - libcontainer container 839b0a3be2df9ece0cf6e728f774c1aef82d22fbf5a7b4b969c6810befb87586. Feb 13 20:10:30.716722 containerd[1480]: time="2025-02-13T20:10:30.716007018Z" level=info msg="StartContainer for \"839b0a3be2df9ece0cf6e728f774c1aef82d22fbf5a7b4b969c6810befb87586\" returns successfully" Feb 13 20:10:31.756685 kubelet[2695]: I0213 20:10:31.755956 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d6457cb66-sszpn" podStartSLOduration=29.079685612 podStartE2EDuration="34.755933131s" podCreationTimestamp="2025-02-13 20:09:57 +0000 UTC" firstStartedPulling="2025-02-13 20:10:24.863846037 +0000 UTC m=+43.598606943" lastFinishedPulling="2025-02-13 20:10:30.540093556 +0000 UTC m=+49.274854462" observedRunningTime="2025-02-13 20:10:31.755457525 +0000 UTC m=+50.490218431" watchObservedRunningTime="2025-02-13 20:10:31.755933131 +0000 UTC m=+50.490693997" Feb 13 20:10:32.595493 containerd[1480]: time="2025-02-13T20:10:32.595433857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:32.596720 containerd[1480]: time="2025-02-13T20:10:32.596677472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 13 20:10:32.599239 containerd[1480]: time="2025-02-13T20:10:32.598276131Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:32.601843 containerd[1480]: time="2025-02-13T20:10:32.601796413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:32.602498 containerd[1480]: time="2025-02-13T20:10:32.602457861Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 2.062123061s" Feb 13 20:10:32.602498 containerd[1480]: time="2025-02-13T20:10:32.602494341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 13 20:10:32.605097 containerd[1480]: time="2025-02-13T20:10:32.604901529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:10:32.617574 containerd[1480]: time="2025-02-13T20:10:32.617505719Z" level=info msg="CreateContainer within sandbox \"2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 20:10:32.642366 containerd[1480]: time="2025-02-13T20:10:32.642196091Z" level=info msg="CreateContainer within sandbox \"2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"32293e24073447bd0d16a94d503631c80b7faa0491a329886641cead1aa0f98d\"" Feb 13 20:10:32.643520 containerd[1480]: time="2025-02-13T20:10:32.643372065Z" level=info msg="StartContainer for \"32293e24073447bd0d16a94d503631c80b7faa0491a329886641cead1aa0f98d\"" Feb 13 20:10:32.688778 systemd[1]: Started cri-containerd-32293e24073447bd0d16a94d503631c80b7faa0491a329886641cead1aa0f98d.scope - libcontainer container 32293e24073447bd0d16a94d503631c80b7faa0491a329886641cead1aa0f98d. Feb 13 20:10:32.721669 containerd[1480]: time="2025-02-13T20:10:32.721481430Z" level=info msg="StartContainer for \"32293e24073447bd0d16a94d503631c80b7faa0491a329886641cead1aa0f98d\" returns successfully" Feb 13 20:10:33.071655 containerd[1480]: time="2025-02-13T20:10:33.069456115Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:33.071655 containerd[1480]: time="2025-02-13T20:10:33.070168283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 20:10:33.075406 containerd[1480]: time="2025-02-13T20:10:33.075296384Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 468.31235ms" Feb 13 20:10:33.075657 containerd[1480]: time="2025-02-13T20:10:33.075632908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 20:10:33.078076 containerd[1480]: time="2025-02-13T20:10:33.078008816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 20:10:33.080204 containerd[1480]: time="2025-02-13T20:10:33.080105401Z" level=info msg="CreateContainer within sandbox \"04836565d462825d50be9518ae85e4bbe3bdc97f7b25af2937537a79deb4aefa\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:10:33.106497 containerd[1480]: time="2025-02-13T20:10:33.106441995Z" level=info msg="CreateContainer within sandbox \"04836565d462825d50be9518ae85e4bbe3bdc97f7b25af2937537a79deb4aefa\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c56886fb1f6ab6c03ab7ad7d8575a4f6f2a48a667f60bda64f89c9c0773d71d2\"" Feb 13 20:10:33.107292 containerd[1480]: time="2025-02-13T20:10:33.107250044Z" level=info msg="StartContainer for \"c56886fb1f6ab6c03ab7ad7d8575a4f6f2a48a667f60bda64f89c9c0773d71d2\"" Feb 13 20:10:33.147758 systemd[1]: Started cri-containerd-c56886fb1f6ab6c03ab7ad7d8575a4f6f2a48a667f60bda64f89c9c0773d71d2.scope - libcontainer container c56886fb1f6ab6c03ab7ad7d8575a4f6f2a48a667f60bda64f89c9c0773d71d2. 
Feb 13 20:10:33.186791 containerd[1480]: time="2025-02-13T20:10:33.186624109Z" level=info msg="StartContainer for \"c56886fb1f6ab6c03ab7ad7d8575a4f6f2a48a667f60bda64f89c9c0773d71d2\" returns successfully" Feb 13 20:10:33.761178 kubelet[2695]: I0213 20:10:33.761098 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-559fcc6975-znb7c" podStartSLOduration=30.856893546 podStartE2EDuration="37.761048425s" podCreationTimestamp="2025-02-13 20:09:56 +0000 UTC" firstStartedPulling="2025-02-13 20:10:26.172725524 +0000 UTC m=+44.907486390" lastFinishedPulling="2025-02-13 20:10:33.076880403 +0000 UTC m=+51.811641269" observedRunningTime="2025-02-13 20:10:33.760450098 +0000 UTC m=+52.495211004" watchObservedRunningTime="2025-02-13 20:10:33.761048425 +0000 UTC m=+52.495809331" Feb 13 20:10:34.743389 kubelet[2695]: I0213 20:10:34.743357 2695 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:10:35.969562 containerd[1480]: time="2025-02-13T20:10:35.967422089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:35.969562 containerd[1480]: time="2025-02-13T20:10:35.969086589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Feb 13 20:10:35.970333 containerd[1480]: time="2025-02-13T20:10:35.970280844Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:35.982981 containerd[1480]: time="2025-02-13T20:10:35.982912876Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 2.904851018s" Feb 13 20:10:35.983183 containerd[1480]: time="2025-02-13T20:10:35.983163399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Feb 13 20:10:35.987771 containerd[1480]: time="2025-02-13T20:10:35.987719733Z" level=info msg="CreateContainer within sandbox \"2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 20:10:35.994195 containerd[1480]: time="2025-02-13T20:10:35.994125610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:36.009771 containerd[1480]: time="2025-02-13T20:10:36.009679238Z" level=info msg="CreateContainer within sandbox \"2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"59d0ec92bb8eed8befa3de17826a514df45ee81fd219db80c28429d06175f535\"" Feb 13 20:10:36.012697 containerd[1480]: time="2025-02-13T20:10:36.010909292Z" level=info msg="StartContainer for \"59d0ec92bb8eed8befa3de17826a514df45ee81fd219db80c28429d06175f535\"" Feb 13 20:10:36.067822 
systemd[1]: Started cri-containerd-59d0ec92bb8eed8befa3de17826a514df45ee81fd219db80c28429d06175f535.scope - libcontainer container 59d0ec92bb8eed8befa3de17826a514df45ee81fd219db80c28429d06175f535. Feb 13 20:10:36.104937 containerd[1480]: time="2025-02-13T20:10:36.104857146Z" level=info msg="StartContainer for \"59d0ec92bb8eed8befa3de17826a514df45ee81fd219db80c28429d06175f535\" returns successfully" Feb 13 20:10:36.518568 kubelet[2695]: I0213 20:10:36.518478 2695 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 20:10:36.518568 kubelet[2695]: I0213 20:10:36.518574 2695 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 20:10:41.398881 containerd[1480]: time="2025-02-13T20:10:41.398796271Z" level=info msg="StopPodSandbox for \"cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63\"" Feb 13 20:10:41.506815 containerd[1480]: 2025-02-13 20:10:41.456 [WARNING][5101] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--znb7c-eth0", GenerateName:"calico-apiserver-559fcc6975-", Namespace:"calico-apiserver", SelfLink:"", UID:"d22c4ac8-b3bd-4eb9-80af-676921861f03", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"559fcc6975", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-c-c4549fc0d2", ContainerID:"04836565d462825d50be9518ae85e4bbe3bdc97f7b25af2937537a79deb4aefa", Pod:"calico-apiserver-559fcc6975-znb7c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9361c3da6b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:41.506815 containerd[1480]: 2025-02-13 20:10:41.456 [INFO][5101] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" Feb 13 20:10:41.506815 containerd[1480]: 2025-02-13 20:10:41.456 [INFO][5101] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" iface="eth0" netns="" Feb 13 20:10:41.506815 containerd[1480]: 2025-02-13 20:10:41.456 [INFO][5101] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" Feb 13 20:10:41.506815 containerd[1480]: 2025-02-13 20:10:41.456 [INFO][5101] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" Feb 13 20:10:41.506815 containerd[1480]: 2025-02-13 20:10:41.480 [INFO][5107] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" HandleID="k8s-pod-network.cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--znb7c-eth0" Feb 13 20:10:41.506815 containerd[1480]: 2025-02-13 20:10:41.481 [INFO][5107] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:41.506815 containerd[1480]: 2025-02-13 20:10:41.481 [INFO][5107] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:41.506815 containerd[1480]: 2025-02-13 20:10:41.496 [WARNING][5107] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" HandleID="k8s-pod-network.cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--znb7c-eth0" Feb 13 20:10:41.506815 containerd[1480]: 2025-02-13 20:10:41.496 [INFO][5107] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" HandleID="k8s-pod-network.cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--znb7c-eth0" Feb 13 20:10:41.506815 containerd[1480]: 2025-02-13 20:10:41.501 [INFO][5107] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:41.506815 containerd[1480]: 2025-02-13 20:10:41.505 [INFO][5101] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" Feb 13 20:10:41.507804 containerd[1480]: time="2025-02-13T20:10:41.506871242Z" level=info msg="TearDown network for sandbox \"cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63\" successfully" Feb 13 20:10:41.507804 containerd[1480]: time="2025-02-13T20:10:41.506897642Z" level=info msg="StopPodSandbox for \"cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63\" returns successfully" Feb 13 20:10:41.508359 containerd[1480]: time="2025-02-13T20:10:41.508318100Z" level=info msg="RemovePodSandbox for \"cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63\"" Feb 13 20:10:41.508359 containerd[1480]: time="2025-02-13T20:10:41.508358980Z" level=info msg="Forcibly stopping sandbox \"cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63\"" Feb 13 20:10:41.598346 containerd[1480]: 2025-02-13 20:10:41.554 [WARNING][5125] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--znb7c-eth0", GenerateName:"calico-apiserver-559fcc6975-", Namespace:"calico-apiserver", SelfLink:"", UID:"d22c4ac8-b3bd-4eb9-80af-676921861f03", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"559fcc6975", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-c-c4549fc0d2", ContainerID:"04836565d462825d50be9518ae85e4bbe3bdc97f7b25af2937537a79deb4aefa", Pod:"calico-apiserver-559fcc6975-znb7c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9361c3da6b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:41.598346 containerd[1480]: 2025-02-13 20:10:41.554 [INFO][5125] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" Feb 13 20:10:41.598346 containerd[1480]: 2025-02-13 20:10:41.554 [INFO][5125] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" iface="eth0" netns="" Feb 13 20:10:41.598346 containerd[1480]: 2025-02-13 20:10:41.554 [INFO][5125] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" Feb 13 20:10:41.598346 containerd[1480]: 2025-02-13 20:10:41.554 [INFO][5125] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" Feb 13 20:10:41.598346 containerd[1480]: 2025-02-13 20:10:41.580 [INFO][5131] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" HandleID="k8s-pod-network.cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--znb7c-eth0" Feb 13 20:10:41.598346 containerd[1480]: 2025-02-13 20:10:41.580 [INFO][5131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:41.598346 containerd[1480]: 2025-02-13 20:10:41.580 [INFO][5131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:41.598346 containerd[1480]: 2025-02-13 20:10:41.592 [WARNING][5131] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" HandleID="k8s-pod-network.cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--znb7c-eth0" Feb 13 20:10:41.598346 containerd[1480]: 2025-02-13 20:10:41.592 [INFO][5131] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" HandleID="k8s-pod-network.cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--znb7c-eth0" Feb 13 20:10:41.598346 containerd[1480]: 2025-02-13 20:10:41.595 [INFO][5131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:41.598346 containerd[1480]: 2025-02-13 20:10:41.596 [INFO][5125] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63" Feb 13 20:10:41.598877 containerd[1480]: time="2025-02-13T20:10:41.598376568Z" level=info msg="TearDown network for sandbox \"cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63\" successfully" Feb 13 20:10:41.602732 containerd[1480]: time="2025-02-13T20:10:41.602645741Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:10:41.603256 containerd[1480]: time="2025-02-13T20:10:41.602742382Z" level=info msg="RemovePodSandbox \"cfe42387fe908d1a4bf28b8e69c3b1c0253c2b6cd4e053a30e76457033c1ab63\" returns successfully" Feb 13 20:10:41.603376 containerd[1480]: time="2025-02-13T20:10:41.603353350Z" level=info msg="StopPodSandbox for \"f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931\"" Feb 13 20:10:41.714212 containerd[1480]: 2025-02-13 20:10:41.663 [WARNING][5150] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zd9bb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ecd3ea8a-4017-49b0-914a-222a63032a3d", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-c-c4549fc0d2", ContainerID:"30ce93a2a8e9c02063cf4cb4b823d2610eec9cf941beea8030185d6861a53209", Pod:"coredns-668d6bf9bc-zd9bb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8146fb1f975", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:41.714212 containerd[1480]: 2025-02-13 20:10:41.664 [INFO][5150] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" Feb 13 20:10:41.714212 containerd[1480]: 2025-02-13 20:10:41.664 [INFO][5150] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" iface="eth0" netns="" Feb 13 20:10:41.714212 containerd[1480]: 2025-02-13 20:10:41.664 [INFO][5150] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" Feb 13 20:10:41.714212 containerd[1480]: 2025-02-13 20:10:41.664 [INFO][5150] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" Feb 13 20:10:41.714212 containerd[1480]: 2025-02-13 20:10:41.694 [INFO][5158] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" HandleID="k8s-pod-network.f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zd9bb-eth0" Feb 13 20:10:41.714212 containerd[1480]: 2025-02-13 20:10:41.695 [INFO][5158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:41.714212 containerd[1480]: 2025-02-13 20:10:41.695 [INFO][5158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:10:41.714212 containerd[1480]: 2025-02-13 20:10:41.707 [WARNING][5158] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" HandleID="k8s-pod-network.f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zd9bb-eth0" Feb 13 20:10:41.714212 containerd[1480]: 2025-02-13 20:10:41.707 [INFO][5158] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" HandleID="k8s-pod-network.f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zd9bb-eth0" Feb 13 20:10:41.714212 containerd[1480]: 2025-02-13 20:10:41.710 [INFO][5158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:41.714212 containerd[1480]: 2025-02-13 20:10:41.712 [INFO][5150] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" Feb 13 20:10:41.714212 containerd[1480]: time="2025-02-13T20:10:41.713773029Z" level=info msg="TearDown network for sandbox \"f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931\" successfully" Feb 13 20:10:41.714212 containerd[1480]: time="2025-02-13T20:10:41.713799950Z" level=info msg="StopPodSandbox for \"f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931\" returns successfully" Feb 13 20:10:41.717115 containerd[1480]: time="2025-02-13T20:10:41.716713265Z" level=info msg="RemovePodSandbox for \"f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931\"" Feb 13 20:10:41.717115 containerd[1480]: time="2025-02-13T20:10:41.716772146Z" level=info msg="Forcibly stopping sandbox \"f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931\"" Feb 13 20:10:41.816029 containerd[1480]: 2025-02-13 20:10:41.767 [WARNING][5177] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zd9bb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ecd3ea8a-4017-49b0-914a-222a63032a3d", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-c-c4549fc0d2", ContainerID:"30ce93a2a8e9c02063cf4cb4b823d2610eec9cf941beea8030185d6861a53209", Pod:"coredns-668d6bf9bc-zd9bb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8146fb1f975", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:41.816029 containerd[1480]: 2025-02-13 20:10:41.767 [INFO][5177] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" Feb 13 20:10:41.816029 containerd[1480]: 2025-02-13 20:10:41.767 [INFO][5177] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" iface="eth0" netns="" Feb 13 20:10:41.816029 containerd[1480]: 2025-02-13 20:10:41.768 [INFO][5177] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" Feb 13 20:10:41.816029 containerd[1480]: 2025-02-13 20:10:41.768 [INFO][5177] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" Feb 13 20:10:41.816029 containerd[1480]: 2025-02-13 20:10:41.792 [INFO][5183] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" HandleID="k8s-pod-network.f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zd9bb-eth0" Feb 13 20:10:41.816029 containerd[1480]: 2025-02-13 20:10:41.792 [INFO][5183] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:41.816029 containerd[1480]: 2025-02-13 20:10:41.792 [INFO][5183] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:10:41.816029 containerd[1480]: 2025-02-13 20:10:41.807 [WARNING][5183] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" HandleID="k8s-pod-network.f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zd9bb-eth0" Feb 13 20:10:41.816029 containerd[1480]: 2025-02-13 20:10:41.807 [INFO][5183] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" HandleID="k8s-pod-network.f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zd9bb-eth0" Feb 13 20:10:41.816029 containerd[1480]: 2025-02-13 20:10:41.811 [INFO][5183] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:41.816029 containerd[1480]: 2025-02-13 20:10:41.813 [INFO][5177] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931" Feb 13 20:10:41.816029 containerd[1480]: time="2025-02-13T20:10:41.815885726Z" level=info msg="TearDown network for sandbox \"f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931\" successfully" Feb 13 20:10:41.823973 containerd[1480]: time="2025-02-13T20:10:41.823906425Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:10:41.824112 containerd[1480]: time="2025-02-13T20:10:41.823991306Z" level=info msg="RemovePodSandbox \"f53d9079a007e35392af851bf9f4f6a080d6951abf5bebc1f0356128d96a3931\" returns successfully" Feb 13 20:10:41.825234 containerd[1480]: time="2025-02-13T20:10:41.824705075Z" level=info msg="StopPodSandbox for \"14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee\"" Feb 13 20:10:41.926926 containerd[1480]: 2025-02-13 20:10:41.877 [WARNING][5201] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zppg6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c052e617-ee7a-4d95-8541-47323a0ca995", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-c-c4549fc0d2", ContainerID:"48de3b79b5329a65a49806f142196e6d63248e658c565222983406880d90c93e", Pod:"coredns-668d6bf9bc-zppg6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66063bd2ff7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:41.926926 containerd[1480]: 2025-02-13 20:10:41.877 [INFO][5201] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" Feb 13 20:10:41.926926 containerd[1480]: 2025-02-13 20:10:41.877 [INFO][5201] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" iface="eth0" netns="" Feb 13 20:10:41.926926 containerd[1480]: 2025-02-13 20:10:41.877 [INFO][5201] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" Feb 13 20:10:41.926926 containerd[1480]: 2025-02-13 20:10:41.877 [INFO][5201] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" Feb 13 20:10:41.926926 containerd[1480]: 2025-02-13 20:10:41.899 [INFO][5207] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" HandleID="k8s-pod-network.14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zppg6-eth0" Feb 13 20:10:41.926926 containerd[1480]: 2025-02-13 20:10:41.899 [INFO][5207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:41.926926 containerd[1480]: 2025-02-13 20:10:41.899 [INFO][5207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:10:41.926926 containerd[1480]: 2025-02-13 20:10:41.920 [WARNING][5207] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" HandleID="k8s-pod-network.14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zppg6-eth0" Feb 13 20:10:41.926926 containerd[1480]: 2025-02-13 20:10:41.920 [INFO][5207] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" HandleID="k8s-pod-network.14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zppg6-eth0" Feb 13 20:10:41.926926 containerd[1480]: 2025-02-13 20:10:41.922 [INFO][5207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:41.926926 containerd[1480]: 2025-02-13 20:10:41.924 [INFO][5201] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" Feb 13 20:10:41.928135 containerd[1480]: time="2025-02-13T20:10:41.927701383Z" level=info msg="TearDown network for sandbox \"14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee\" successfully" Feb 13 20:10:41.928135 containerd[1480]: time="2025-02-13T20:10:41.927746664Z" level=info msg="StopPodSandbox for \"14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee\" returns successfully" Feb 13 20:10:41.930205 containerd[1480]: time="2025-02-13T20:10:41.930104853Z" level=info msg="RemovePodSandbox for \"14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee\"" Feb 13 20:10:41.930205 containerd[1480]: time="2025-02-13T20:10:41.930197694Z" level=info msg="Forcibly stopping sandbox \"14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee\"" Feb 13 20:10:42.039387 containerd[1480]: 2025-02-13 20:10:41.984 [WARNING][5226] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zppg6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c052e617-ee7a-4d95-8541-47323a0ca995", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-c-c4549fc0d2", ContainerID:"48de3b79b5329a65a49806f142196e6d63248e658c565222983406880d90c93e", Pod:"coredns-668d6bf9bc-zppg6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66063bd2ff7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:42.039387 containerd[1480]: 2025-02-13 20:10:41.984 [INFO][5226] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" Feb 13 20:10:42.039387 containerd[1480]: 2025-02-13 20:10:41.984 [INFO][5226] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" iface="eth0" netns="" Feb 13 20:10:42.039387 containerd[1480]: 2025-02-13 20:10:41.984 [INFO][5226] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" Feb 13 20:10:42.039387 containerd[1480]: 2025-02-13 20:10:41.984 [INFO][5226] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" Feb 13 20:10:42.039387 containerd[1480]: 2025-02-13 20:10:42.011 [INFO][5233] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" HandleID="k8s-pod-network.14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zppg6-eth0" Feb 13 20:10:42.039387 containerd[1480]: 2025-02-13 20:10:42.011 [INFO][5233] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:42.039387 containerd[1480]: 2025-02-13 20:10:42.011 [INFO][5233] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:10:42.039387 containerd[1480]: 2025-02-13 20:10:42.026 [WARNING][5233] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" HandleID="k8s-pod-network.14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zppg6-eth0" Feb 13 20:10:42.039387 containerd[1480]: 2025-02-13 20:10:42.026 [INFO][5233] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" HandleID="k8s-pod-network.14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-coredns--668d6bf9bc--zppg6-eth0" Feb 13 20:10:42.039387 containerd[1480]: 2025-02-13 20:10:42.035 [INFO][5233] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:42.039387 containerd[1480]: 2025-02-13 20:10:42.037 [INFO][5226] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee" Feb 13 20:10:42.040118 containerd[1480]: time="2025-02-13T20:10:42.039662923Z" level=info msg="TearDown network for sandbox \"14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee\" successfully" Feb 13 20:10:42.045331 containerd[1480]: time="2025-02-13T20:10:42.045105030Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:10:42.045331 containerd[1480]: time="2025-02-13T20:10:42.045212952Z" level=info msg="RemovePodSandbox \"14220a1bc53a660694a160394842737a05f9fc083987d99ae7cdeb707a5c7bee\" returns successfully" Feb 13 20:10:42.046258 containerd[1480]: time="2025-02-13T20:10:42.046052762Z" level=info msg="StopPodSandbox for \"fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf\"" Feb 13 20:10:42.137411 containerd[1480]: 2025-02-13 20:10:42.092 [WARNING][5252] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--jdpl4-eth0", GenerateName:"calico-apiserver-559fcc6975-", Namespace:"calico-apiserver", SelfLink:"", UID:"ec425091-4bab-4f95-b458-e02d7376e8e9", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"559fcc6975", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-c-c4549fc0d2", ContainerID:"09f3b52748a960f9994ad8328814fcd117e89f2ee6f509c753ed33652520e9e9", Pod:"calico-apiserver-559fcc6975-jdpl4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ab9f708336", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:42.137411 containerd[1480]: 2025-02-13 20:10:42.093 [INFO][5252] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" Feb 13 20:10:42.137411 containerd[1480]: 2025-02-13 20:10:42.093 [INFO][5252] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" iface="eth0" netns="" Feb 13 20:10:42.137411 containerd[1480]: 2025-02-13 20:10:42.093 [INFO][5252] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" Feb 13 20:10:42.137411 containerd[1480]: 2025-02-13 20:10:42.093 [INFO][5252] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" Feb 13 20:10:42.137411 containerd[1480]: 2025-02-13 20:10:42.119 [INFO][5259] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" HandleID="k8s-pod-network.fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--jdpl4-eth0" Feb 13 20:10:42.137411 containerd[1480]: 2025-02-13 20:10:42.119 [INFO][5259] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:42.137411 containerd[1480]: 2025-02-13 20:10:42.119 [INFO][5259] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:42.137411 containerd[1480]: 2025-02-13 20:10:42.131 [WARNING][5259] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" HandleID="k8s-pod-network.fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--jdpl4-eth0" Feb 13 20:10:42.137411 containerd[1480]: 2025-02-13 20:10:42.131 [INFO][5259] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" HandleID="k8s-pod-network.fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--jdpl4-eth0" Feb 13 20:10:42.137411 containerd[1480]: 2025-02-13 20:10:42.134 [INFO][5259] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:42.137411 containerd[1480]: 2025-02-13 20:10:42.135 [INFO][5252] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" Feb 13 20:10:42.137411 containerd[1480]: time="2025-02-13T20:10:42.137334570Z" level=info msg="TearDown network for sandbox \"fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf\" successfully" Feb 13 20:10:42.137411 containerd[1480]: time="2025-02-13T20:10:42.137365490Z" level=info msg="StopPodSandbox for \"fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf\" returns successfully" Feb 13 20:10:42.138775 containerd[1480]: time="2025-02-13T20:10:42.138723107Z" level=info msg="RemovePodSandbox for \"fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf\"" Feb 13 20:10:42.138775 containerd[1480]: time="2025-02-13T20:10:42.138765468Z" level=info msg="Forcibly stopping sandbox \"fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf\"" Feb 13 20:10:42.231846 containerd[1480]: 2025-02-13 20:10:42.190 [WARNING][5277] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--jdpl4-eth0", GenerateName:"calico-apiserver-559fcc6975-", Namespace:"calico-apiserver", SelfLink:"", UID:"ec425091-4bab-4f95-b458-e02d7376e8e9", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"559fcc6975", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-c-c4549fc0d2", ContainerID:"09f3b52748a960f9994ad8328814fcd117e89f2ee6f509c753ed33652520e9e9", Pod:"calico-apiserver-559fcc6975-jdpl4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ab9f708336", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:42.231846 containerd[1480]: 2025-02-13 20:10:42.190 [INFO][5277] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" Feb 13 20:10:42.231846 containerd[1480]: 2025-02-13 20:10:42.190 [INFO][5277] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" iface="eth0" netns="" Feb 13 20:10:42.231846 containerd[1480]: 2025-02-13 20:10:42.190 [INFO][5277] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" Feb 13 20:10:42.231846 containerd[1480]: 2025-02-13 20:10:42.190 [INFO][5277] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" Feb 13 20:10:42.231846 containerd[1480]: 2025-02-13 20:10:42.214 [INFO][5283] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" HandleID="k8s-pod-network.fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--jdpl4-eth0" Feb 13 20:10:42.231846 containerd[1480]: 2025-02-13 20:10:42.214 [INFO][5283] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:42.231846 containerd[1480]: 2025-02-13 20:10:42.214 [INFO][5283] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:42.231846 containerd[1480]: 2025-02-13 20:10:42.224 [WARNING][5283] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" HandleID="k8s-pod-network.fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--jdpl4-eth0" Feb 13 20:10:42.231846 containerd[1480]: 2025-02-13 20:10:42.224 [INFO][5283] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" HandleID="k8s-pod-network.fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--apiserver--559fcc6975--jdpl4-eth0" Feb 13 20:10:42.231846 containerd[1480]: 2025-02-13 20:10:42.227 [INFO][5283] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:42.231846 containerd[1480]: 2025-02-13 20:10:42.229 [INFO][5277] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf" Feb 13 20:10:42.231846 containerd[1480]: time="2025-02-13T20:10:42.231037328Z" level=info msg="TearDown network for sandbox \"fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf\" successfully" Feb 13 20:10:42.237782 containerd[1480]: time="2025-02-13T20:10:42.237727330Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:10:42.238109 containerd[1480]: time="2025-02-13T20:10:42.237988014Z" level=info msg="RemovePodSandbox \"fcd34723f82845f1a922898eb8945fa178c8a61a4c2766370c508ef82bb1aebf\" returns successfully" Feb 13 20:10:42.238990 containerd[1480]: time="2025-02-13T20:10:42.238604101Z" level=info msg="StopPodSandbox for \"d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc\"" Feb 13 20:10:42.335674 containerd[1480]: 2025-02-13 20:10:42.287 [WARNING][5301] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--c--c4549fc0d2-k8s-csi--node--driver--dfbrs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e49c3cb5-faf1-41f3-bccd-16c39f19a201", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-c-c4549fc0d2", ContainerID:"2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f", Pod:"csi-node-driver-dfbrs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic4cb1e5dd92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:42.335674 containerd[1480]: 2025-02-13 20:10:42.287 [INFO][5301] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" Feb 13 20:10:42.335674 containerd[1480]: 2025-02-13 20:10:42.287 [INFO][5301] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" iface="eth0" netns="" Feb 13 20:10:42.335674 containerd[1480]: 2025-02-13 20:10:42.287 [INFO][5301] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" Feb 13 20:10:42.335674 containerd[1480]: 2025-02-13 20:10:42.287 [INFO][5301] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" Feb 13 20:10:42.335674 containerd[1480]: 2025-02-13 20:10:42.314 [INFO][5307] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" HandleID="k8s-pod-network.d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-csi--node--driver--dfbrs-eth0" Feb 13 20:10:42.335674 containerd[1480]: 2025-02-13 20:10:42.314 [INFO][5307] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:42.335674 containerd[1480]: 2025-02-13 20:10:42.314 [INFO][5307] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:42.335674 containerd[1480]: 2025-02-13 20:10:42.328 [WARNING][5307] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" HandleID="k8s-pod-network.d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-csi--node--driver--dfbrs-eth0" Feb 13 20:10:42.335674 containerd[1480]: 2025-02-13 20:10:42.328 [INFO][5307] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" HandleID="k8s-pod-network.d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-csi--node--driver--dfbrs-eth0" Feb 13 20:10:42.335674 containerd[1480]: 2025-02-13 20:10:42.331 [INFO][5307] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:42.335674 containerd[1480]: 2025-02-13 20:10:42.333 [INFO][5301] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" Feb 13 20:10:42.335674 containerd[1480]: time="2025-02-13T20:10:42.335613020Z" level=info msg="TearDown network for sandbox \"d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc\" successfully" Feb 13 20:10:42.336143 containerd[1480]: time="2025-02-13T20:10:42.335684461Z" level=info msg="StopPodSandbox for \"d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc\" returns successfully" Feb 13 20:10:42.338370 containerd[1480]: time="2025-02-13T20:10:42.336878556Z" level=info msg="RemovePodSandbox for \"d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc\"" Feb 13 20:10:42.338370 containerd[1480]: time="2025-02-13T20:10:42.337140519Z" level=info msg="Forcibly stopping sandbox \"d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc\"" Feb 13 20:10:42.427327 containerd[1480]: 2025-02-13 20:10:42.385 [WARNING][5326] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--c--c4549fc0d2-k8s-csi--node--driver--dfbrs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e49c3cb5-faf1-41f3-bccd-16c39f19a201", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-c-c4549fc0d2", ContainerID:"2f48c8f08460c11f38132a5d28652a7f458c2ba3b0d3713dd2d697ff9e77dd9f", Pod:"csi-node-driver-dfbrs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic4cb1e5dd92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:42.427327 containerd[1480]: 2025-02-13 20:10:42.385 [INFO][5326] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" Feb 13 20:10:42.427327 containerd[1480]: 2025-02-13 20:10:42.385 [INFO][5326] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" iface="eth0" netns="" Feb 13 20:10:42.427327 containerd[1480]: 2025-02-13 20:10:42.385 [INFO][5326] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" Feb 13 20:10:42.427327 containerd[1480]: 2025-02-13 20:10:42.385 [INFO][5326] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" Feb 13 20:10:42.427327 containerd[1480]: 2025-02-13 20:10:42.408 [INFO][5333] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" HandleID="k8s-pod-network.d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-csi--node--driver--dfbrs-eth0" Feb 13 20:10:42.427327 containerd[1480]: 2025-02-13 20:10:42.408 [INFO][5333] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:42.427327 containerd[1480]: 2025-02-13 20:10:42.408 [INFO][5333] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:42.427327 containerd[1480]: 2025-02-13 20:10:42.420 [WARNING][5333] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" HandleID="k8s-pod-network.d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-csi--node--driver--dfbrs-eth0" Feb 13 20:10:42.427327 containerd[1480]: 2025-02-13 20:10:42.420 [INFO][5333] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" HandleID="k8s-pod-network.d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-csi--node--driver--dfbrs-eth0" Feb 13 20:10:42.427327 containerd[1480]: 2025-02-13 20:10:42.423 [INFO][5333] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:42.427327 containerd[1480]: 2025-02-13 20:10:42.425 [INFO][5326] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc" Feb 13 20:10:42.427327 containerd[1480]: time="2025-02-13T20:10:42.426775026Z" level=info msg="TearDown network for sandbox \"d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc\" successfully" Feb 13 20:10:42.432032 containerd[1480]: time="2025-02-13T20:10:42.431982971Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:10:42.432693 containerd[1480]: time="2025-02-13T20:10:42.432231894Z" level=info msg="RemovePodSandbox \"d7b8250be3d4fa2e6d3a1a21f20c8a6fd4c4b00600a0c820d48b42531e7451cc\" returns successfully" Feb 13 20:10:42.432876 containerd[1480]: time="2025-02-13T20:10:42.432835701Z" level=info msg="StopPodSandbox for \"2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1\"" Feb 13 20:10:42.533423 containerd[1480]: 2025-02-13 20:10:42.482 [WARNING][5352] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--c--c4549fc0d2-k8s-calico--kube--controllers--5d6457cb66--sszpn-eth0", GenerateName:"calico-kube-controllers-5d6457cb66-", Namespace:"calico-system", SelfLink:"", UID:"8ccd9614-bdb8-4f2a-8ecd-86b9a3d3d437", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d6457cb66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-c-c4549fc0d2", ContainerID:"4e22afe887d88646c9e50880d03deb23898eee26b449ee1edcd0886974d311c9", Pod:"calico-kube-controllers-5d6457cb66-sszpn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5d1e52d4fbb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:42.533423 containerd[1480]: 2025-02-13 20:10:42.483 [INFO][5352] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" Feb 13 20:10:42.533423 containerd[1480]: 2025-02-13 20:10:42.483 [INFO][5352] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" iface="eth0" netns="" Feb 13 20:10:42.533423 containerd[1480]: 2025-02-13 20:10:42.483 [INFO][5352] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" Feb 13 20:10:42.533423 containerd[1480]: 2025-02-13 20:10:42.483 [INFO][5352] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" Feb 13 20:10:42.533423 containerd[1480]: 2025-02-13 20:10:42.511 [INFO][5358] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" HandleID="k8s-pod-network.2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--kube--controllers--5d6457cb66--sszpn-eth0" Feb 13 20:10:42.533423 containerd[1480]: 2025-02-13 20:10:42.512 [INFO][5358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:42.533423 containerd[1480]: 2025-02-13 20:10:42.512 [INFO][5358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:42.533423 containerd[1480]: 2025-02-13 20:10:42.525 [WARNING][5358] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" HandleID="k8s-pod-network.2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--kube--controllers--5d6457cb66--sszpn-eth0" Feb 13 20:10:42.533423 containerd[1480]: 2025-02-13 20:10:42.525 [INFO][5358] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" HandleID="k8s-pod-network.2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--kube--controllers--5d6457cb66--sszpn-eth0" Feb 13 20:10:42.533423 containerd[1480]: 2025-02-13 20:10:42.528 [INFO][5358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:42.533423 containerd[1480]: 2025-02-13 20:10:42.531 [INFO][5352] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" Feb 13 20:10:42.534408 containerd[1480]: time="2025-02-13T20:10:42.533486265Z" level=info msg="TearDown network for sandbox \"2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1\" successfully" Feb 13 20:10:42.534408 containerd[1480]: time="2025-02-13T20:10:42.533515345Z" level=info msg="StopPodSandbox for \"2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1\" returns successfully" Feb 13 20:10:42.534408 containerd[1480]: time="2025-02-13T20:10:42.534140633Z" level=info msg="RemovePodSandbox for \"2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1\"" Feb 13 20:10:42.534408 containerd[1480]: time="2025-02-13T20:10:42.534181433Z" level=info msg="Forcibly stopping sandbox \"2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1\"" Feb 13 20:10:42.631885 containerd[1480]: 2025-02-13 20:10:42.583 [WARNING][5376] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--c--c4549fc0d2-k8s-calico--kube--controllers--5d6457cb66--sszpn-eth0", GenerateName:"calico-kube-controllers-5d6457cb66-", Namespace:"calico-system", SelfLink:"", UID:"8ccd9614-bdb8-4f2a-8ecd-86b9a3d3d437", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d6457cb66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-c-c4549fc0d2", ContainerID:"4e22afe887d88646c9e50880d03deb23898eee26b449ee1edcd0886974d311c9", Pod:"calico-kube-controllers-5d6457cb66-sszpn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5d1e52d4fbb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:42.631885 containerd[1480]: 2025-02-13 20:10:42.583 [INFO][5376] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" Feb 13 20:10:42.631885 containerd[1480]: 2025-02-13 20:10:42.583 [INFO][5376] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" iface="eth0" netns="" Feb 13 20:10:42.631885 containerd[1480]: 2025-02-13 20:10:42.583 [INFO][5376] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" Feb 13 20:10:42.631885 containerd[1480]: 2025-02-13 20:10:42.583 [INFO][5376] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" Feb 13 20:10:42.631885 containerd[1480]: 2025-02-13 20:10:42.609 [INFO][5383] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" HandleID="k8s-pod-network.2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--kube--controllers--5d6457cb66--sszpn-eth0" Feb 13 20:10:42.631885 containerd[1480]: 2025-02-13 20:10:42.609 [INFO][5383] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:42.631885 containerd[1480]: 2025-02-13 20:10:42.609 [INFO][5383] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:42.631885 containerd[1480]: 2025-02-13 20:10:42.625 [WARNING][5383] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" HandleID="k8s-pod-network.2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--kube--controllers--5d6457cb66--sszpn-eth0" Feb 13 20:10:42.631885 containerd[1480]: 2025-02-13 20:10:42.625 [INFO][5383] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" HandleID="k8s-pod-network.2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" Workload="ci--4081--3--1--c--c4549fc0d2-k8s-calico--kube--controllers--5d6457cb66--sszpn-eth0" Feb 13 20:10:42.631885 containerd[1480]: 2025-02-13 20:10:42.628 [INFO][5383] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:42.631885 containerd[1480]: 2025-02-13 20:10:42.629 [INFO][5376] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1" Feb 13 20:10:42.632311 containerd[1480]: time="2025-02-13T20:10:42.631869841Z" level=info msg="TearDown network for sandbox \"2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1\" successfully" Feb 13 20:10:42.637478 containerd[1480]: time="2025-02-13T20:10:42.637275747Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:10:42.637478 containerd[1480]: time="2025-02-13T20:10:42.637410069Z" level=info msg="RemovePodSandbox \"2a9c4f66fc3ea98d1b5c2725e9e04e9c317b4cdeaace6149262282af86fa9ec1\" returns successfully" Feb 13 20:10:50.720033 kubelet[2695]: I0213 20:10:50.719848 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dfbrs" podStartSLOduration=43.818429494 podStartE2EDuration="53.71982669s" podCreationTimestamp="2025-02-13 20:09:57 +0000 UTC" firstStartedPulling="2025-02-13 20:10:26.082496171 +0000 UTC m=+44.817257077" lastFinishedPulling="2025-02-13 20:10:35.983893367 +0000 UTC m=+54.718654273" observedRunningTime="2025-02-13 20:10:36.781554472 +0000 UTC m=+55.516315378" watchObservedRunningTime="2025-02-13 20:10:50.71982669 +0000 UTC m=+69.454587596" Feb 13 20:10:59.963712 systemd[1]: run-containerd-runc-k8s.io-839b0a3be2df9ece0cf6e728f774c1aef82d22fbf5a7b4b969c6810befb87586-runc.Tj9m7M.mount: Deactivated successfully. Feb 13 20:11:01.784132 systemd[1]: run-containerd-runc-k8s.io-839b0a3be2df9ece0cf6e728f774c1aef82d22fbf5a7b4b969c6810befb87586-runc.uP0jI6.mount: Deactivated successfully. Feb 13 20:11:09.089949 kubelet[2695]: I0213 20:11:09.089004 2695 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:11:31.757427 systemd[1]: run-containerd-runc-k8s.io-839b0a3be2df9ece0cf6e728f774c1aef82d22fbf5a7b4b969c6810befb87586-runc.wRPEjw.mount: Deactivated successfully. Feb 13 20:11:59.956529 systemd[1]: run-containerd-runc-k8s.io-839b0a3be2df9ece0cf6e728f774c1aef82d22fbf5a7b4b969c6810befb87586-runc.3j3BrT.mount: Deactivated successfully. Feb 13 20:13:31.755160 systemd[1]: run-containerd-runc-k8s.io-839b0a3be2df9ece0cf6e728f774c1aef82d22fbf5a7b4b969c6810befb87586-runc.hKXM3q.mount: Deactivated successfully. Feb 13 20:14:31.754870 systemd[1]: run-containerd-runc-k8s.io-839b0a3be2df9ece0cf6e728f774c1aef82d22fbf5a7b4b969c6810befb87586-runc.rIiEEK.mount: Deactivated successfully. 
Feb 13 20:14:32.724067 systemd[1]: Started sshd@7-78.47.136.246:22-147.75.109.163:51946.service - OpenSSH per-connection server daemon (147.75.109.163:51946). Feb 13 20:14:33.704080 sshd[5905]: Accepted publickey for core from 147.75.109.163 port 51946 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:14:33.705927 sshd[5905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:14:33.712348 systemd-logind[1456]: New session 8 of user core. Feb 13 20:14:33.718826 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 20:14:34.486127 sshd[5905]: pam_unix(sshd:session): session closed for user core Feb 13 20:14:34.492164 systemd[1]: sshd@7-78.47.136.246:22-147.75.109.163:51946.service: Deactivated successfully. Feb 13 20:14:34.496693 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 20:14:34.501240 systemd-logind[1456]: Session 8 logged out. Waiting for processes to exit. Feb 13 20:14:34.503248 systemd-logind[1456]: Removed session 8. Feb 13 20:14:39.663979 systemd[1]: Started sshd@8-78.47.136.246:22-147.75.109.163:38468.service - OpenSSH per-connection server daemon (147.75.109.163:38468). Feb 13 20:14:40.660355 sshd[5923]: Accepted publickey for core from 147.75.109.163 port 38468 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:14:40.663005 sshd[5923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:14:40.668966 systemd-logind[1456]: New session 9 of user core. Feb 13 20:14:40.677902 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 20:14:41.424910 sshd[5923]: pam_unix(sshd:session): session closed for user core Feb 13 20:14:41.431110 systemd[1]: sshd@8-78.47.136.246:22-147.75.109.163:38468.service: Deactivated successfully. Feb 13 20:14:41.438968 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 20:14:41.444862 systemd-logind[1456]: Session 9 logged out. Waiting for processes to exit. Feb 13 20:14:41.446435 systemd-logind[1456]: Removed session 9. Feb 13 20:14:46.601950 systemd[1]: Started sshd@9-78.47.136.246:22-147.75.109.163:38470.service - OpenSSH per-connection server daemon (147.75.109.163:38470). Feb 13 20:14:47.596876 sshd[5943]: Accepted publickey for core from 147.75.109.163 port 38470 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:14:47.598967 sshd[5943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:14:47.606045 systemd-logind[1456]: New session 10 of user core. Feb 13 20:14:47.612973 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 20:14:48.368148 sshd[5943]: pam_unix(sshd:session): session closed for user core Feb 13 20:14:48.375883 systemd[1]: sshd@9-78.47.136.246:22-147.75.109.163:38470.service: Deactivated successfully. Feb 13 20:14:48.382210 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 20:14:48.383291 systemd-logind[1456]: Session 10 logged out. Waiting for processes to exit. Feb 13 20:14:48.384369 systemd-logind[1456]: Removed session 10. Feb 13 20:14:48.545861 systemd[1]: Started sshd@10-78.47.136.246:22-147.75.109.163:38474.service - OpenSSH per-connection server daemon (147.75.109.163:38474). 
Feb 13 20:14:49.521555 sshd[5960]: Accepted publickey for core from 147.75.109.163 port 38474 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:14:49.522245 sshd[5960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:14:49.530229 systemd-logind[1456]: New session 11 of user core. Feb 13 20:14:49.535895 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 20:14:50.321003 sshd[5960]: pam_unix(sshd:session): session closed for user core Feb 13 20:14:50.325503 systemd[1]: sshd@10-78.47.136.246:22-147.75.109.163:38474.service: Deactivated successfully. Feb 13 20:14:50.328721 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 20:14:50.331503 systemd-logind[1456]: Session 11 logged out. Waiting for processes to exit. Feb 13 20:14:50.334404 systemd-logind[1456]: Removed session 11. Feb 13 20:14:50.504114 systemd[1]: Started sshd@11-78.47.136.246:22-147.75.109.163:47038.service - OpenSSH per-connection server daemon (147.75.109.163:47038). Feb 13 20:14:51.495593 sshd[5972]: Accepted publickey for core from 147.75.109.163 port 47038 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:14:51.497802 sshd[5972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:14:51.503071 systemd-logind[1456]: New session 12 of user core. Feb 13 20:14:51.511823 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 20:14:52.254736 sshd[5972]: pam_unix(sshd:session): session closed for user core Feb 13 20:14:52.260750 systemd[1]: sshd@11-78.47.136.246:22-147.75.109.163:47038.service: Deactivated successfully. Feb 13 20:14:52.263062 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 20:14:52.264629 systemd-logind[1456]: Session 12 logged out. Waiting for processes to exit. Feb 13 20:14:52.265649 systemd-logind[1456]: Removed session 12. Feb 13 20:14:57.431921 systemd[1]: Started sshd@12-78.47.136.246:22-147.75.109.163:47052.service - OpenSSH per-connection server daemon (147.75.109.163:47052). Feb 13 20:14:58.408170 sshd[6008]: Accepted publickey for core from 147.75.109.163 port 47052 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:14:58.411201 sshd[6008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:14:58.418830 systemd-logind[1456]: New session 13 of user core. Feb 13 20:14:58.426942 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 20:14:59.185870 sshd[6008]: pam_unix(sshd:session): session closed for user core Feb 13 20:14:59.191787 systemd[1]: sshd@12-78.47.136.246:22-147.75.109.163:47052.service: Deactivated successfully. Feb 13 20:14:59.196071 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 20:14:59.197698 systemd-logind[1456]: Session 13 logged out. Waiting for processes to exit. Feb 13 20:14:59.200651 systemd-logind[1456]: Removed session 13. Feb 13 20:14:59.361096 systemd[1]: Started sshd@13-78.47.136.246:22-147.75.109.163:47064.service - OpenSSH per-connection server daemon (147.75.109.163:47064). Feb 13 20:15:00.343717 sshd[6021]: Accepted publickey for core from 147.75.109.163 port 47064 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:15:00.344829 sshd[6021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:15:00.352900 systemd-logind[1456]: New session 14 of user core. Feb 13 20:15:00.359932 systemd[1]: Started session-14.scope - Session 14 of User core. 
Feb 13 20:15:01.222223 sshd[6021]: pam_unix(sshd:session): session closed for user core Feb 13 20:15:01.227239 systemd[1]: sshd@13-78.47.136.246:22-147.75.109.163:47064.service: Deactivated successfully. Feb 13 20:15:01.229788 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 20:15:01.232546 systemd-logind[1456]: Session 14 logged out. Waiting for processes to exit. Feb 13 20:15:01.234460 systemd-logind[1456]: Removed session 14. Feb 13 20:15:01.399020 systemd[1]: Started sshd@14-78.47.136.246:22-147.75.109.163:48936.service - OpenSSH per-connection server daemon (147.75.109.163:48936). Feb 13 20:15:02.389729 sshd[6052]: Accepted publickey for core from 147.75.109.163 port 48936 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:15:02.391837 sshd[6052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:15:02.399871 systemd-logind[1456]: New session 15 of user core. Feb 13 20:15:02.410432 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 20:15:03.911004 sshd[6052]: pam_unix(sshd:session): session closed for user core Feb 13 20:15:03.916961 systemd[1]: sshd@14-78.47.136.246:22-147.75.109.163:48936.service: Deactivated successfully. Feb 13 20:15:03.920037 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 20:15:03.921943 systemd-logind[1456]: Session 15 logged out. Waiting for processes to exit. Feb 13 20:15:03.923179 systemd-logind[1456]: Removed session 15. Feb 13 20:15:04.092212 systemd[1]: Started sshd@15-78.47.136.246:22-147.75.109.163:48944.service - OpenSSH per-connection server daemon (147.75.109.163:48944). Feb 13 20:15:05.083258 sshd[6089]: Accepted publickey for core from 147.75.109.163 port 48944 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:15:05.084037 sshd[6089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:15:05.092815 systemd-logind[1456]: New session 16 of user core. Feb 13 20:15:05.099766 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 20:15:05.987981 sshd[6089]: pam_unix(sshd:session): session closed for user core Feb 13 20:15:05.994627 systemd[1]: sshd@15-78.47.136.246:22-147.75.109.163:48944.service: Deactivated successfully. Feb 13 20:15:05.998789 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 20:15:06.002828 systemd-logind[1456]: Session 16 logged out. Waiting for processes to exit. Feb 13 20:15:06.005770 systemd-logind[1456]: Removed session 16. Feb 13 20:15:06.166119 systemd[1]: Started sshd@16-78.47.136.246:22-147.75.109.163:48954.service - OpenSSH per-connection server daemon (147.75.109.163:48954). Feb 13 20:15:07.145602 sshd[6100]: Accepted publickey for core from 147.75.109.163 port 48954 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:15:07.149006 sshd[6100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:15:07.156244 systemd-logind[1456]: New session 17 of user core. Feb 13 20:15:07.163326 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 20:15:07.902016 sshd[6100]: pam_unix(sshd:session): session closed for user core Feb 13 20:15:07.907162 systemd-logind[1456]: Session 17 logged out. Waiting for processes to exit. Feb 13 20:15:07.909599 systemd[1]: sshd@16-78.47.136.246:22-147.75.109.163:48954.service: Deactivated successfully. Feb 13 20:15:07.913235 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 20:15:07.915972 systemd-logind[1456]: Removed session 17. 
Feb 13 20:15:13.075335 systemd[1]: Started sshd@17-78.47.136.246:22-147.75.109.163:56328.service - OpenSSH per-connection server daemon (147.75.109.163:56328). Feb 13 20:15:14.066971 sshd[6119]: Accepted publickey for core from 147.75.109.163 port 56328 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:15:14.069457 sshd[6119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:15:14.076330 systemd-logind[1456]: New session 18 of user core. Feb 13 20:15:14.083153 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 20:15:14.816219 sshd[6119]: pam_unix(sshd:session): session closed for user core Feb 13 20:15:14.822612 systemd[1]: sshd@17-78.47.136.246:22-147.75.109.163:56328.service: Deactivated successfully. Feb 13 20:15:14.825333 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 20:15:14.829671 systemd-logind[1456]: Session 18 logged out. Waiting for processes to exit. Feb 13 20:15:14.831988 systemd-logind[1456]: Removed session 18. Feb 13 20:15:19.988332 systemd[1]: Started sshd@18-78.47.136.246:22-147.75.109.163:36106.service - OpenSSH per-connection server daemon (147.75.109.163:36106). Feb 13 20:15:20.964828 sshd[6146]: Accepted publickey for core from 147.75.109.163 port 36106 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:15:20.967224 sshd[6146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:15:20.973734 systemd-logind[1456]: New session 19 of user core. Feb 13 20:15:20.979838 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 20:15:21.735000 sshd[6146]: pam_unix(sshd:session): session closed for user core Feb 13 20:15:21.741953 systemd[1]: sshd@18-78.47.136.246:22-147.75.109.163:36106.service: Deactivated successfully. Feb 13 20:15:21.746171 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 20:15:21.751992 systemd-logind[1456]: Session 19 logged out. Waiting for processes to exit. Feb 13 20:15:21.753931 systemd-logind[1456]: Removed session 19.