Mar 17 17:36:11.919093 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 17 17:36:11.919118 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Mar 17 16:05:23 -00 2025
Mar 17 17:36:11.919130 kernel: KASLR enabled
Mar 17 17:36:11.919137 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Mar 17 17:36:11.919143 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d98
Mar 17 17:36:11.919150 kernel: random: crng init done
Mar 17 17:36:11.919158 kernel: secureboot: Secure boot disabled
Mar 17 17:36:11.919165 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:36:11.919172 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Mar 17 17:36:11.919181 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Mar 17 17:36:11.919188 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:36:11.919195 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:36:11.919202 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:36:11.919209 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:36:11.919217 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:36:11.919226 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:36:11.919233 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:36:11.919241 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:36:11.919248 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:36:11.919255 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 17 17:36:11.919263 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Mar 17 17:36:11.919270 kernel: NUMA: Failed to initialise from firmware
Mar 17 17:36:11.919277 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Mar 17 17:36:11.919285 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff]
Mar 17 17:36:11.919292 kernel: Zone ranges:
Mar 17 17:36:11.919301 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 17 17:36:11.919308 kernel: DMA32 empty
Mar 17 17:36:11.919315 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Mar 17 17:36:11.919323 kernel: Movable zone start for each node
Mar 17 17:36:11.919330 kernel: Early memory node ranges
Mar 17 17:36:11.919337 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Mar 17 17:36:11.919356 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Mar 17 17:36:11.919365 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Mar 17 17:36:11.919372 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Mar 17 17:36:11.919380 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Mar 17 17:36:11.919387 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Mar 17 17:36:11.919394 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Mar 17 17:36:11.919403 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Mar 17 17:36:11.919411 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Mar 17 17:36:11.919418 kernel: psci: probing for conduit method from ACPI.
Mar 17 17:36:11.919428 kernel: psci: PSCIv1.1 detected in firmware.
Mar 17 17:36:11.919436 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 17:36:11.919444 kernel: psci: Trusted OS migration not required
Mar 17 17:36:11.919453 kernel: psci: SMC Calling Convention v1.1
Mar 17 17:36:11.919461 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 17 17:36:11.919469 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 17 17:36:11.919477 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 17 17:36:11.919485 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 17 17:36:11.919493 kernel: Detected PIPT I-cache on CPU0
Mar 17 17:36:11.919501 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 17:36:11.919509 kernel: CPU features: detected: Hardware dirty bit management
Mar 17 17:36:11.919517 kernel: CPU features: detected: Spectre-v4
Mar 17 17:36:11.919526 kernel: CPU features: detected: Spectre-BHB
Mar 17 17:36:11.919538 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 17 17:36:11.919548 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 17 17:36:11.919557 kernel: CPU features: detected: ARM erratum 1418040
Mar 17 17:36:11.919565 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 17 17:36:11.919573 kernel: alternatives: applying boot alternatives
Mar 17 17:36:11.919583 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=31b104f73129b84fa679201ebe02fbfd197d071bbf0576d6ccc5c5442bcbb405
Mar 17 17:36:11.919592 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:36:11.919599 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 17:36:11.919607 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:36:11.919615 kernel: Fallback order for Node 0: 0
Mar 17 17:36:11.919623 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Mar 17 17:36:11.919633 kernel: Policy zone: Normal
Mar 17 17:36:11.919641 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:36:11.919648 kernel: software IO TLB: area num 2.
Mar 17 17:36:11.919656 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Mar 17 17:36:11.919665 kernel: Memory: 3882612K/4096000K available (10240K kernel code, 2186K rwdata, 8100K rodata, 39744K init, 897K bss, 213388K reserved, 0K cma-reserved)
Mar 17 17:36:11.919673 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 17:36:11.919681 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:36:11.919689 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:36:11.919697 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 17:36:11.919706 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:36:11.919714 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:36:11.919722 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:36:11.919731 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 17:36:11.919739 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 17:36:11.919746 kernel: GICv3: 256 SPIs implemented
Mar 17 17:36:11.919755 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 17:36:11.919762 kernel: Root IRQ handler: gic_handle_irq
Mar 17 17:36:11.919770 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 17 17:36:11.919778 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 17 17:36:11.919786 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 17 17:36:11.919793 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Mar 17 17:36:11.919801 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Mar 17 17:36:11.919809 kernel: GICv3: using LPI property table @0x00000001000e0000
Mar 17 17:36:11.919819 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Mar 17 17:36:11.919828 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:36:11.919835 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:36:11.919843 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 17 17:36:11.919851 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 17 17:36:11.919859 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 17 17:36:11.919866 kernel: Console: colour dummy device 80x25
Mar 17 17:36:11.919872 kernel: ACPI: Core revision 20230628
Mar 17 17:36:11.919879 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 17 17:36:11.919886 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:36:11.919894 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:36:11.919901 kernel: landlock: Up and running.
Mar 17 17:36:11.919908 kernel: SELinux: Initializing.
Mar 17 17:36:11.919915 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:36:11.922238 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:36:11.922247 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:36:11.922255 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:36:11.922262 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:36:11.922270 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:36:11.922277 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 17 17:36:11.922290 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 17 17:36:11.922297 kernel: Remapping and enabling EFI services.
Mar 17 17:36:11.922304 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:36:11.922311 kernel: Detected PIPT I-cache on CPU1
Mar 17 17:36:11.922318 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 17 17:36:11.922325 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Mar 17 17:36:11.922332 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:36:11.922339 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 17 17:36:11.922358 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 17:36:11.922368 kernel: SMP: Total of 2 processors activated.
Mar 17 17:36:11.922375 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 17:36:11.922387 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 17 17:36:11.922396 kernel: CPU features: detected: Common not Private translations
Mar 17 17:36:11.922404 kernel: CPU features: detected: CRC32 instructions
Mar 17 17:36:11.922411 kernel: CPU features: detected: Enhanced Virtualization Traps
Mar 17 17:36:11.922418 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 17 17:36:11.922425 kernel: CPU features: detected: LSE atomic instructions
Mar 17 17:36:11.922432 kernel: CPU features: detected: Privileged Access Never
Mar 17 17:36:11.922441 kernel: CPU features: detected: RAS Extension Support
Mar 17 17:36:11.922448 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 17 17:36:11.922456 kernel: CPU: All CPU(s) started at EL1
Mar 17 17:36:11.922463 kernel: alternatives: applying system-wide alternatives
Mar 17 17:36:11.922470 kernel: devtmpfs: initialized
Mar 17 17:36:11.922477 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:36:11.922485 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 17:36:11.922493 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:36:11.922501 kernel: SMBIOS 3.0.0 present.
Mar 17 17:36:11.922508 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Mar 17 17:36:11.922515 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:36:11.922522 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 17:36:11.922530 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 17:36:11.922537 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 17:36:11.922544 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:36:11.922551 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1
Mar 17 17:36:11.922560 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:36:11.922568 kernel: cpuidle: using governor menu
Mar 17 17:36:11.922575 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 17:36:11.922582 kernel: ASID allocator initialised with 32768 entries
Mar 17 17:36:11.922589 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:36:11.922596 kernel: Serial: AMBA PL011 UART driver
Mar 17 17:36:11.922604 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 17 17:36:11.922611 kernel: Modules: 0 pages in range for non-PLT usage
Mar 17 17:36:11.922618 kernel: Modules: 508944 pages in range for PLT usage
Mar 17 17:36:11.922625 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:36:11.922634 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:36:11.922641 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 17:36:11.922648 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 17 17:36:11.922655 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:36:11.922662 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:36:11.922669 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 17:36:11.922677 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 17 17:36:11.922684 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:36:11.922691 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:36:11.922700 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:36:11.922707 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:36:11.922714 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:36:11.922721 kernel: ACPI: Interpreter enabled
Mar 17 17:36:11.922728 kernel: ACPI: Using GIC for interrupt routing
Mar 17 17:36:11.922736 kernel: ACPI: MCFG table detected, 1 entries
Mar 17 17:36:11.922743 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 17 17:36:11.922750 kernel: printk: console [ttyAMA0] enabled
Mar 17 17:36:11.922757 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 17:36:11.922935 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:36:11.923020 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 17 17:36:11.923086 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 17 17:36:11.923148 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 17 17:36:11.923210 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 17 17:36:11.923219 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 17 17:36:11.923227 kernel: PCI host bridge to bus 0000:00
Mar 17 17:36:11.923299 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 17 17:36:11.923373 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 17 17:36:11.923436 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 17 17:36:11.923493 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 17:36:11.923571 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 17 17:36:11.923645 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Mar 17 17:36:11.923715 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Mar 17 17:36:11.923782 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Mar 17 17:36:11.923877 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 17 17:36:11.923958 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Mar 17 17:36:11.924033 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 17 17:36:11.924098 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Mar 17 17:36:11.924174 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 17 17:36:11.924238 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Mar 17 17:36:11.924312 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 17 17:36:11.924423 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Mar 17 17:36:11.924504 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 17 17:36:11.924569 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Mar 17 17:36:11.924644 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 17 17:36:11.924707 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Mar 17 17:36:11.924776 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 17 17:36:11.924840 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Mar 17 17:36:11.924909 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 17 17:36:11.925008 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Mar 17 17:36:11.925082 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Mar 17 17:36:11.925151 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Mar 17 17:36:11.925254 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Mar 17 17:36:11.925322 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Mar 17 17:36:11.925416 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Mar 17 17:36:11.925488 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Mar 17 17:36:11.925557 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 17 17:36:11.925626 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 17 17:36:11.925703 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 17 17:36:11.925771 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Mar 17 17:36:11.925850 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Mar 17 17:36:11.926022 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Mar 17 17:36:11.926108 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Mar 17 17:36:11.926183 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Mar 17 17:36:11.926252 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Mar 17 17:36:11.926325 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 17 17:36:11.926437 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Mar 17 17:36:11.926507 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Mar 17 17:36:11.926580 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Mar 17 17:36:11.926646 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Mar 17 17:36:11.926714 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Mar 17 17:36:11.926787 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Mar 17 17:36:11.926852 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Mar 17 17:36:11.926932 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Mar 17 17:36:11.927038 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 17 17:36:11.927110 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Mar 17 17:36:11.927179 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Mar 17 17:36:11.927242 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Mar 17 17:36:11.927306 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Mar 17 17:36:11.927384 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Mar 17 17:36:11.927450 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Mar 17 17:36:11.927516 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Mar 17 17:36:11.927579 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Mar 17 17:36:11.927643 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Mar 17 17:36:11.927725 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Mar 17 17:36:11.927789 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Mar 17 17:36:11.927854 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Mar 17 17:36:11.927963 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Mar 17 17:36:11.928034 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Mar 17 17:36:11.928096 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Mar 17 17:36:11.928161 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Mar 17 17:36:11.928225 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Mar 17 17:36:11.928286 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Mar 17 17:36:11.928367 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 17 17:36:11.928437 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Mar 17 17:36:11.928501 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Mar 17 17:36:11.928566 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 17 17:36:11.928628 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Mar 17 17:36:11.928693 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Mar 17 17:36:11.928759 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 17 17:36:11.928822 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Mar 17 17:36:11.928888 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Mar 17 17:36:11.928983 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Mar 17 17:36:11.929065 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 17 17:36:11.929131 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Mar 17 17:36:11.929194 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 17 17:36:11.929262 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Mar 17 17:36:11.929327 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 17 17:36:11.929433 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Mar 17 17:36:11.929503 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 17 17:36:11.929572 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Mar 17 17:36:11.929641 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 17 17:36:11.929711 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Mar 17 17:36:11.929780 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 17 17:36:11.929850 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Mar 17 17:36:11.929913 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 17 17:36:11.930002 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Mar 17 17:36:11.930069 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 17 17:36:11.930134 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Mar 17 17:36:11.930201 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 17 17:36:11.930268 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Mar 17 17:36:11.930332 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Mar 17 17:36:11.930412 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Mar 17 17:36:11.930478 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Mar 17 17:36:11.930543 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Mar 17 17:36:11.930608 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Mar 17 17:36:11.930671 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Mar 17 17:36:11.930737 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Mar 17 17:36:11.930800 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Mar 17 17:36:11.930863 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Mar 17 17:36:11.931040 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Mar 17 17:36:11.931115 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Mar 17 17:36:11.931178 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Mar 17 17:36:11.931239 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Mar 17 17:36:11.931301 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Mar 17 17:36:11.931412 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Mar 17 17:36:11.931481 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Mar 17 17:36:11.931542 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Mar 17 17:36:11.931605 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Mar 17 17:36:11.931667 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Mar 17 17:36:11.931733 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Mar 17 17:36:11.931803 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Mar 17 17:36:11.931867 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 17 17:36:11.931950 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Mar 17 17:36:11.932016 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 17 17:36:11.932078 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Mar 17 17:36:11.932139 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Mar 17 17:36:11.932199 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 17 17:36:11.932269 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Mar 17 17:36:11.932336 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 17 17:36:11.932417 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Mar 17 17:36:11.932481 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Mar 17 17:36:11.932544 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 17 17:36:11.932613 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Mar 17 17:36:11.932680 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Mar 17 17:36:11.932747 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 17 17:36:11.932809 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Mar 17 17:36:11.932874 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Mar 17 17:36:11.932950 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 17 17:36:11.933026 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Mar 17 17:36:11.933090 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 17 17:36:11.933152 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Mar 17 17:36:11.933214 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Mar 17 17:36:11.933280 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 17 17:36:11.933362 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Mar 17 17:36:11.933433 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Mar 17 17:36:11.933497 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 17 17:36:11.933560 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Mar 17 17:36:11.933622 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Mar 17 17:36:11.933684 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 17 17:36:11.933754 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Mar 17 17:36:11.933823 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Mar 17 17:36:11.933887 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 17 17:36:11.934004 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Mar 17 17:36:11.934071 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Mar 17 17:36:11.934133 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 17 17:36:11.934201 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Mar 17 17:36:11.934265 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Mar 17 17:36:11.934329 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Mar 17 17:36:11.934438 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 17 17:36:11.934506 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Mar 17 17:36:11.934568 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Mar 17 17:36:11.934631 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 17 17:36:11.934694 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 17 17:36:11.934755 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Mar 17 17:36:11.934816 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Mar 17 17:36:11.934878 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 17 17:36:11.934963 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 17 17:36:11.935031 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Mar 17 17:36:11.935094 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Mar 17 17:36:11.935156 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 17 17:36:11.935220 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 17 17:36:11.935277 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 17 17:36:11.935334 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 17 17:36:11.935430 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Mar 17 17:36:11.935492 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Mar 17 17:36:11.935549 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 17 17:36:11.935615 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Mar 17 17:36:11.935674 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Mar 17 17:36:11.935732 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 17 17:36:11.935797 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Mar 17 17:36:11.935861 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Mar 17 17:36:11.935953 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 17 17:36:11.936028 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Mar 17 17:36:11.936088 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Mar 17 17:36:11.936148 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 17 17:36:11.936215 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Mar 17 17:36:11.936279 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Mar 17 17:36:11.936338 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 17 17:36:11.936425 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Mar 17 17:36:11.936503 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Mar 17 17:36:11.936567 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 17 17:36:11.936635 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Mar 17 17:36:11.936695 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Mar 17 17:36:11.936754 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 17 17:36:11.936824 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Mar 17 17:36:11.936884 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Mar 17 17:36:11.939074 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 17 17:36:11.939179 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Mar 17 17:36:11.939240 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Mar 17 17:36:11.939300 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 17 17:36:11.939310 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 17 17:36:11.939318 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 17 17:36:11.939326 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 17 17:36:11.939333 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 17 17:36:11.939341 kernel: iommu: Default domain type: Translated
Mar 17 17:36:11.939392 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 17:36:11.939400 kernel: efivars: Registered efivars operations
Mar 17 17:36:11.939408 kernel: vgaarb: loaded
Mar 17 17:36:11.939415 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 17:36:11.939423 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:36:11.939431 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:36:11.939439 kernel: pnp: PnP ACPI init
Mar 17 17:36:11.939525 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 17 17:36:11.939540 kernel: pnp: PnP ACPI: found 1 devices
Mar 17 17:36:11.939548 kernel: NET: Registered PF_INET protocol family
Mar 17 17:36:11.939557 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 17:36:11.939565 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 17:36:11.939573 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:36:11.939581 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:36:11.939589 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 17 17:36:11.939596 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 17:36:11.939604 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:36:11.939614 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:36:11.939621 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:36:11.939697 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Mar 17 17:36:11.939708 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:36:11.939716 kernel: kvm [1]: HYP mode not available
Mar 17 17:36:11.939723 kernel: Initialise system trusted keyrings
Mar 17 17:36:11.939731 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 17:36:11.939739 kernel: Key type asymmetric registered
Mar 17 17:36:11.939746 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:36:11.939755 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 17 17:36:11.939763 kernel: io scheduler mq-deadline registered
Mar 17 17:36:11.939771 kernel: io scheduler kyber registered
Mar 17 17:36:11.939778 kernel: io scheduler bfq registered
Mar 17 17:36:11.939786 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 17 17:36:11.939854 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Mar 17 17:36:11.940402 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Mar 17 17:36:11.940516 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 17 17:36:11.940592 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Mar 17 17:36:11.940657 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Mar 17 17:36:11.940721 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ Mar 17 17:36:11.940788 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Mar 17 17:36:11.940854 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Mar 17 17:36:11.940974 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:36:11.941058 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Mar 17 17:36:11.941123 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Mar 17 17:36:11.941189 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:36:11.941257 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Mar 17 17:36:11.941321 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Mar 17 17:36:11.941407 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:36:11.941484 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Mar 17 17:36:11.941549 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Mar 17 17:36:11.941612 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:36:11.941678 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Mar 17 17:36:11.941741 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Mar 17 17:36:11.941806 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:36:11.941876 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Mar 17 17:36:11.941957 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Mar 17 17:36:11.942026 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 
17:36:11.942037 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Mar 17 17:36:11.942102 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Mar 17 17:36:11.942165 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Mar 17 17:36:11.942231 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:36:11.942241 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Mar 17 17:36:11.942249 kernel: ACPI: button: Power Button [PWRB] Mar 17 17:36:11.942257 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 17 17:36:11.942326 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Mar 17 17:36:11.942441 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Mar 17 17:36:11.942455 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 17 17:36:11.942463 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Mar 17 17:36:11.942532 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Mar 17 17:36:11.942547 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Mar 17 17:36:11.942555 kernel: thunder_xcv, ver 1.0 Mar 17 17:36:11.942562 kernel: thunder_bgx, ver 1.0 Mar 17 17:36:11.942570 kernel: nicpf, ver 1.0 Mar 17 17:36:11.942578 kernel: nicvf, ver 1.0 Mar 17 17:36:11.942656 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 17 17:36:11.942717 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T17:36:11 UTC (1742232971) Mar 17 17:36:11.942727 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 17 17:36:11.942736 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Mar 17 17:36:11.942744 kernel: watchdog: Delayed init of the lockup detector failed: -19 Mar 17 17:36:11.942753 kernel: watchdog: Hard watchdog permanently disabled Mar 17 17:36:11.942761 kernel: NET: Registered PF_INET6 protocol family Mar 17 17:36:11.942768 kernel: Segment 
Routing with IPv6 Mar 17 17:36:11.942776 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 17:36:11.942784 kernel: NET: Registered PF_PACKET protocol family Mar 17 17:36:11.942791 kernel: Key type dns_resolver registered Mar 17 17:36:11.942799 kernel: registered taskstats version 1 Mar 17 17:36:11.942808 kernel: Loading compiled-in X.509 certificates Mar 17 17:36:11.942815 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 74c9b4f5dfad711856d7363c976664fc02c1e24c' Mar 17 17:36:11.942823 kernel: Key type .fscrypt registered Mar 17 17:36:11.942830 kernel: Key type fscrypt-provisioning registered Mar 17 17:36:11.942838 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 17 17:36:11.942845 kernel: ima: Allocated hash algorithm: sha1 Mar 17 17:36:11.942853 kernel: ima: No architecture policies found Mar 17 17:36:11.942860 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 17 17:36:11.942870 kernel: clk: Disabling unused clocks Mar 17 17:36:11.942877 kernel: Freeing unused kernel memory: 39744K Mar 17 17:36:11.942885 kernel: Run /init as init process Mar 17 17:36:11.942893 kernel: with arguments: Mar 17 17:36:11.942901 kernel: /init Mar 17 17:36:11.942909 kernel: with environment: Mar 17 17:36:11.943295 kernel: HOME=/ Mar 17 17:36:11.943564 kernel: TERM=linux Mar 17 17:36:11.943573 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 17:36:11.943583 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 17 17:36:11.943599 systemd[1]: Detected virtualization kvm. Mar 17 17:36:11.943608 systemd[1]: Detected architecture arm64. Mar 17 17:36:11.943616 systemd[1]: Running in initrd. 
Mar 17 17:36:11.943623 systemd[1]: No hostname configured, using default hostname.
Mar 17 17:36:11.943631 systemd[1]: Hostname set to .
Mar 17 17:36:11.943640 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:36:11.943650 systemd[1]: Queued start job for default target initrd.target.
Mar 17 17:36:11.943658 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:36:11.943666 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:36:11.943676 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 17:36:11.943684 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:36:11.943692 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 17:36:11.943701 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 17:36:11.943710 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 17:36:11.943721 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 17:36:11.943729 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:36:11.943737 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:36:11.943745 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:36:11.943753 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:36:11.943761 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:36:11.943769 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:36:11.943777 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:36:11.943787 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:36:11.943795 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:36:11.943803 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 17 17:36:11.943812 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:36:11.943820 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:36:11.943828 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:36:11.943836 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:36:11.943844 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 17:36:11.943854 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:36:11.943862 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 17:36:11.943870 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 17:36:11.943878 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:36:11.943886 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:36:11.943894 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:36:11.943902 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 17:36:11.943910 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:36:11.944004 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 17:36:11.944017 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:36:11.944026 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:36:11.944066 systemd-journald[237]: Collecting audit messages is disabled.
Mar 17 17:36:11.944089 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 17:36:11.944097 kernel: Bridge firewalling registered
Mar 17 17:36:11.944106 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:36:11.944115 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:36:11.944123 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:36:11.944133 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:36:11.944141 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:36:11.944149 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:36:11.944159 systemd-journald[237]: Journal started
Mar 17 17:36:11.944178 systemd-journald[237]: Runtime Journal (/run/log/journal/d0ab69851ebf4058aa9cb4219af9d143) is 8.0M, max 76.6M, 68.6M free.
Mar 17 17:36:11.901044 systemd-modules-load[238]: Inserted module 'overlay'
Mar 17 17:36:11.946054 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:36:11.915280 systemd-modules-load[238]: Inserted module 'br_netfilter'
Mar 17 17:36:11.957728 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:36:11.958993 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:36:11.962914 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:36:11.974217 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 17:36:11.977741 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:36:11.987140 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:36:11.991509 dracut-cmdline[272]: dracut-dracut-053
Mar 17 17:36:11.995604 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=31b104f73129b84fa679201ebe02fbfd197d071bbf0576d6ccc5c5442bcbb405
Mar 17 17:36:12.013971 systemd-resolved[277]: Positive Trust Anchors:
Mar 17 17:36:12.014040 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:36:12.014073 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:36:12.025486 systemd-resolved[277]: Defaulting to hostname 'linux'.
Mar 17 17:36:12.026653 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:36:12.028811 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:36:12.080951 kernel: SCSI subsystem initialized
Mar 17 17:36:12.086026 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 17:36:12.095940 kernel: iscsi: registered transport (tcp)
Mar 17 17:36:12.111941 kernel: iscsi: registered transport (qla4xxx)
Mar 17 17:36:12.112030 kernel: QLogic iSCSI HBA Driver
Mar 17 17:36:12.163336 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:36:12.173205 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 17:36:12.192018 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 17:36:12.192082 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:36:12.192940 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:36:12.248964 kernel: raid6: neonx8 gen() 15689 MB/s
Mar 17 17:36:12.265992 kernel: raid6: neonx4 gen() 15577 MB/s
Mar 17 17:36:12.282977 kernel: raid6: neonx2 gen() 13167 MB/s
Mar 17 17:36:12.299995 kernel: raid6: neonx1 gen() 10444 MB/s
Mar 17 17:36:12.316964 kernel: raid6: int64x8 gen() 6933 MB/s
Mar 17 17:36:12.333990 kernel: raid6: int64x4 gen() 7318 MB/s
Mar 17 17:36:12.351068 kernel: raid6: int64x2 gen() 6104 MB/s
Mar 17 17:36:12.367972 kernel: raid6: int64x1 gen() 5033 MB/s
Mar 17 17:36:12.368073 kernel: raid6: using algorithm neonx8 gen() 15689 MB/s
Mar 17 17:36:12.384998 kernel: raid6: .... xor() 11870 MB/s, rmw enabled
Mar 17 17:36:12.385123 kernel: raid6: using neon recovery algorithm
Mar 17 17:36:12.390296 kernel: xor: measuring software checksum speed
Mar 17 17:36:12.390399 kernel: 8regs : 18868 MB/sec
Mar 17 17:36:12.390432 kernel: 32regs : 19627 MB/sec
Mar 17 17:36:12.390458 kernel: arm64_neon : 26972 MB/sec
Mar 17 17:36:12.391283 kernel: xor: using function: arm64_neon (26972 MB/sec)
Mar 17 17:36:12.442120 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:36:12.456730 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:36:12.465179 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:36:12.478375 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Mar 17 17:36:12.483513 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:36:12.491556 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:36:12.512035 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation
Mar 17 17:36:12.545027 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:36:12.551168 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:36:12.607413 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:36:12.615280 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:36:12.635823 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:36:12.637115 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:36:12.639089 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:36:12.640434 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:36:12.648106 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:36:12.673148 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:36:12.713162 kernel: scsi host0: Virtio SCSI HBA
Mar 17 17:36:12.718572 kernel: ACPI: bus type USB registered
Mar 17 17:36:12.718626 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 17 17:36:12.718661 kernel: usbcore: registered new interface driver usbfs
Mar 17 17:36:12.718671 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Mar 17 17:36:12.720975 kernel: usbcore: registered new interface driver hub
Mar 17 17:36:12.721035 kernel: usbcore: registered new device driver usb
Mar 17 17:36:12.741860 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:36:12.742024 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:36:12.744626 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:36:12.745480 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:36:12.745701 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:36:12.747332 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:36:12.755907 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:36:12.781261 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:36:12.789075 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:36:12.793222 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Mar 17 17:36:12.806082 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Mar 17 17:36:12.806193 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Mar 17 17:36:12.806271 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Mar 17 17:36:12.806464 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Mar 17 17:36:12.806771 kernel: sr 0:0:0:0: Power-on or device reset occurred
Mar 17 17:36:12.808237 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Mar 17 17:36:12.808853 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Mar 17 17:36:12.809972 kernel: hub 1-0:1.0: USB hub found
Mar 17 17:36:12.810292 kernel: hub 1-0:1.0: 4 ports detected
Mar 17 17:36:12.810623 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 17 17:36:12.810637 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Mar 17 17:36:12.810806 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Mar 17 17:36:12.810912 kernel: hub 2-0:1.0: USB hub found
Mar 17 17:36:12.814286 kernel: hub 2-0:1.0: 4 ports detected
Mar 17 17:36:12.814464 kernel: sd 0:0:0:1: Power-on or device reset occurred
Mar 17 17:36:12.824522 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Mar 17 17:36:12.824637 kernel: sd 0:0:0:1: [sda] Write Protect is off
Mar 17 17:36:12.824715 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Mar 17 17:36:12.824793 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 17 17:36:12.824869 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 17:36:12.824888 kernel: GPT:17805311 != 80003071
Mar 17 17:36:12.824897 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 17:36:12.824906 kernel: GPT:17805311 != 80003071
Mar 17 17:36:12.824915 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 17:36:12.824942 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 17:36:12.824953 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Mar 17 17:36:12.834714 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:36:12.869937 kernel: BTRFS: device fsid c0c482e3-6885-4a4e-b31c-6bc8f8c403e7 devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (520)
Mar 17 17:36:12.870938 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (506)
Mar 17 17:36:12.871549 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Mar 17 17:36:12.884719 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Mar 17 17:36:12.893255 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 17 17:36:12.897858 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Mar 17 17:36:12.899425 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Mar 17 17:36:12.906070 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 17:36:12.916654 disk-uuid[572]: Primary Header is updated.
Mar 17 17:36:12.916654 disk-uuid[572]: Secondary Entries is updated.
Mar 17 17:36:12.916654 disk-uuid[572]: Secondary Header is updated.
Mar 17 17:36:12.928946 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 17:36:13.041964 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Mar 17 17:36:13.284045 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Mar 17 17:36:13.419381 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Mar 17 17:36:13.419443 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Mar 17 17:36:13.420798 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Mar 17 17:36:13.476297 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Mar 17 17:36:13.476694 kernel: usbcore: registered new interface driver usbhid
Mar 17 17:36:13.476725 kernel: usbhid: USB HID core driver
Mar 17 17:36:13.938959 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 17:36:13.940215 disk-uuid[573]: The operation has completed successfully.
Mar 17 17:36:13.991823 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:36:13.991971 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:36:14.002117 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:36:14.023537 sh[588]: Success
Mar 17 17:36:14.039321 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 17 17:36:14.081230 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:36:14.089067 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:36:14.090622 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 17:36:14.109167 kernel: BTRFS info (device dm-0): first mount of filesystem c0c482e3-6885-4a4e-b31c-6bc8f8c403e7
Mar 17 17:36:14.109244 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:36:14.109269 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:36:14.109975 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:36:14.110027 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:36:14.116978 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 17 17:36:14.119037 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:36:14.120605 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:36:14.127160 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:36:14.132113 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:36:14.147194 kernel: BTRFS info (device sda6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:36:14.147249 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:36:14.147260 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:36:14.153640 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 17 17:36:14.153739 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:36:14.167403 kernel: BTRFS info (device sda6): last unmount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:36:14.167095 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 17:36:14.174014 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:36:14.179293 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 17:36:14.272476 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:36:14.276100 ignition[690]: Ignition 2.20.0
Mar 17 17:36:14.276639 ignition[690]: Stage: fetch-offline
Mar 17 17:36:14.276682 ignition[690]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:36:14.276690 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:36:14.277290 ignition[690]: parsed url from cmdline: ""
Mar 17 17:36:14.277294 ignition[690]: no config URL provided
Mar 17 17:36:14.277301 ignition[690]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:36:14.277313 ignition[690]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:36:14.277318 ignition[690]: failed to fetch config: resource requires networking
Mar 17 17:36:14.277632 ignition[690]: Ignition finished successfully
Mar 17 17:36:14.283746 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:36:14.285610 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:36:14.302954 systemd-networkd[775]: lo: Link UP
Mar 17 17:36:14.302967 systemd-networkd[775]: lo: Gained carrier
Mar 17 17:36:14.304744 systemd-networkd[775]: Enumeration completed
Mar 17 17:36:14.304972 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:36:14.305683 systemd[1]: Reached target network.target - Network.
Mar 17 17:36:14.308072 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:36:14.308080 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:36:14.309063 systemd-networkd[775]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:36:14.309067 systemd-networkd[775]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:36:14.309651 systemd-networkd[775]: eth0: Link UP
Mar 17 17:36:14.309654 systemd-networkd[775]: eth0: Gained carrier
Mar 17 17:36:14.309661 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:36:14.312144 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 17 17:36:14.317078 systemd-networkd[775]: eth1: Link UP
Mar 17 17:36:14.317081 systemd-networkd[775]: eth1: Gained carrier
Mar 17 17:36:14.317089 systemd-networkd[775]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:36:14.325502 ignition[779]: Ignition 2.20.0
Mar 17 17:36:14.325512 ignition[779]: Stage: fetch
Mar 17 17:36:14.325676 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:36:14.325685 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:36:14.325778 ignition[779]: parsed url from cmdline: ""
Mar 17 17:36:14.325782 ignition[779]: no config URL provided
Mar 17 17:36:14.325787 ignition[779]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:36:14.325796 ignition[779]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:36:14.325879 ignition[779]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Mar 17 17:36:14.326640 ignition[779]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 17 17:36:14.348009 systemd-networkd[775]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 17:36:14.369019 systemd-networkd[775]: eth0: DHCPv4 address 138.201.116.42/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 17 17:36:14.526819 ignition[779]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Mar 17 17:36:14.531936 ignition[779]: GET result: OK
Mar 17 17:36:14.532095 ignition[779]: parsing config with SHA512: e23e098ab72ff9a563842d9c792d204512170091ddcb6a1d235bc08c3a76e12943a068c086769231a7d447440681e20c9587d4b69c0a196fbf3be13368897a2c
Mar 17 17:36:14.539263 unknown[779]: fetched base config from "system"
Mar 17 17:36:14.539280 unknown[779]: fetched base config from "system"
Mar 17 17:36:14.539286 unknown[779]: fetched user config from "hetzner"
Mar 17 17:36:14.542165 ignition[779]: fetch: fetch complete
Mar 17 17:36:14.542172 ignition[779]: fetch: fetch passed
Mar 17 17:36:14.542229 ignition[779]: Ignition finished successfully
Mar 17 17:36:14.544459 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 17 17:36:14.553300 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 17:36:14.569739 ignition[786]: Ignition 2.20.0
Mar 17 17:36:14.569751 ignition[786]: Stage: kargs
Mar 17 17:36:14.569945 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:36:14.569955 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:36:14.573049 ignition[786]: kargs: kargs passed
Mar 17 17:36:14.573518 ignition[786]: Ignition finished successfully
Mar 17 17:36:14.575679 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 17:36:14.584226 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 17:36:14.596600 ignition[793]: Ignition 2.20.0
Mar 17 17:36:14.596611 ignition[793]: Stage: disks
Mar 17 17:36:14.596798 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:36:14.596808 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:36:14.598011 ignition[793]: disks: disks passed
Mar 17 17:36:14.598070 ignition[793]: Ignition finished successfully
Mar 17 17:36:14.601353 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:36:14.602142 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:36:14.603351 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:36:14.604764 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:36:14.606145 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:36:14.607129 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:36:14.612099 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:36:14.633821 systemd-fsck[802]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 17 17:36:14.640760 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:36:14.647023 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:36:14.689954 kernel: EXT4-fs (sda9): mounted filesystem 6b579bf2-7716-4d59-98eb-b92ea668693e r/w with ordered data mode. Quota mode: none.
Mar 17 17:36:14.690714 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:36:14.691813 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:36:14.702110 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:36:14.706062 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:36:14.707870 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 17 17:36:14.710656 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:36:14.710697 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:36:14.716298 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:36:14.722235 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (810)
Mar 17 17:36:14.722274 kernel: BTRFS info (device sda6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:36:14.722285 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:36:14.724728 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:36:14.724815 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:36:14.733725 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 17 17:36:14.733778 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:36:14.736772 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:36:14.779006 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:36:14.786765 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:36:14.790789 coreos-metadata[812]: Mar 17 17:36:14.790 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Mar 17 17:36:14.794040 coreos-metadata[812]: Mar 17 17:36:14.793 INFO Fetch successful
Mar 17 17:36:14.794040 coreos-metadata[812]: Mar 17 17:36:14.793 INFO wrote hostname ci-4152-2-2-4-e17a7af1b1 to /sysroot/etc/hostname
Mar 17 17:36:14.797859 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:36:14.797880 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 17 17:36:14.803011 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:36:14.901713 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:36:14.909111 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:36:14.911885 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:36:14.922958 kernel: BTRFS info (device sda6): last unmount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:36:14.948034 ignition[927]: INFO : Ignition 2.20.0
Mar 17 17:36:14.949757 ignition[927]: INFO : Stage: mount
Mar 17 17:36:14.950544 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:36:14.953386 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:36:14.953386 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:36:14.953386 ignition[927]: INFO : mount: mount passed
Mar 17 17:36:14.953386 ignition[927]: INFO : Ignition finished successfully
Mar 17 17:36:14.954015 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:36:14.962047 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:36:15.108281 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:36:15.114216 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:36:15.123957 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (939)
Mar 17 17:36:15.125974 kernel: BTRFS info (device sda6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:36:15.126104 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:36:15.126126 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:36:15.128998 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 17 17:36:15.129041 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:36:15.133014 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:36:15.157726 ignition[956]: INFO : Ignition 2.20.0
Mar 17 17:36:15.157726 ignition[956]: INFO : Stage: files
Mar 17 17:36:15.158782 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:36:15.158782 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:36:15.160367 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:36:15.161292 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:36:15.161292 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:36:15.164012 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:36:15.165239 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:36:15.165239 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:36:15.164466 unknown[956]: wrote ssh authorized keys file for user: core
Mar 17 17:36:15.167962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 17 17:36:15.167962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 17 17:36:15.167962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 17:36:15.167962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Mar 17 17:36:15.275548 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 17:36:15.606162 systemd-networkd[775]: eth1: Gained IPv6LL
Mar 17 17:36:16.054289 systemd-networkd[775]: eth0: Gained IPv6LL
Mar 17 17:36:16.469959 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 17:36:16.469959 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:36:16.469959 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:36:16.469959 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:36:16.469959 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:36:16.469959 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:36:16.469959 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:36:16.469959 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:36:16.469959 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:36:16.469959 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:36:16.469959 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:36:16.469959 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:36:16.469959 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:36:16.469959 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:36:16.469959 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Mar 17 17:36:17.064731 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 17 17:36:17.475580 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:36:17.475580 ignition[956]: INFO : files: op(c): [started] processing unit "containerd.service"
Mar 17 17:36:17.479086 ignition[956]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 17 17:36:17.479086 ignition[956]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 17 17:36:17.479086 ignition[956]: INFO : files: op(c): [finished] processing unit "containerd.service"
Mar 17 17:36:17.479086 ignition[956]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Mar 17 17:36:17.479086 ignition[956]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:36:17.479086 ignition[956]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:36:17.479086 ignition[956]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Mar 17 17:36:17.479086 ignition[956]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Mar 17 17:36:17.479086 ignition[956]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 17 17:36:17.479086 ignition[956]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 17 17:36:17.479086 ignition[956]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Mar 17 17:36:17.479086 ignition[956]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 17:36:17.479086 ignition[956]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 17:36:17.479086 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:36:17.479086 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:36:17.479086 ignition[956]: INFO : files: files passed
Mar 17 17:36:17.479086 ignition[956]: INFO : Ignition finished successfully
Mar 17 17:36:17.479268 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:36:17.489122 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:36:17.500703 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:36:17.503575 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:36:17.504216 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:36:17.513833 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:36:17.513833 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:36:17.516995 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:36:17.520016 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:36:17.521397 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:36:17.530141 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:36:17.562208 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:36:17.562560 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:36:17.563804 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:36:17.564972 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:36:17.567435 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:36:17.573159 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 17:36:17.587764 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:36:17.593190 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 17:36:17.606406 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:36:17.607893 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:36:17.609404 systemd[1]: Stopped target timers.target - Timer Units.
Mar 17 17:36:17.610027 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 17:36:17.610150 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:36:17.611656 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 17 17:36:17.613569 systemd[1]: Stopped target basic.target - Basic System.
Mar 17 17:36:17.615044 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 17 17:36:17.616700 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:36:17.618575 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 17 17:36:17.619477 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 17 17:36:17.620518 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:36:17.621604 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 17 17:36:17.622691 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 17 17:36:17.623684 systemd[1]: Stopped target swap.target - Swaps.
Mar 17 17:36:17.624502 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 17:36:17.624674 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:36:17.625959 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:36:17.627080 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:36:17.628181 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 17 17:36:17.629271 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:36:17.630714 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 17:36:17.630886 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:36:17.632514 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 17:36:17.632697 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:36:17.634119 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 17:36:17.634264 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 17 17:36:17.635309 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 17 17:36:17.635476 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 17 17:36:17.647951 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 17 17:36:17.649212 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 17:36:17.649610 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:36:17.655173 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 17 17:36:17.660584 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 17:36:17.660798 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:36:17.661703 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 17:36:17.661803 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:36:17.668727 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 17:36:17.668869 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 17 17:36:17.676373 ignition[1008]: INFO : Ignition 2.20.0
Mar 17 17:36:17.679134 ignition[1008]: INFO : Stage: umount
Mar 17 17:36:17.679134 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:36:17.679134 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:36:17.679134 ignition[1008]: INFO : umount: umount passed
Mar 17 17:36:17.679134 ignition[1008]: INFO : Ignition finished successfully
Mar 17 17:36:17.680932 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 17:36:17.681759 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 17:36:17.682951 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 17 17:36:17.685542 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 17:36:17.685646 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 17 17:36:17.686726 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 17:36:17.686774 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 17 17:36:17.691203 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 17:36:17.691250 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 17 17:36:17.693883 systemd[1]: Stopped target network.target - Network.
Mar 17 17:36:17.697762 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 17:36:17.697842 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:36:17.703234 systemd[1]: Stopped target paths.target - Path Units.
Mar 17 17:36:17.704528 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 17:36:17.707983 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:36:17.708978 systemd[1]: Stopped target slices.target - Slice Units.
Mar 17 17:36:17.711833 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 17 17:36:17.714152 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 17:36:17.714216 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:36:17.716090 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 17:36:17.716173 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:36:17.718480 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 17:36:17.718611 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 17 17:36:17.719353 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 17 17:36:17.719397 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 17 17:36:17.720617 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 17 17:36:17.721402 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 17 17:36:17.725415 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 17:36:17.725581 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 17 17:36:17.726476 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 17:36:17.726537 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 17 17:36:17.727197 systemd-networkd[775]: eth1: DHCPv6 lease lost
Mar 17 17:36:17.730657 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 17:36:17.730803 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 17 17:36:17.731026 systemd-networkd[775]: eth0: DHCPv6 lease lost
Mar 17 17:36:17.736270 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 17:36:17.736516 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 17 17:36:17.738798 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 17:36:17.738915 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:36:17.745123 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 17 17:36:17.745681 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 17:36:17.745749 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:36:17.748758 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:36:17.748820 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:36:17.749855 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 17:36:17.749910 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:36:17.751803 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 17 17:36:17.751879 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:36:17.756429 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:36:17.771073 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 17:36:17.771266 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 17 17:36:17.776741 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 17:36:17.776959 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:36:17.779431 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 17:36:17.779479 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:36:17.781691 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 17:36:17.781733 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:36:17.782913 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 17:36:17.782981 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:36:17.784617 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 17:36:17.784671 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:36:17.786131 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:36:17.786179 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:36:17.792246 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 17 17:36:17.792888 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 17:36:17.792967 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:36:17.795184 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:36:17.795228 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:36:17.806259 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 17:36:17.806470 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 17 17:36:17.808696 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 17 17:36:17.815168 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 17 17:36:17.828894 systemd[1]: Switching root.
Mar 17 17:36:17.871205 systemd-journald[237]: Journal stopped
Mar 17 17:36:18.809061 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Mar 17 17:36:18.809132 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 17:36:18.809148 kernel: SELinux: policy capability open_perms=1
Mar 17 17:36:18.809158 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 17:36:18.809167 kernel: SELinux: policy capability always_check_network=0
Mar 17 17:36:18.809177 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 17:36:18.809187 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 17:36:18.809196 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 17:36:18.809208 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 17:36:18.809218 kernel: audit: type=1403 audit(1742232978.036:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 17:36:18.809230 systemd[1]: Successfully loaded SELinux policy in 40.531ms.
Mar 17 17:36:18.809247 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.320ms.
Mar 17 17:36:18.810688 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 17 17:36:18.810711 systemd[1]: Detected virtualization kvm.
Mar 17 17:36:18.810721 systemd[1]: Detected architecture arm64.
Mar 17 17:36:18.810731 systemd[1]: Detected first boot.
Mar 17 17:36:18.810741 systemd[1]: Hostname set to .
Mar 17 17:36:18.810751 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:36:18.810766 zram_generator::config[1069]: No configuration found.
Mar 17 17:36:18.810778 systemd[1]: Populated /etc with preset unit settings.
Mar 17 17:36:18.810788 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 17:36:18.810798 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 17 17:36:18.810809 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 17 17:36:18.810822 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 17 17:36:18.810832 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 17 17:36:18.810843 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 17 17:36:18.810855 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 17 17:36:18.810866 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 17 17:36:18.810876 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 17 17:36:18.810886 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 17 17:36:18.810896 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:36:18.810906 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:36:18.811361 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 17 17:36:18.811407 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 17 17:36:18.811420 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 17 17:36:18.811436 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:36:18.811447 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 17 17:36:18.811458 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:36:18.811468 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 17 17:36:18.811479 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:36:18.811490 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:36:18.811500 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:36:18.811512 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:36:18.811523 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 17 17:36:18.811534 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 17 17:36:18.811544 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:36:18.811555 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 17 17:36:18.811564 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:36:18.811575 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:36:18.811585 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:36:18.811595 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 17 17:36:18.811610 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 17 17:36:18.811621 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 17 17:36:18.811631 systemd[1]: Mounting media.mount - External Media Directory...
Mar 17 17:36:18.811642 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 17 17:36:18.811652 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 17 17:36:18.811662 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 17 17:36:18.811676 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 17 17:36:18.811692 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:36:18.811703 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:36:18.811714 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 17 17:36:18.811724 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:36:18.811734 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:36:18.811748 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:36:18.811761 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 17 17:36:18.811776 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:36:18.811788 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 17:36:18.811798 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Mar 17 17:36:18.811810 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Mar 17 17:36:18.811821 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:36:18.811832 kernel: ACPI: bus type drm_connector registered
Mar 17 17:36:18.811843 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:36:18.811853 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 17 17:36:18.811865 kernel: fuse: init (API version 7.39)
Mar 17 17:36:18.811875 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 17 17:36:18.811884 kernel: loop: module loaded
Mar 17 17:36:18.811894 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:36:18.811904 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 17 17:36:18.812078 systemd-journald[1153]: Collecting audit messages is disabled.
Mar 17 17:36:18.812116 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 17 17:36:18.812128 systemd[1]: Mounted media.mount - External Media Directory.
Mar 17 17:36:18.812141 systemd-journald[1153]: Journal started
Mar 17 17:36:18.812163 systemd-journald[1153]: Runtime Journal (/run/log/journal/d0ab69851ebf4058aa9cb4219af9d143) is 8.0M, max 76.6M, 68.6M free.
Mar 17 17:36:18.817875 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:36:18.820105 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 17 17:36:18.820912 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 17 17:36:18.822847 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 17 17:36:18.823879 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:36:18.824940 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 17:36:18.825110 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 17 17:36:18.826208 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:36:18.826513 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:36:18.827793 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:36:18.829184 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:36:18.830334 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:36:18.830490 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:36:18.831538 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 17:36:18.831693 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 17 17:36:18.832603 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:36:18.832765 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:36:18.834654 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:36:18.837422 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 17 17:36:18.838511 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 17 17:36:18.847115 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 17 17:36:18.858309 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 17 17:36:18.865172 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 17 17:36:18.870048 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 17 17:36:18.871199 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 17:36:18.883450 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 17 17:36:18.892901 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 17 17:36:18.893708 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:36:18.899107 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 17 17:36:18.899855 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:36:18.907070 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:36:18.909953 systemd-journald[1153]: Time spent on flushing to /var/log/journal/d0ab69851ebf4058aa9cb4219af9d143 is 20.681ms for 1111 entries.
Mar 17 17:36:18.909953 systemd-journald[1153]: System Journal (/var/log/journal/d0ab69851ebf4058aa9cb4219af9d143) is 8.0M, max 584.8M, 576.8M free.
Mar 17 17:36:18.939676 systemd-journald[1153]: Received client request to flush runtime journal.
Mar 17 17:36:18.923097 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:36:18.929542 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:36:18.931368 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 17 17:36:18.933207 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 17 17:36:18.935374 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 17 17:36:18.941721 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 17 17:36:18.952157 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 17 17:36:18.956265 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 17 17:36:18.977717 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:36:18.980714 udevadm[1216]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 17 17:36:18.985530 systemd-tmpfiles[1207]: ACLs are not supported, ignoring.
Mar 17 17:36:18.985544 systemd-tmpfiles[1207]: ACLs are not supported, ignoring.
Mar 17 17:36:18.995306 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:36:19.004129 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 17 17:36:19.033823 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 17 17:36:19.043219 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:36:19.057283 systemd-tmpfiles[1228]: ACLs are not supported, ignoring.
Mar 17 17:36:19.057300 systemd-tmpfiles[1228]: ACLs are not supported, ignoring.
Mar 17 17:36:19.064531 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:36:19.447531 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 17 17:36:19.453277 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:36:19.477892 systemd-udevd[1234]: Using default interface naming scheme 'v255'.
Mar 17 17:36:19.506530 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:36:19.521217 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:36:19.548373 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 17 17:36:19.563084 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Mar 17 17:36:19.605198 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 17 17:36:19.687875 systemd-networkd[1244]: lo: Link UP
Mar 17 17:36:19.687884 systemd-networkd[1244]: lo: Gained carrier
Mar 17 17:36:19.689698 systemd-networkd[1244]: Enumeration completed
Mar 17 17:36:19.689836 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:36:19.690609 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:36:19.690612 systemd-networkd[1244]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:36:19.691525 systemd-networkd[1244]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:36:19.691539 systemd-networkd[1244]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:36:19.692068 systemd-networkd[1244]: eth0: Link UP
Mar 17 17:36:19.692072 systemd-networkd[1244]: eth0: Gained carrier
Mar 17 17:36:19.692086 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:36:19.698196 systemd-networkd[1244]: eth1: Link UP
Mar 17 17:36:19.698213 systemd-networkd[1244]: eth1: Gained carrier
Mar 17 17:36:19.698229 systemd-networkd[1244]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:36:19.704799 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 17 17:36:19.728094 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:36:19.735174 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:36:19.736012 systemd-networkd[1244]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 17:36:19.740078 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:36:19.749107 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:36:19.749732 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 17:36:19.749772 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 17:36:19.750133 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:36:19.750290 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:36:19.757166 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:36:19.757484 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:36:19.760101 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:36:19.761347 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:36:19.764973 kernel: mousedev: PS/2 mouse device common for all mice
Mar 17 17:36:19.766233 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:36:19.768382 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:36:19.769805 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:36:19.770009 systemd-networkd[1244]: eth0: DHCPv4 address 138.201.116.42/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 17 17:36:19.803064 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Mar 17 17:36:19.803133 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Mar 17 17:36:19.803146 kernel: [drm] features: -context_init
Mar 17 17:36:19.804213 kernel: [drm] number of scanouts: 1
Mar 17 17:36:19.806354 kernel: [drm] number of cap sets: 0
Mar 17 17:36:19.806414 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Mar 17 17:36:19.820818 kernel: Console: switching to colour frame buffer device 160x50
Mar 17 17:36:19.835184 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Mar 17 17:36:19.834198 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:36:19.854295 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1235)
Mar 17 17:36:19.855431 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:36:19.855689 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:36:19.866214 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:36:19.902719 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 17 17:36:19.945091 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:36:19.982822 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 17 17:36:19.995179 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 17 17:36:20.008042 lvm[1307]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:36:20.032702 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 17 17:36:20.035823 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:36:20.048200 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 17 17:36:20.053649 lvm[1310]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:36:20.077692 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 17 17:36:20.080977 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:36:20.082230 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 17:36:20.082494 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:36:20.083495 systemd[1]: Reached target machines.target - Containers.
Mar 17 17:36:20.085243 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 17 17:36:20.091180 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 17 17:36:20.095836 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 17 17:36:20.099030 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:36:20.106257 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 17 17:36:20.113152 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 17 17:36:20.119088 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 17 17:36:20.124158 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 17 17:36:20.141192 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 17 17:36:20.150697 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 17:36:20.154351 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 17 17:36:20.160986 kernel: loop0: detected capacity change from 0 to 113536
Mar 17 17:36:20.175948 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 17:36:20.196946 kernel: loop1: detected capacity change from 0 to 116808
Mar 17 17:36:20.228967 kernel: loop2: detected capacity change from 0 to 194096
Mar 17 17:36:20.272482 kernel: loop3: detected capacity change from 0 to 8
Mar 17 17:36:20.286969 kernel: loop4: detected capacity change from 0 to 113536
Mar 17 17:36:20.297950 kernel: loop5: detected capacity change from 0 to 116808
Mar 17 17:36:20.315965 kernel: loop6: detected capacity change from 0 to 194096
Mar 17 17:36:20.341995 kernel: loop7: detected capacity change from 0 to 8
Mar 17 17:36:20.341872 (sd-merge)[1331]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Mar 17 17:36:20.343029 (sd-merge)[1331]: Merged extensions into '/usr'.
Mar 17 17:36:20.347245 systemd[1]: Reloading requested from client PID 1318 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 17 17:36:20.347262 systemd[1]: Reloading...
Mar 17 17:36:20.433959 zram_generator::config[1364]: No configuration found.
Mar 17 17:36:20.512973 ldconfig[1314]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 17:36:20.556912 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:36:20.614837 systemd[1]: Reloading finished in 267 ms.
Mar 17 17:36:20.635292 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 17 17:36:20.637823 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 17 17:36:20.648122 systemd[1]: Starting ensure-sysext.service...
Mar 17 17:36:20.651193 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:36:20.674271 systemd[1]: Reloading requested from client PID 1404 ('systemctl') (unit ensure-sysext.service)...
Mar 17 17:36:20.674343 systemd[1]: Reloading...
Mar 17 17:36:20.694494 systemd-tmpfiles[1405]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 17:36:20.695173 systemd-tmpfiles[1405]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 17 17:36:20.696017 systemd-tmpfiles[1405]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 17:36:20.696356 systemd-tmpfiles[1405]: ACLs are not supported, ignoring.
Mar 17 17:36:20.696475 systemd-tmpfiles[1405]: ACLs are not supported, ignoring.
Mar 17 17:36:20.699093 systemd-tmpfiles[1405]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:36:20.699423 systemd-tmpfiles[1405]: Skipping /boot
Mar 17 17:36:20.708629 systemd-tmpfiles[1405]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:36:20.708745 systemd-tmpfiles[1405]: Skipping /boot
Mar 17 17:36:20.755021 zram_generator::config[1440]: No configuration found.
Mar 17 17:36:20.860340 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:36:20.916979 systemd[1]: Reloading finished in 241 ms.
Mar 17 17:36:20.935603 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:36:20.950264 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:36:20.960178 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 17 17:36:20.964105 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 17 17:36:20.973102 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:36:20.980589 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 17 17:36:20.991147 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:36:20.999345 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:36:21.003855 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:36:21.010209 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:36:21.017086 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:36:21.021165 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 17 17:36:21.029085 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:36:21.029267 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:36:21.040276 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:36:21.040522 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:36:21.044636 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:36:21.047121 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:36:21.052528 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:36:21.052847 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:36:21.068363 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 17 17:36:21.071048 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 17 17:36:21.080241 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:36:21.091196 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:36:21.094255 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:36:21.103164 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:36:21.106190 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:36:21.110540 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 17 17:36:21.111710 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:36:21.111871 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:36:21.119872 systemd-resolved[1482]: Positive Trust Anchors:
Mar 17 17:36:21.120458 systemd-resolved[1482]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:36:21.122236 augenrules[1525]: No rules
Mar 17 17:36:21.120493 systemd-resolved[1482]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:36:21.121402 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 17 17:36:21.123841 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:36:21.125195 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:36:21.127323 systemd-resolved[1482]: Using system hostname 'ci-4152-2-2-4-e17a7af1b1'.
Mar 17 17:36:21.127465 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:36:21.127618 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:36:21.128746 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:36:21.128895 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:36:21.131545 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:36:21.138679 systemd[1]: Reached target network.target - Network.
Mar 17 17:36:21.139589 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:36:21.147387 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:36:21.148556 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:36:21.153425 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:36:21.165808 augenrules[1539]: /sbin/augenrules: No change
Mar 17 17:36:21.167275 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:36:21.171096 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:36:21.175339 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:36:21.179413 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:36:21.179576 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 17:36:21.180619 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:36:21.180860 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:36:21.183970 augenrules[1561]: No rules
Mar 17 17:36:21.186885 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:36:21.189050 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:36:21.190130 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:36:21.190277 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:36:21.194753 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:36:21.195883 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:36:21.197203 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:36:21.197599 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:36:21.202182 systemd[1]: Finished ensure-sysext.service.
Mar 17 17:36:21.207868 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:36:21.208096 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:36:21.214126 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 17 17:36:21.269241 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 17 17:36:21.271710 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:36:21.273030 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 17 17:36:21.273831 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 17 17:36:21.274651 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 17 17:36:21.275795 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 17:36:21.275833 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:36:21.276399 systemd[1]: Reached target time-set.target - System Time Set.
Mar 17 17:36:21.277130 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 17 17:36:21.277848 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 17 17:36:21.278621 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:36:21.280568 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 17 17:36:21.283211 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 17 17:36:21.285226 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 17 17:36:21.288452 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 17 17:36:21.289776 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:36:21.291022 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:36:21.292091 systemd[1]: System is tainted: cgroupsv1
Mar 17 17:36:21.292244 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:36:21.292396 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:36:21.294037 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 17 17:36:21.298114 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 17 17:36:21.306141 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 17 17:36:21.310023 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 17 17:36:21.319221 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 17 17:36:21.323011 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 17 17:36:21.324056 jq[1588]: false
Mar 17 17:36:21.330121 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 17 17:36:21.333713 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 17 17:36:21.343612 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Mar 17 17:36:21.349635 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 17 17:36:21.359054 coreos-metadata[1583]: Mar 17 17:36:21.358 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Mar 17 17:36:21.361874 dbus-daemon[1585]: [system] SELinux support is enabled
Mar 17 17:36:21.377409 coreos-metadata[1583]: Mar 17 17:36:21.364 INFO Fetch successful
Mar 17 17:36:21.377409 coreos-metadata[1583]: Mar 17 17:36:21.365 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Mar 17 17:36:21.377409 coreos-metadata[1583]: Mar 17 17:36:21.372 INFO Fetch successful
Mar 17 17:36:21.369428 systemd-networkd[1244]: eth0: Gained IPv6LL
Mar 17 17:36:21.377745 extend-filesystems[1589]: Found loop4
Mar 17 17:36:21.377745 extend-filesystems[1589]: Found loop5
Mar 17 17:36:21.377745 extend-filesystems[1589]: Found loop6
Mar 17 17:36:21.377745 extend-filesystems[1589]: Found loop7
Mar 17 17:36:21.377745 extend-filesystems[1589]: Found sda
Mar 17 17:36:21.377745 extend-filesystems[1589]: Found sda1
Mar 17 17:36:21.377745 extend-filesystems[1589]: Found sda2
Mar 17 17:36:21.377745 extend-filesystems[1589]: Found sda3
Mar 17 17:36:21.377745 extend-filesystems[1589]: Found usr
Mar 17 17:36:21.377745 extend-filesystems[1589]: Found sda4
Mar 17 17:36:21.377745 extend-filesystems[1589]: Found sda6
Mar 17 17:36:21.377745 extend-filesystems[1589]: Found sda7
Mar 17 17:36:21.377745 extend-filesystems[1589]: Found sda9
Mar 17 17:36:21.377745 extend-filesystems[1589]: Checking size of /dev/sda9
Mar 17 17:36:21.369891 systemd-networkd[1244]: eth1: Gained IPv6LL
Mar 17 17:36:21.375043 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 17 17:36:21.384677 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 17 17:36:21.388472 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 17:36:21.395835 systemd[1]: Starting update-engine.service - Update Engine...
Mar 17 17:36:21.950913 jq[1612]: true
Mar 17 17:36:21.407136 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 17 17:36:21.953950 extend-filesystems[1589]: Resized partition /dev/sda9
Mar 17 17:36:21.909334 systemd-timesyncd[1578]: Contacted time server 141.144.241.16:123 (0.flatcar.pool.ntp.org).
Mar 17 17:36:21.964014 extend-filesystems[1625]: resize2fs 1.47.1 (20-May-2024)
Mar 17 17:36:21.975458 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Mar 17 17:36:21.909400 systemd-timesyncd[1578]: Initial clock synchronization to Mon 2025-03-17 17:36:21.907548 UTC.
Mar 17 17:36:21.909453 systemd-resolved[1482]: Clock change detected. Flushing caches.
Mar 17 17:36:21.914597 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 17 17:36:21.920401 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 17 17:36:21.926554 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 17:36:21.926822 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 17 17:36:21.927068 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 17:36:21.927271 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 17 17:36:21.938070 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 17:36:21.938323 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 17 17:36:21.956218 (ntainerd)[1622]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 17 17:36:21.972714 systemd[1]: Reached target network-online.target - Network is Online.
Mar 17 17:36:21.988361 jq[1624]: true
Mar 17 17:36:21.987463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:36:21.995284 update_engine[1610]: I20250317 17:36:21.994333 1610 main.cc:92] Flatcar Update Engine starting
Mar 17 17:36:21.999594 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 17 17:36:22.001275 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 17:36:22.003189 update_engine[1610]: I20250317 17:36:22.002755 1610 update_check_scheduler.cc:74] Next update check in 2m35s
Mar 17 17:36:22.001574 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 17 17:36:22.003999 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 17:36:22.004020 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 17 17:36:22.012169 systemd[1]: Started update-engine.service - Update Engine.
Mar 17 17:36:22.013418 tar[1618]: linux-arm64/helm
Mar 17 17:36:22.022702 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 17:36:22.036543 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 17 17:36:22.103298 systemd-logind[1609]: New seat seat0.
Mar 17 17:36:22.110925 systemd-logind[1609]: Watching system buttons on /dev/input/event0 (Power Button)
Mar 17 17:36:22.110948 systemd-logind[1609]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Mar 17 17:36:22.116642 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 17 17:36:22.126865 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 17 17:36:22.128176 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 17 17:36:22.133208 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 17 17:36:22.177361 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1235)
Mar 17 17:36:22.197380 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Mar 17 17:36:22.220057 extend-filesystems[1625]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Mar 17 17:36:22.220057 extend-filesystems[1625]: old_desc_blocks = 1, new_desc_blocks = 5
Mar 17 17:36:22.220057 extend-filesystems[1625]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Mar 17 17:36:22.232191 extend-filesystems[1589]: Resized filesystem in /dev/sda9
Mar 17 17:36:22.232191 extend-filesystems[1589]: Found sr0
Mar 17 17:36:22.245110 bash[1683]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:36:22.220832 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 17:36:22.221138 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 17 17:36:22.231873 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 17 17:36:22.274265 systemd[1]: Starting sshkeys.service...
Mar 17 17:36:22.298071 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 17 17:36:22.313715 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 17 17:36:22.368155 coreos-metadata[1695]: Mar 17 17:36:22.368 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Mar 17 17:36:22.371801 coreos-metadata[1695]: Mar 17 17:36:22.371 INFO Fetch successful
Mar 17 17:36:22.374485 unknown[1695]: wrote ssh authorized keys file for user: core
Mar 17 17:36:22.414185 update-ssh-keys[1702]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:36:22.415462 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 17 17:36:22.420018 locksmithd[1649]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 17:36:22.421684 systemd[1]: Finished sshkeys.service.
Mar 17 17:36:22.439618 containerd[1622]: time="2025-03-17T17:36:22.437740336Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Mar 17 17:36:22.525360 containerd[1622]: time="2025-03-17T17:36:22.524747056Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:36:22.528831 containerd[1622]: time="2025-03-17T17:36:22.528784736Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:36:22.529640 containerd[1622]: time="2025-03-17T17:36:22.529612536Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 17:36:22.530040 containerd[1622]: time="2025-03-17T17:36:22.530014736Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 17:36:22.530288 containerd[1622]: time="2025-03-17T17:36:22.530268936Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 17 17:36:22.531130 containerd[1622]: time="2025-03-17T17:36:22.531107376Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 17 17:36:22.531294 containerd[1622]: time="2025-03-17T17:36:22.531273096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:36:22.531733 containerd[1622]: time="2025-03-17T17:36:22.531701016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:36:22.532074 containerd[1622]: time="2025-03-17T17:36:22.532048976Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:36:22.533089 containerd[1622]: time="2025-03-17T17:36:22.533066216Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 17:36:22.533205 containerd[1622]: time="2025-03-17T17:36:22.533187136Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:36:22.533294 containerd[1622]: time="2025-03-17T17:36:22.533278976Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 17:36:22.533518 containerd[1622]: time="2025-03-17T17:36:22.533497816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:36:22.534871 containerd[1622]: time="2025-03-17T17:36:22.534332136Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:36:22.534871 containerd[1622]: time="2025-03-17T17:36:22.534530016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:36:22.534871 containerd[1622]: time="2025-03-17T17:36:22.534545336Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 17:36:22.534871 containerd[1622]: time="2025-03-17T17:36:22.534630336Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 17:36:22.534871 containerd[1622]: time="2025-03-17T17:36:22.534678576Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 17:36:22.545586 containerd[1622]: time="2025-03-17T17:36:22.542088456Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 17:36:22.545586 containerd[1622]: time="2025-03-17T17:36:22.542142016Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 17:36:22.545586 containerd[1622]: time="2025-03-17T17:36:22.542157576Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 17 17:36:22.545586 containerd[1622]: time="2025-03-17T17:36:22.542175016Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 17 17:36:22.545586 containerd[1622]: time="2025-03-17T17:36:22.542189416Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 17:36:22.545586 containerd[1622]: time="2025-03-17T17:36:22.542373936Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 17:36:22.545586 containerd[1622]: time="2025-03-17T17:36:22.542690896Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 17:36:22.545586 containerd[1622]: time="2025-03-17T17:36:22.542818296Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 17 17:36:22.545586 containerd[1622]: time="2025-03-17T17:36:22.542836336Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 17 17:36:22.545586 containerd[1622]: time="2025-03-17T17:36:22.542850976Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 17 17:36:22.545586 containerd[1622]: time="2025-03-17T17:36:22.542864416Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 17:36:22.545586 containerd[1622]: time="2025-03-17T17:36:22.542878056Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 17:36:22.545586 containerd[1622]: time="2025-03-17T17:36:22.542890616Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 17:36:22.545586 containerd[1622]: time="2025-03-17T17:36:22.542903736Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 17:36:22.545945 containerd[1622]: time="2025-03-17T17:36:22.542918176Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 17:36:22.545945 containerd[1622]: time="2025-03-17T17:36:22.542932416Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 17:36:22.545945 containerd[1622]: time="2025-03-17T17:36:22.542946336Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 17:36:22.545945 containerd[1622]: time="2025-03-17T17:36:22.542957976Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 17:36:22.545945 containerd[1622]: time="2025-03-17T17:36:22.542980416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 17:36:22.545945 containerd[1622]: time="2025-03-17T17:36:22.543001696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 17:36:22.545945 containerd[1622]: time="2025-03-17T17:36:22.543014576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 17:36:22.545945 containerd[1622]: time="2025-03-17T17:36:22.543027136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 17:36:22.545945 containerd[1622]: time="2025-03-17T17:36:22.543039416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 17:36:22.545945 containerd[1622]: time="2025-03-17T17:36:22.543053696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 17:36:22.545945 containerd[1622]: time="2025-03-17T17:36:22.543064696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 17:36:22.545945 containerd[1622]: time="2025-03-17T17:36:22.543076776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 17:36:22.545945 containerd[1622]: time="2025-03-17T17:36:22.543088296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 17 17:36:22.545945 containerd[1622]: time="2025-03-17T17:36:22.543106176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 17 17:36:22.546183 containerd[1622]: time="2025-03-17T17:36:22.543117336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 17:36:22.546183 containerd[1622]: time="2025-03-17T17:36:22.543129736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 17 17:36:22.546183 containerd[1622]: time="2025-03-17T17:36:22.543142016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 17:36:22.546183 containerd[1622]: time="2025-03-17T17:36:22.543157936Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 17 17:36:22.546183 containerd[1622]: time="2025-03-17T17:36:22.543179296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 17 17:36:22.546183 containerd[1622]: time="2025-03-17T17:36:22.543194896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 17:36:22.546183 containerd[1622]: time="2025-03-17T17:36:22.543205336Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 17:36:22.546183 containerd[1622]: time="2025-03-17T17:36:22.543387896Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 17:36:22.546183 containerd[1622]: time="2025-03-17T17:36:22.543411496Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 17 17:36:22.546183 containerd[1622]: time="2025-03-17T17:36:22.543421016Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 17:36:22.546183 containerd[1622]: time="2025-03-17T17:36:22.543432336Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 17 17:36:22.546183 containerd[1622]: time="2025-03-17T17:36:22.543440696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 17:36:22.546183 containerd[1622]: time="2025-03-17T17:36:22.543452896Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 17 17:36:22.546183 containerd[1622]: time="2025-03-17T17:36:22.543462336Z" level=info msg="NRI interface is disabled by configuration."
Mar 17 17:36:22.546470 containerd[1622]: time="2025-03-17T17:36:22.543474056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 17 17:36:22.546494 containerd[1622]: time="2025-03-17T17:36:22.543868256Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 17 17:36:22.546494 containerd[1622]: time="2025-03-17T17:36:22.543923416Z" level=info msg="Connect containerd service"
Mar 17 17:36:22.546494 containerd[1622]: time="2025-03-17T17:36:22.543963896Z" level=info msg="using legacy CRI server"
Mar 17 17:36:22.546494 containerd[1622]: time="2025-03-17T17:36:22.543970216Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 17 17:36:22.546494 containerd[1622]: time="2025-03-17T17:36:22.544243816Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 17 17:36:22.551940 containerd[1622]: time="2025-03-17T17:36:22.551908096Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 17:36:22.552902 containerd[1622]: time="2025-03-17T17:36:22.552875816Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 17 17:36:22.555161 containerd[1622]: time="2025-03-17T17:36:22.554673496Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 17 17:36:22.556915 containerd[1622]: time="2025-03-17T17:36:22.555832336Z" level=info msg="Start subscribing containerd event"
Mar 17 17:36:22.556915 containerd[1622]: time="2025-03-17T17:36:22.555888136Z" level=info msg="Start recovering state"
Mar 17 17:36:22.556915 containerd[1622]: time="2025-03-17T17:36:22.555956696Z" level=info msg="Start event monitor"
Mar 17 17:36:22.556915 containerd[1622]: time="2025-03-17T17:36:22.555967856Z" level=info msg="Start snapshots syncer"
Mar 17 17:36:22.556915 containerd[1622]: time="2025-03-17T17:36:22.555976696Z" level=info msg="Start cni network conf syncer for default"
Mar 17 17:36:22.556915 containerd[1622]: time="2025-03-17T17:36:22.555986696Z" level=info msg="Start streaming server"
Mar 17 17:36:22.556227 systemd[1]: Started containerd.service - containerd container runtime.
Mar 17 17:36:22.559844 containerd[1622]: time="2025-03-17T17:36:22.559822456Z" level=info msg="containerd successfully booted in 0.124275s"
Mar 17 17:36:23.003535 tar[1618]: linux-arm64/LICENSE
Mar 17 17:36:23.003535 tar[1618]: linux-arm64/README.md
Mar 17 17:36:23.020288 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 17 17:36:23.064244 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:36:23.073050 (kubelet)[1726]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:36:23.141979 sshd_keygen[1629]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 17 17:36:23.164475 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 17 17:36:23.174995 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 17 17:36:23.186671 systemd[1]: issuegen.service: Deactivated successfully.
Mar 17 17:36:23.186978 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 17 17:36:23.193807 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 17 17:36:23.204219 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 17 17:36:23.213236 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 17 17:36:23.222878 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Mar 17 17:36:23.224621 systemd[1]: Reached target getty.target - Login Prompts.
Mar 17 17:36:23.225259 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 17 17:36:23.227710 systemd[1]: Startup finished in 7.157s (kernel) + 4.733s (userspace) = 11.891s.
Mar 17 17:36:23.637698 kubelet[1726]: E0317 17:36:23.637573 1726 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:36:23.642671 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:36:23.643069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:36:33.704654 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:36:33.715591 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:36:33.835571 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:36:33.837506 (kubelet)[1772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:36:33.889702 kubelet[1772]: E0317 17:36:33.889639 1772 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:36:33.894168 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:36:33.894451 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:36:43.955551 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 17 17:36:43.968236 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:36:44.085692 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:36:44.087192 (kubelet)[1793]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:36:44.135630 kubelet[1793]: E0317 17:36:44.135561 1793 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:36:44.138876 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:36:44.139102 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:36:54.204611 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 17 17:36:54.213119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:36:54.337915 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:36:54.342571 (kubelet)[1815]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:36:54.392848 kubelet[1815]: E0317 17:36:54.392786 1815 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:36:54.398727 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:36:54.398948 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:37:04.454441 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 17 17:37:04.464710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:37:04.590706 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:37:04.600893 (kubelet)[1836]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:37:04.652273 kubelet[1836]: E0317 17:37:04.652215 1836 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:37:04.654787 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:37:04.655119 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:37:07.348470 update_engine[1610]: I20250317 17:37:07.347426 1610 update_attempter.cc:509] Updating boot flags...
Mar 17 17:37:07.408531 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1854)
Mar 17 17:37:07.464366 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1853)
Mar 17 17:37:14.704331 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 17 17:37:14.711702 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:37:14.825534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:37:14.828698 (kubelet)[1875]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:37:14.870622 kubelet[1875]: E0317 17:37:14.870564 1875 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:37:14.872780 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:37:14.872924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:37:24.953941 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Mar 17 17:37:24.964743 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:37:25.070607 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:37:25.071159 (kubelet)[1897]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:37:25.113115 kubelet[1897]: E0317 17:37:25.113036 1897 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:37:25.116714 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:37:25.116960 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:37:35.204310 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Mar 17 17:37:35.221177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:37:35.332572 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:37:35.340824 (kubelet)[1918]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:37:35.388174 kubelet[1918]: E0317 17:37:35.388131 1918 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:37:35.391694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:37:35.391862 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:37:45.454387 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Mar 17 17:37:45.466622 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 17 17:37:45.571560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:37:45.576970 (kubelet)[1940]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:37:45.629263 kubelet[1940]: E0317 17:37:45.629189 1940 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:37:45.633778 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:37:45.634173 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:37:55.704382 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Mar 17 17:37:55.711560 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:37:55.829611 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:37:55.830942 (kubelet)[1961]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:37:55.876461 kubelet[1961]: E0317 17:37:55.876389 1961 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:37:55.879031 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:37:55.879214 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:38:03.491433 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Mar 17 17:38:03.500843 systemd[1]: Started sshd@0-138.201.116.42:22-139.178.89.65:49446.service - OpenSSH per-connection server daemon (139.178.89.65:49446). Mar 17 17:38:04.501013 sshd[1970]: Accepted publickey for core from 139.178.89.65 port 49446 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug Mar 17 17:38:04.502812 sshd-session[1970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:04.514169 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:38:04.520669 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:38:04.523404 systemd-logind[1609]: New session 1 of user core. Mar 17 17:38:04.536570 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:38:04.553057 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:38:04.559534 (systemd)[1976]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:38:04.665505 systemd[1976]: Queued start job for default target default.target. Mar 17 17:38:04.665987 systemd[1976]: Created slice app.slice - User Application Slice. Mar 17 17:38:04.666006 systemd[1976]: Reached target paths.target - Paths. Mar 17 17:38:04.666032 systemd[1976]: Reached target timers.target - Timers. Mar 17 17:38:04.676579 systemd[1976]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:38:04.689598 systemd[1976]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:38:04.689813 systemd[1976]: Reached target sockets.target - Sockets. Mar 17 17:38:04.689923 systemd[1976]: Reached target basic.target - Basic System. Mar 17 17:38:04.690076 systemd[1976]: Reached target default.target - Main User Target. Mar 17 17:38:04.690260 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:38:04.691563 systemd[1976]: Startup finished in 123ms. 
Mar 17 17:38:04.693789 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:38:05.385862 systemd[1]: Started sshd@1-138.201.116.42:22-139.178.89.65:49448.service - OpenSSH per-connection server daemon (139.178.89.65:49448). Mar 17 17:38:05.954328 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Mar 17 17:38:05.965002 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:38:06.072646 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:38:06.074765 (kubelet)[2002]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:38:06.120364 kubelet[2002]: E0317 17:38:06.120276 2002 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:38:06.123926 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:38:06.124169 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:38:06.378812 sshd[1988]: Accepted publickey for core from 139.178.89.65 port 49448 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug Mar 17 17:38:06.381364 sshd-session[1988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:06.387252 systemd-logind[1609]: New session 2 of user core. Mar 17 17:38:06.395008 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 17:38:07.067064 sshd[2012]: Connection closed by 139.178.89.65 port 49448 Mar 17 17:38:07.068087 sshd-session[1988]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:07.072668 systemd-logind[1609]: Session 2 logged out. Waiting for processes to exit. 
Mar 17 17:38:07.073001 systemd[1]: sshd@1-138.201.116.42:22-139.178.89.65:49448.service: Deactivated successfully. Mar 17 17:38:07.077328 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:38:07.078502 systemd-logind[1609]: Removed session 2. Mar 17 17:38:07.233751 systemd[1]: Started sshd@2-138.201.116.42:22-139.178.89.65:49460.service - OpenSSH per-connection server daemon (139.178.89.65:49460). Mar 17 17:38:08.235651 sshd[2017]: Accepted publickey for core from 139.178.89.65 port 49460 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug Mar 17 17:38:08.238111 sshd-session[2017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:08.244966 systemd-logind[1609]: New session 3 of user core. Mar 17 17:38:08.251912 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:38:08.915262 sshd[2020]: Connection closed by 139.178.89.65 port 49460 Mar 17 17:38:08.916234 sshd-session[2017]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:08.921570 systemd[1]: sshd@2-138.201.116.42:22-139.178.89.65:49460.service: Deactivated successfully. Mar 17 17:38:08.925539 systemd-logind[1609]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:38:08.925883 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:38:08.927686 systemd-logind[1609]: Removed session 3. Mar 17 17:38:09.082777 systemd[1]: Started sshd@3-138.201.116.42:22-139.178.89.65:49464.service - OpenSSH per-connection server daemon (139.178.89.65:49464). Mar 17 17:38:10.076885 sshd[2025]: Accepted publickey for core from 139.178.89.65 port 49464 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug Mar 17 17:38:10.078874 sshd-session[2025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:10.085580 systemd-logind[1609]: New session 4 of user core. Mar 17 17:38:10.088645 systemd[1]: Started session-4.scope - Session 4 of User core. 
Mar 17 17:38:10.761512 sshd[2028]: Connection closed by 139.178.89.65 port 49464 Mar 17 17:38:10.762549 sshd-session[2025]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:10.766918 systemd[1]: sshd@3-138.201.116.42:22-139.178.89.65:49464.service: Deactivated successfully. Mar 17 17:38:10.770843 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:38:10.771977 systemd-logind[1609]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:38:10.772804 systemd-logind[1609]: Removed session 4. Mar 17 17:38:10.930768 systemd[1]: Started sshd@4-138.201.116.42:22-139.178.89.65:49470.service - OpenSSH per-connection server daemon (139.178.89.65:49470). Mar 17 17:38:11.905868 sshd[2033]: Accepted publickey for core from 139.178.89.65 port 49470 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug Mar 17 17:38:11.908096 sshd-session[2033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:11.915075 systemd-logind[1609]: New session 5 of user core. Mar 17 17:38:11.920121 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 17 17:38:12.435853 sudo[2037]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:38:12.436159 sudo[2037]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:38:12.453651 sudo[2037]: pam_unix(sudo:session): session closed for user root Mar 17 17:38:12.611626 sshd[2036]: Connection closed by 139.178.89.65 port 49470 Mar 17 17:38:12.612814 sshd-session[2033]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:12.617508 systemd[1]: sshd@4-138.201.116.42:22-139.178.89.65:49470.service: Deactivated successfully. Mar 17 17:38:12.621202 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:38:12.622043 systemd-logind[1609]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:38:12.622993 systemd-logind[1609]: Removed session 5. 
Mar 17 17:38:12.782903 systemd[1]: Started sshd@5-138.201.116.42:22-139.178.89.65:47122.service - OpenSSH per-connection server daemon (139.178.89.65:47122). Mar 17 17:38:13.764019 sshd[2042]: Accepted publickey for core from 139.178.89.65 port 47122 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug Mar 17 17:38:13.766300 sshd-session[2042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:13.771905 systemd-logind[1609]: New session 6 of user core. Mar 17 17:38:13.777965 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 17 17:38:14.287143 sudo[2047]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:38:14.288068 sudo[2047]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:38:14.292050 sudo[2047]: pam_unix(sudo:session): session closed for user root Mar 17 17:38:14.297922 sudo[2046]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:38:14.298559 sudo[2046]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:38:14.313972 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:38:14.343730 augenrules[2069]: No rules Mar 17 17:38:14.345010 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:38:14.345265 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:38:14.348542 sudo[2046]: pam_unix(sudo:session): session closed for user root Mar 17 17:38:14.507425 sshd[2045]: Connection closed by 139.178.89.65 port 47122 Mar 17 17:38:14.508373 sshd-session[2042]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:14.513881 systemd[1]: sshd@5-138.201.116.42:22-139.178.89.65:47122.service: Deactivated successfully. Mar 17 17:38:14.518586 systemd[1]: session-6.scope: Deactivated successfully. 
Mar 17 17:38:14.519585 systemd-logind[1609]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:38:14.520484 systemd-logind[1609]: Removed session 6. Mar 17 17:38:14.681928 systemd[1]: Started sshd@6-138.201.116.42:22-139.178.89.65:47134.service - OpenSSH per-connection server daemon (139.178.89.65:47134). Mar 17 17:38:15.670845 sshd[2078]: Accepted publickey for core from 139.178.89.65 port 47134 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug Mar 17 17:38:15.672890 sshd-session[2078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:15.679481 systemd-logind[1609]: New session 7 of user core. Mar 17 17:38:15.686749 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 17:38:16.196007 sudo[2082]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:38:16.196687 sudo[2082]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:38:16.197729 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Mar 17 17:38:16.206613 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:38:16.350569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:38:16.352939 (kubelet)[2103]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:38:16.399707 kubelet[2103]: E0317 17:38:16.399631 2103 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:38:16.402731 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:38:16.402867 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:38:16.534776 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 17 17:38:16.536753 (dockerd)[2121]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 17 17:38:16.762917 dockerd[2121]: time="2025-03-17T17:38:16.762513170Z" level=info msg="Starting up" Mar 17 17:38:16.860933 dockerd[2121]: time="2025-03-17T17:38:16.860807537Z" level=info msg="Loading containers: start." Mar 17 17:38:17.028386 kernel: Initializing XFRM netlink socket Mar 17 17:38:17.116715 systemd-networkd[1244]: docker0: Link UP Mar 17 17:38:17.150395 dockerd[2121]: time="2025-03-17T17:38:17.149526545Z" level=info msg="Loading containers: done." Mar 17 17:38:17.162947 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck24659584-merged.mount: Deactivated successfully. 
Mar 17 17:38:17.166396 dockerd[2121]: time="2025-03-17T17:38:17.166312525Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 17:38:17.166505 dockerd[2121]: time="2025-03-17T17:38:17.166477613Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Mar 17 17:38:17.166620 dockerd[2121]: time="2025-03-17T17:38:17.166587285Z" level=info msg="Daemon has completed initialization" Mar 17 17:38:17.200247 dockerd[2121]: time="2025-03-17T17:38:17.200172728Z" level=info msg="API listen on /run/docker.sock" Mar 17 17:38:17.201023 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 17 17:38:18.349123 containerd[1622]: time="2025-03-17T17:38:18.349053298Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 17 17:38:19.018203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount43284170.mount: Deactivated successfully. 
Mar 17 17:38:20.618063 containerd[1622]: time="2025-03-17T17:38:20.617976574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:20.619658 containerd[1622]: time="2025-03-17T17:38:20.619587212Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=29793616" Mar 17 17:38:20.620357 containerd[1622]: time="2025-03-17T17:38:20.620198619Z" level=info msg="ImageCreate event name:\"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:20.624616 containerd[1622]: time="2025-03-17T17:38:20.624551604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:20.627718 containerd[1622]: time="2025-03-17T17:38:20.627007313Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"29790324\" in 2.277904921s" Mar 17 17:38:20.627718 containerd[1622]: time="2025-03-17T17:38:20.627077973Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\"" Mar 17 17:38:20.652906 containerd[1622]: time="2025-03-17T17:38:20.652869918Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 17 17:38:22.421483 containerd[1622]: time="2025-03-17T17:38:22.421403343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:22.423977 containerd[1622]: time="2025-03-17T17:38:22.423916234Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=26861187" Mar 17 17:38:22.424887 containerd[1622]: time="2025-03-17T17:38:22.424853077Z" level=info msg="ImageCreate event name:\"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:22.431085 containerd[1622]: time="2025-03-17T17:38:22.430743642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:22.432982 containerd[1622]: time="2025-03-17T17:38:22.432652536Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"28301963\" in 1.779587085s" Mar 17 17:38:22.432982 containerd[1622]: time="2025-03-17T17:38:22.432689065Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\"" Mar 17 17:38:22.455589 containerd[1622]: time="2025-03-17T17:38:22.455467563Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 17 17:38:23.752447 containerd[1622]: time="2025-03-17T17:38:23.752392932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:23.753783 containerd[1622]: 
time="2025-03-17T17:38:23.753738392Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=16264656" Mar 17 17:38:23.756360 containerd[1622]: time="2025-03-17T17:38:23.754632058Z" level=info msg="ImageCreate event name:\"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:23.761303 containerd[1622]: time="2025-03-17T17:38:23.761225882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:23.762393 containerd[1622]: time="2025-03-17T17:38:23.761988995Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"17705450\" in 1.306461297s" Mar 17 17:38:23.762393 containerd[1622]: time="2025-03-17T17:38:23.762060373Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\"" Mar 17 17:38:23.784701 containerd[1622]: time="2025-03-17T17:38:23.784658639Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 17 17:38:24.783315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2179356565.mount: Deactivated successfully. 
Mar 17 17:38:25.125567 containerd[1622]: time="2025-03-17T17:38:25.125364317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:25.126891 containerd[1622]: time="2025-03-17T17:38:25.126144624Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=25771874" Mar 17 17:38:25.127808 containerd[1622]: time="2025-03-17T17:38:25.127767574Z" level=info msg="ImageCreate event name:\"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:25.130298 containerd[1622]: time="2025-03-17T17:38:25.130264214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:25.131104 containerd[1622]: time="2025-03-17T17:38:25.130981586Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"25770867\" in 1.346284858s" Mar 17 17:38:25.131104 containerd[1622]: time="2025-03-17T17:38:25.131016715Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\"" Mar 17 17:38:25.155024 containerd[1622]: time="2025-03-17T17:38:25.154984472Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 17:38:25.781583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3069858229.mount: Deactivated successfully. 
Mar 17 17:38:26.367059 containerd[1622]: time="2025-03-17T17:38:26.365891030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:26.368122 containerd[1622]: time="2025-03-17T17:38:26.368069220Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Mar 17 17:38:26.369763 containerd[1622]: time="2025-03-17T17:38:26.369682038Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:26.374134 containerd[1622]: time="2025-03-17T17:38:26.374048622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:26.375582 containerd[1622]: time="2025-03-17T17:38:26.375329162Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.220129518s" Mar 17 17:38:26.375582 containerd[1622]: time="2025-03-17T17:38:26.375382615Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Mar 17 17:38:26.395024 containerd[1622]: time="2025-03-17T17:38:26.394984849Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 17 17:38:26.454080 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Mar 17 17:38:26.468959 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 17 17:38:26.576313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:38:26.580330 (kubelet)[2464]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:38:26.625107 kubelet[2464]: E0317 17:38:26.624992 2464 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:38:26.629176 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:38:26.629468 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:38:26.945868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2450821830.mount: Deactivated successfully. Mar 17 17:38:26.953110 containerd[1622]: time="2025-03-17T17:38:26.953022406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:26.954655 containerd[1622]: time="2025-03-17T17:38:26.954559606Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841" Mar 17 17:38:26.955382 containerd[1622]: time="2025-03-17T17:38:26.955239366Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:26.958684 containerd[1622]: time="2025-03-17T17:38:26.958620278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:26.959764 containerd[1622]: time="2025-03-17T17:38:26.959599548Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 564.575249ms" Mar 17 17:38:26.959764 containerd[1622]: time="2025-03-17T17:38:26.959640917Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Mar 17 17:38:26.984318 containerd[1622]: time="2025-03-17T17:38:26.984051399Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 17 17:38:27.543241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3630603554.mount: Deactivated successfully. Mar 17 17:38:30.344740 containerd[1622]: time="2025-03-17T17:38:30.344670506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:30.346789 containerd[1622]: time="2025-03-17T17:38:30.346722983Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552" Mar 17 17:38:30.347599 containerd[1622]: time="2025-03-17T17:38:30.347164677Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:30.351142 containerd[1622]: time="2025-03-17T17:38:30.351085031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:30.352793 containerd[1622]: time="2025-03-17T17:38:30.352587350Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag 
\"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.368480098s" Mar 17 17:38:30.352793 containerd[1622]: time="2025-03-17T17:38:30.352642482Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Mar 17 17:38:35.554377 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:38:35.576801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:38:35.592409 systemd[1]: Reloading requested from client PID 2593 ('systemctl') (unit session-7.scope)... Mar 17 17:38:35.592427 systemd[1]: Reloading... Mar 17 17:38:35.708378 zram_generator::config[2634]: No configuration found. Mar 17 17:38:35.815626 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:38:35.878618 systemd[1]: Reloading finished in 285 ms. Mar 17 17:38:35.938386 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 17 17:38:35.938488 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 17 17:38:35.938967 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:38:35.946051 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:38:36.064637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:38:36.074720 (kubelet)[2693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:38:36.121037 kubelet[2693]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:38:36.121037 kubelet[2693]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:38:36.121037 kubelet[2693]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:38:36.121466 kubelet[2693]: I0317 17:38:36.121133 2693 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:38:37.219410 kubelet[2693]: I0317 17:38:37.219014 2693 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 17:38:37.219410 kubelet[2693]: I0317 17:38:37.219053 2693 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:38:37.219410 kubelet[2693]: I0317 17:38:37.219335 2693 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 17:38:37.237104 kubelet[2693]: E0317 17:38:37.237071 2693 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://138.201.116.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 138.201.116.42:6443: connect: connection refused Mar 17 17:38:37.237695 kubelet[2693]: I0317 17:38:37.237302 2693 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:38:37.250331 kubelet[2693]: I0317 17:38:37.250288 2693 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:38:37.250896 kubelet[2693]: I0317 17:38:37.250861 2693 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:38:37.251162 kubelet[2693]: I0317 17:38:37.250894 2693 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-2-4-e17a7af1b1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 17:38:37.251267 kubelet[2693]: I0317 17:38:37.251225 2693 topology_manager.go:138] "Creating topology manager with none policy" 
Mar 17 17:38:37.251267 kubelet[2693]: I0317 17:38:37.251239 2693 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 17:38:37.251595 kubelet[2693]: I0317 17:38:37.251398 2693 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:38:37.252555 kubelet[2693]: I0317 17:38:37.252527 2693 kubelet.go:400] "Attempting to sync node with API server" Mar 17 17:38:37.252555 kubelet[2693]: I0317 17:38:37.252552 2693 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:38:37.254241 kubelet[2693]: I0317 17:38:37.252702 2693 kubelet.go:312] "Adding apiserver pod source" Mar 17 17:38:37.254241 kubelet[2693]: I0317 17:38:37.252809 2693 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:38:37.255149 kubelet[2693]: I0317 17:38:37.255130 2693 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:38:37.255663 kubelet[2693]: I0317 17:38:37.255650 2693 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:38:37.255832 kubelet[2693]: W0317 17:38:37.255822 2693 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 17 17:38:37.257377 kubelet[2693]: I0317 17:38:37.257336 2693 server.go:1264] "Started kubelet" Mar 17 17:38:37.258455 kubelet[2693]: W0317 17:38:37.258403 2693 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.201.116.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.201.116.42:6443: connect: connection refused Mar 17 17:38:37.258526 kubelet[2693]: E0317 17:38:37.258468 2693 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://138.201.116.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.201.116.42:6443: connect: connection refused Mar 17 17:38:37.258553 kubelet[2693]: W0317 17:38:37.258538 2693 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.201.116.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-2-4-e17a7af1b1&limit=500&resourceVersion=0": dial tcp 138.201.116.42:6443: connect: connection refused Mar 17 17:38:37.258577 kubelet[2693]: E0317 17:38:37.258566 2693 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://138.201.116.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-2-4-e17a7af1b1&limit=500&resourceVersion=0": dial tcp 138.201.116.42:6443: connect: connection refused Mar 17 17:38:37.263322 kubelet[2693]: I0317 17:38:37.263281 2693 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:38:37.265170 kubelet[2693]: I0317 17:38:37.265119 2693 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:38:37.265570 kubelet[2693]: I0317 17:38:37.265554 2693 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:38:37.266224 kubelet[2693]: I0317 17:38:37.265762 2693 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" Mar 17 17:38:37.267977 kubelet[2693]: I0317 17:38:37.267940 2693 server.go:455] "Adding debug handlers to kubelet server" Mar 17 17:38:37.268844 kubelet[2693]: I0317 17:38:37.268808 2693 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 17:38:37.270275 kubelet[2693]: E0317 17:38:37.270107 2693 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://138.201.116.42:6443/api/v1/namespaces/default/events\": dial tcp 138.201.116.42:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-2-4-e17a7af1b1.182da7ca160b2e2f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-2-4-e17a7af1b1,UID:ci-4152-2-2-4-e17a7af1b1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-2-4-e17a7af1b1,},FirstTimestamp:2025-03-17 17:38:37.257313839 +0000 UTC m=+1.179180107,LastTimestamp:2025-03-17 17:38:37.257313839 +0000 UTC m=+1.179180107,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-2-4-e17a7af1b1,}" Mar 17 17:38:37.274372 kubelet[2693]: W0317 17:38:37.274312 2693 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.201.116.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.201.116.42:6443: connect: connection refused Mar 17 17:38:37.274505 kubelet[2693]: E0317 17:38:37.274492 2693 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://138.201.116.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.201.116.42:6443: connect: connection refused Mar 17 17:38:37.274559 kubelet[2693]: I0317 17:38:37.274429 2693 desired_state_of_world_populator.go:149] "Desired state 
populator starts to run" Mar 17 17:38:37.274683 kubelet[2693]: E0317 17:38:37.274655 2693 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.201.116.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-2-4-e17a7af1b1?timeout=10s\": dial tcp 138.201.116.42:6443: connect: connection refused" interval="200ms" Mar 17 17:38:37.275044 kubelet[2693]: I0317 17:38:37.274555 2693 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:38:37.276114 kubelet[2693]: I0317 17:38:37.276097 2693 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:38:37.276206 kubelet[2693]: I0317 17:38:37.276196 2693 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:38:37.276369 kubelet[2693]: I0317 17:38:37.276331 2693 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:38:37.285395 kubelet[2693]: I0317 17:38:37.284235 2693 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:38:37.285395 kubelet[2693]: I0317 17:38:37.285214 2693 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:38:37.285515 kubelet[2693]: I0317 17:38:37.285462 2693 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:38:37.285515 kubelet[2693]: I0317 17:38:37.285489 2693 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 17:38:37.285561 kubelet[2693]: E0317 17:38:37.285532 2693 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:38:37.292415 kubelet[2693]: E0317 17:38:37.292392 2693 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:38:37.295301 kubelet[2693]: W0317 17:38:37.295256 2693 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.201.116.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.201.116.42:6443: connect: connection refused Mar 17 17:38:37.295408 kubelet[2693]: E0317 17:38:37.295308 2693 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://138.201.116.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.201.116.42:6443: connect: connection refused Mar 17 17:38:37.301621 kubelet[2693]: I0317 17:38:37.301586 2693 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:38:37.301621 kubelet[2693]: I0317 17:38:37.301606 2693 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:38:37.301621 kubelet[2693]: I0317 17:38:37.301626 2693 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:38:37.303590 kubelet[2693]: I0317 17:38:37.303567 2693 policy_none.go:49] "None policy: Start" Mar 17 17:38:37.304280 kubelet[2693]: I0317 17:38:37.304259 2693 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:38:37.304345 kubelet[2693]: I0317 17:38:37.304293 2693 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:38:37.310387 kubelet[2693]: I0317 17:38:37.309780 2693 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:38:37.310387 kubelet[2693]: I0317 17:38:37.310045 2693 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:38:37.310387 kubelet[2693]: I0317 17:38:37.310156 2693 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:38:37.312890 kubelet[2693]: 
E0317 17:38:37.312866 2693 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-2-4-e17a7af1b1\" not found" Mar 17 17:38:37.372530 kubelet[2693]: I0317 17:38:37.372489 2693 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:37.373042 kubelet[2693]: E0317 17:38:37.372968 2693 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.201.116.42:6443/api/v1/nodes\": dial tcp 138.201.116.42:6443: connect: connection refused" node="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:37.386641 kubelet[2693]: I0317 17:38:37.386430 2693 topology_manager.go:215] "Topology Admit Handler" podUID="a147e7358b33b155bf74359def683d9e" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:37.389268 kubelet[2693]: I0317 17:38:37.389160 2693 topology_manager.go:215] "Topology Admit Handler" podUID="46fe30f3432637902d1c5340cdcad885" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:37.391500 kubelet[2693]: I0317 17:38:37.391280 2693 topology_manager.go:215] "Topology Admit Handler" podUID="6567e8ca4128423cdcbd057f1f7599d1" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:37.476607 kubelet[2693]: E0317 17:38:37.475646 2693 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.201.116.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-2-4-e17a7af1b1?timeout=10s\": dial tcp 138.201.116.42:6443: connect: connection refused" interval="400ms" Mar 17 17:38:37.476607 kubelet[2693]: I0317 17:38:37.475770 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a147e7358b33b155bf74359def683d9e-ca-certs\") pod \"kube-apiserver-ci-4152-2-2-4-e17a7af1b1\" (UID: 
\"a147e7358b33b155bf74359def683d9e\") " pod="kube-system/kube-apiserver-ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:37.476607 kubelet[2693]: I0317 17:38:37.475834 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a147e7358b33b155bf74359def683d9e-k8s-certs\") pod \"kube-apiserver-ci-4152-2-2-4-e17a7af1b1\" (UID: \"a147e7358b33b155bf74359def683d9e\") " pod="kube-system/kube-apiserver-ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:37.476607 kubelet[2693]: I0317 17:38:37.475884 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a147e7358b33b155bf74359def683d9e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-2-4-e17a7af1b1\" (UID: \"a147e7358b33b155bf74359def683d9e\") " pod="kube-system/kube-apiserver-ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:37.476607 kubelet[2693]: I0317 17:38:37.475931 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/46fe30f3432637902d1c5340cdcad885-ca-certs\") pod \"kube-controller-manager-ci-4152-2-2-4-e17a7af1b1\" (UID: \"46fe30f3432637902d1c5340cdcad885\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:37.477150 kubelet[2693]: I0317 17:38:37.475974 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/46fe30f3432637902d1c5340cdcad885-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-2-4-e17a7af1b1\" (UID: \"46fe30f3432637902d1c5340cdcad885\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:37.477150 kubelet[2693]: I0317 17:38:37.476043 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/46fe30f3432637902d1c5340cdcad885-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-2-4-e17a7af1b1\" (UID: \"46fe30f3432637902d1c5340cdcad885\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:37.477150 kubelet[2693]: I0317 17:38:37.476089 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6567e8ca4128423cdcbd057f1f7599d1-kubeconfig\") pod \"kube-scheduler-ci-4152-2-2-4-e17a7af1b1\" (UID: \"6567e8ca4128423cdcbd057f1f7599d1\") " pod="kube-system/kube-scheduler-ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:37.477150 kubelet[2693]: I0317 17:38:37.476130 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/46fe30f3432637902d1c5340cdcad885-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-2-4-e17a7af1b1\" (UID: \"46fe30f3432637902d1c5340cdcad885\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:37.477150 kubelet[2693]: I0317 17:38:37.476169 2693 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/46fe30f3432637902d1c5340cdcad885-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-2-4-e17a7af1b1\" (UID: \"46fe30f3432637902d1c5340cdcad885\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:37.577464 kubelet[2693]: I0317 17:38:37.576751 2693 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:37.577464 kubelet[2693]: E0317 17:38:37.577374 2693 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.201.116.42:6443/api/v1/nodes\": dial tcp 138.201.116.42:6443: connect: connection refused" 
node="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:37.700196 containerd[1622]: time="2025-03-17T17:38:37.700114077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-2-4-e17a7af1b1,Uid:46fe30f3432637902d1c5340cdcad885,Namespace:kube-system,Attempt:0,}" Mar 17 17:38:37.701567 containerd[1622]: time="2025-03-17T17:38:37.701158667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-2-4-e17a7af1b1,Uid:a147e7358b33b155bf74359def683d9e,Namespace:kube-system,Attempt:0,}" Mar 17 17:38:37.705843 containerd[1622]: time="2025-03-17T17:38:37.705768021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-2-4-e17a7af1b1,Uid:6567e8ca4128423cdcbd057f1f7599d1,Namespace:kube-system,Attempt:0,}" Mar 17 17:38:37.876998 kubelet[2693]: E0317 17:38:37.876806 2693 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.201.116.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-2-4-e17a7af1b1?timeout=10s\": dial tcp 138.201.116.42:6443: connect: connection refused" interval="800ms" Mar 17 17:38:37.981050 kubelet[2693]: I0317 17:38:37.980573 2693 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:37.981413 kubelet[2693]: E0317 17:38:37.981385 2693 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.201.116.42:6443/api/v1/nodes\": dial tcp 138.201.116.42:6443: connect: connection refused" node="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:38.145753 kubelet[2693]: W0317 17:38:38.145570 2693 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.201.116.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.201.116.42:6443: connect: connection refused Mar 17 17:38:38.145753 kubelet[2693]: E0317 17:38:38.145636 2693 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.Service: failed to list *v1.Service: Get "https://138.201.116.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.201.116.42:6443: connect: connection refused Mar 17 17:38:38.181035 kubelet[2693]: W0317 17:38:38.180882 2693 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.201.116.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.201.116.42:6443: connect: connection refused Mar 17 17:38:38.181035 kubelet[2693]: E0317 17:38:38.181000 2693 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://138.201.116.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.201.116.42:6443: connect: connection refused Mar 17 17:38:38.226153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1799491309.mount: Deactivated successfully. Mar 17 17:38:38.231391 containerd[1622]: time="2025-03-17T17:38:38.231001390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:38:38.232835 containerd[1622]: time="2025-03-17T17:38:38.232731816Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Mar 17 17:38:38.235308 containerd[1622]: time="2025-03-17T17:38:38.235248462Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:38:38.237470 containerd[1622]: time="2025-03-17T17:38:38.237418366Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 
17:38:38.238824 containerd[1622]: time="2025-03-17T17:38:38.238757803Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:38:38.241148 containerd[1622]: time="2025-03-17T17:38:38.241090856Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:38:38.242081 containerd[1622]: time="2025-03-17T17:38:38.242023061Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:38:38.244926 containerd[1622]: time="2025-03-17T17:38:38.244856002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:38:38.247432 containerd[1622]: time="2025-03-17T17:38:38.247379809Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 547.115305ms" Mar 17 17:38:38.249279 containerd[1622]: time="2025-03-17T17:38:38.249216494Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 547.948408ms" Mar 17 17:38:38.250441 containerd[1622]: time="2025-03-17T17:38:38.250401664Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", 
repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 544.55887ms" Mar 17 17:38:38.367669 containerd[1622]: time="2025-03-17T17:38:38.367529160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:38:38.368287 containerd[1622]: time="2025-03-17T17:38:38.367937072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:38:38.368287 containerd[1622]: time="2025-03-17T17:38:38.367962077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:38.368287 containerd[1622]: time="2025-03-17T17:38:38.368055493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:38.370786 containerd[1622]: time="2025-03-17T17:38:38.370598664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:38:38.370786 containerd[1622]: time="2025-03-17T17:38:38.370657394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:38:38.370786 containerd[1622]: time="2025-03-17T17:38:38.370672717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:38.371874 containerd[1622]: time="2025-03-17T17:38:38.371645369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:38.373862 containerd[1622]: time="2025-03-17T17:38:38.372797973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:38:38.373862 containerd[1622]: time="2025-03-17T17:38:38.372858264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:38:38.373862 containerd[1622]: time="2025-03-17T17:38:38.372873586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:38.373862 containerd[1622]: time="2025-03-17T17:38:38.372950680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:38.449315 containerd[1622]: time="2025-03-17T17:38:38.449207660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-2-4-e17a7af1b1,Uid:6567e8ca4128423cdcbd057f1f7599d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a1c2525d6fa34775c838ddf4a10ef3387a826a4a6b5e25639d0993b18ccc218\"" Mar 17 17:38:38.453445 containerd[1622]: time="2025-03-17T17:38:38.452514286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-2-4-e17a7af1b1,Uid:a147e7358b33b155bf74359def683d9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b5680d01c3cd0ef9b683ede30de8c71eaf317f483eceaf8de6277611f02c059\"" Mar 17 17:38:38.460686 containerd[1622]: time="2025-03-17T17:38:38.460632443Z" level=info msg="CreateContainer within sandbox \"9a1c2525d6fa34775c838ddf4a10ef3387a826a4a6b5e25639d0993b18ccc218\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 17:38:38.461915 containerd[1622]: time="2025-03-17T17:38:38.461876903Z" level=info msg="CreateContainer within sandbox \"5b5680d01c3cd0ef9b683ede30de8c71eaf317f483eceaf8de6277611f02c059\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 17:38:38.467273 containerd[1622]: time="2025-03-17T17:38:38.467231811Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-2-4-e17a7af1b1,Uid:46fe30f3432637902d1c5340cdcad885,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8a02cab923639f925d07bef340adc7e48c5bd24817d981d93071725759db173\"" Mar 17 17:38:38.475821 containerd[1622]: time="2025-03-17T17:38:38.475777564Z" level=info msg="CreateContainer within sandbox \"f8a02cab923639f925d07bef340adc7e48c5bd24817d981d93071725759db173\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 17:38:38.488001 kubelet[2693]: W0317 17:38:38.487913 2693 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.201.116.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.201.116.42:6443: connect: connection refused Mar 17 17:38:38.488610 kubelet[2693]: E0317 17:38:38.488521 2693 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://138.201.116.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.201.116.42:6443: connect: connection refused Mar 17 17:38:38.491222 containerd[1622]: time="2025-03-17T17:38:38.491090675Z" level=info msg="CreateContainer within sandbox \"9a1c2525d6fa34775c838ddf4a10ef3387a826a4a6b5e25639d0993b18ccc218\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"195f4420aad92ff70393986ed1c4cbe07d7a0fe207489c14dd0b6bd186c10738\"" Mar 17 17:38:38.492213 containerd[1622]: time="2025-03-17T17:38:38.492159745Z" level=info msg="StartContainer for \"195f4420aad92ff70393986ed1c4cbe07d7a0fe207489c14dd0b6bd186c10738\"" Mar 17 17:38:38.501798 containerd[1622]: time="2025-03-17T17:38:38.501357133Z" level=info msg="CreateContainer within sandbox \"5b5680d01c3cd0ef9b683ede30de8c71eaf317f483eceaf8de6277611f02c059\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"5eebccfa510cd092eb4af7c97c063b96c582abcd7ab9fc0cd65a4b6c6af4096b\"" Mar 17 17:38:38.502746 containerd[1622]: time="2025-03-17T17:38:38.502684728Z" level=info msg="StartContainer for \"5eebccfa510cd092eb4af7c97c063b96c582abcd7ab9fc0cd65a4b6c6af4096b\"" Mar 17 17:38:38.504392 containerd[1622]: time="2025-03-17T17:38:38.504274089Z" level=info msg="CreateContainer within sandbox \"f8a02cab923639f925d07bef340adc7e48c5bd24817d981d93071725759db173\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7e6be468dbda955832ca047d7bba4b72f4429c624fb77938f35f45b6c6636725\"" Mar 17 17:38:38.505507 containerd[1622]: time="2025-03-17T17:38:38.505164567Z" level=info msg="StartContainer for \"7e6be468dbda955832ca047d7bba4b72f4429c624fb77938f35f45b6c6636725\"" Mar 17 17:38:38.589584 containerd[1622]: time="2025-03-17T17:38:38.589540625Z" level=info msg="StartContainer for \"195f4420aad92ff70393986ed1c4cbe07d7a0fe207489c14dd0b6bd186c10738\" returns successfully" Mar 17 17:38:38.604282 containerd[1622]: time="2025-03-17T17:38:38.604219303Z" level=info msg="StartContainer for \"5eebccfa510cd092eb4af7c97c063b96c582abcd7ab9fc0cd65a4b6c6af4096b\" returns successfully" Mar 17 17:38:38.634138 containerd[1622]: time="2025-03-17T17:38:38.634097673Z" level=info msg="StartContainer for \"7e6be468dbda955832ca047d7bba4b72f4429c624fb77938f35f45b6c6636725\" returns successfully" Mar 17 17:38:38.678247 kubelet[2693]: E0317 17:38:38.678161 2693 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.201.116.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-2-4-e17a7af1b1?timeout=10s\": dial tcp 138.201.116.42:6443: connect: connection refused" interval="1.6s" Mar 17 17:38:38.689284 kubelet[2693]: W0317 17:38:38.689145 2693 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://138.201.116.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-2-4-e17a7af1b1&limit=500&resourceVersion=0": dial tcp 138.201.116.42:6443: connect: connection refused Mar 17 17:38:38.689284 kubelet[2693]: E0317 17:38:38.689253 2693 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://138.201.116.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-2-4-e17a7af1b1&limit=500&resourceVersion=0": dial tcp 138.201.116.42:6443: connect: connection refused Mar 17 17:38:38.784925 kubelet[2693]: I0317 17:38:38.784860 2693 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:41.041931 kubelet[2693]: I0317 17:38:41.041887 2693 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:41.053772 kubelet[2693]: E0317 17:38:41.053554 2693 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4152-2-2-4-e17a7af1b1.182da7ca160b2e2f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-2-4-e17a7af1b1,UID:ci-4152-2-2-4-e17a7af1b1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-2-4-e17a7af1b1,},FirstTimestamp:2025-03-17 17:38:37.257313839 +0000 UTC m=+1.179180107,LastTimestamp:2025-03-17 17:38:37.257313839 +0000 UTC m=+1.179180107,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-2-4-e17a7af1b1,}" Mar 17 17:38:41.256901 kubelet[2693]: I0317 17:38:41.256388 2693 apiserver.go:52] "Watching apiserver" Mar 17 17:38:41.275283 kubelet[2693]: I0317 17:38:41.275251 2693 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 
17:38:43.012259 systemd[1]: Reloading requested from client PID 2969 ('systemctl') (unit session-7.scope)... Mar 17 17:38:43.012277 systemd[1]: Reloading... Mar 17 17:38:43.126375 zram_generator::config[3015]: No configuration found. Mar 17 17:38:43.275473 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:38:43.354931 systemd[1]: Reloading finished in 342 ms. Mar 17 17:38:43.390757 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:38:43.404935 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:38:43.405773 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:38:43.415723 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:38:43.568937 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:38:43.585133 (kubelet)[3064]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:38:43.633119 kubelet[3064]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:38:43.633119 kubelet[3064]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:38:43.633119 kubelet[3064]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 17:38:43.633716 kubelet[3064]: I0317 17:38:43.633163 3064 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:38:43.640002 kubelet[3064]: I0317 17:38:43.639621 3064 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 17:38:43.640002 kubelet[3064]: I0317 17:38:43.639678 3064 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:38:43.640002 kubelet[3064]: I0317 17:38:43.639890 3064 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 17:38:43.641487 kubelet[3064]: I0317 17:38:43.641448 3064 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 17:38:43.643942 kubelet[3064]: I0317 17:38:43.643107 3064 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:38:43.656838 kubelet[3064]: I0317 17:38:43.656807 3064 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:38:43.657414 kubelet[3064]: I0317 17:38:43.657378 3064 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:38:43.657625 kubelet[3064]: I0317 17:38:43.657419 3064 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-2-4-e17a7af1b1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 17:38:43.657720 kubelet[3064]: I0317 17:38:43.657633 3064 topology_manager.go:138] "Creating topology manager with none policy" 
Mar 17 17:38:43.657720 kubelet[3064]: I0317 17:38:43.657643 3064 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 17:38:43.657720 kubelet[3064]: I0317 17:38:43.657683 3064 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:38:43.657822 kubelet[3064]: I0317 17:38:43.657810 3064 kubelet.go:400] "Attempting to sync node with API server" Mar 17 17:38:43.657877 kubelet[3064]: I0317 17:38:43.657826 3064 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:38:43.657877 kubelet[3064]: I0317 17:38:43.657863 3064 kubelet.go:312] "Adding apiserver pod source" Mar 17 17:38:43.657930 kubelet[3064]: I0317 17:38:43.657883 3064 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:38:43.661448 kubelet[3064]: I0317 17:38:43.660313 3064 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:38:43.661448 kubelet[3064]: I0317 17:38:43.660637 3064 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:38:43.661448 kubelet[3064]: I0317 17:38:43.661223 3064 server.go:1264] "Started kubelet" Mar 17 17:38:43.668566 kubelet[3064]: I0317 17:38:43.668537 3064 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:38:43.678409 kubelet[3064]: I0317 17:38:43.677674 3064 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:38:43.679281 kubelet[3064]: I0317 17:38:43.679240 3064 server.go:455] "Adding debug handlers to kubelet server" Mar 17 17:38:43.689371 kubelet[3064]: I0317 17:38:43.686434 3064 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:38:43.689371 kubelet[3064]: I0317 17:38:43.686697 3064 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:38:43.689967 kubelet[3064]: I0317 17:38:43.689766 3064 
volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 17:38:43.693630 kubelet[3064]: I0317 17:38:43.693606 3064 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:38:43.693772 kubelet[3064]: I0317 17:38:43.693744 3064 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:38:43.696417 kubelet[3064]: I0317 17:38:43.695718 3064 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:38:43.697240 kubelet[3064]: I0317 17:38:43.697002 3064 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:38:43.697240 kubelet[3064]: I0317 17:38:43.697071 3064 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:38:43.697240 kubelet[3064]: I0317 17:38:43.697089 3064 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 17:38:43.697240 kubelet[3064]: E0317 17:38:43.697203 3064 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:38:43.705474 kubelet[3064]: I0317 17:38:43.705447 3064 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:38:43.705702 kubelet[3064]: I0317 17:38:43.705681 3064 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:38:43.711266 kubelet[3064]: I0317 17:38:43.711234 3064 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:38:43.772335 kubelet[3064]: I0317 17:38:43.772248 3064 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:38:43.772727 kubelet[3064]: I0317 17:38:43.772703 3064 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:38:43.772875 kubelet[3064]: I0317 17:38:43.772848 3064 state_mem.go:36] "Initialized new in-memory state store" Mar 17 
17:38:43.773287 kubelet[3064]: I0317 17:38:43.773257 3064 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 17:38:43.773858 kubelet[3064]: I0317 17:38:43.773447 3064 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 17:38:43.773858 kubelet[3064]: I0317 17:38:43.773489 3064 policy_none.go:49] "None policy: Start" Mar 17 17:38:43.775376 kubelet[3064]: I0317 17:38:43.774737 3064 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:38:43.775376 kubelet[3064]: I0317 17:38:43.774765 3064 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:38:43.775376 kubelet[3064]: I0317 17:38:43.774932 3064 state_mem.go:75] "Updated machine memory state" Mar 17 17:38:43.776552 kubelet[3064]: I0317 17:38:43.776528 3064 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:38:43.776829 kubelet[3064]: I0317 17:38:43.776792 3064 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:38:43.776985 kubelet[3064]: I0317 17:38:43.776973 3064 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:38:43.793966 kubelet[3064]: I0317 17:38:43.793933 3064 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:43.797955 kubelet[3064]: I0317 17:38:43.797862 3064 topology_manager.go:215] "Topology Admit Handler" podUID="a147e7358b33b155bf74359def683d9e" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:43.798073 kubelet[3064]: I0317 17:38:43.798021 3064 topology_manager.go:215] "Topology Admit Handler" podUID="46fe30f3432637902d1c5340cdcad885" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:43.798315 kubelet[3064]: I0317 17:38:43.798133 3064 topology_manager.go:215] "Topology Admit Handler" podUID="6567e8ca4128423cdcbd057f1f7599d1" podNamespace="kube-system" 
podName="kube-scheduler-ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:43.821955 kubelet[3064]: I0317 17:38:43.820094 3064 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:43.821955 kubelet[3064]: I0317 17:38:43.820184 3064 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:43.994345 kubelet[3064]: I0317 17:38:43.994285 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/46fe30f3432637902d1c5340cdcad885-ca-certs\") pod \"kube-controller-manager-ci-4152-2-2-4-e17a7af1b1\" (UID: \"46fe30f3432637902d1c5340cdcad885\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:43.994505 kubelet[3064]: I0317 17:38:43.994334 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/46fe30f3432637902d1c5340cdcad885-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-2-4-e17a7af1b1\" (UID: \"46fe30f3432637902d1c5340cdcad885\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:43.994505 kubelet[3064]: I0317 17:38:43.994452 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/46fe30f3432637902d1c5340cdcad885-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-2-4-e17a7af1b1\" (UID: \"46fe30f3432637902d1c5340cdcad885\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:43.994505 kubelet[3064]: I0317 17:38:43.994489 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6567e8ca4128423cdcbd057f1f7599d1-kubeconfig\") pod \"kube-scheduler-ci-4152-2-2-4-e17a7af1b1\" (UID: 
\"6567e8ca4128423cdcbd057f1f7599d1\") " pod="kube-system/kube-scheduler-ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:43.994580 kubelet[3064]: I0317 17:38:43.994511 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a147e7358b33b155bf74359def683d9e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-2-4-e17a7af1b1\" (UID: \"a147e7358b33b155bf74359def683d9e\") " pod="kube-system/kube-apiserver-ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:43.994580 kubelet[3064]: I0317 17:38:43.994530 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a147e7358b33b155bf74359def683d9e-k8s-certs\") pod \"kube-apiserver-ci-4152-2-2-4-e17a7af1b1\" (UID: \"a147e7358b33b155bf74359def683d9e\") " pod="kube-system/kube-apiserver-ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:43.994580 kubelet[3064]: I0317 17:38:43.994568 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/46fe30f3432637902d1c5340cdcad885-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-2-4-e17a7af1b1\" (UID: \"46fe30f3432637902d1c5340cdcad885\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:43.994666 kubelet[3064]: I0317 17:38:43.994589 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/46fe30f3432637902d1c5340cdcad885-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-2-4-e17a7af1b1\" (UID: \"46fe30f3432637902d1c5340cdcad885\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:43.994666 kubelet[3064]: I0317 17:38:43.994609 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a147e7358b33b155bf74359def683d9e-ca-certs\") pod \"kube-apiserver-ci-4152-2-2-4-e17a7af1b1\" (UID: \"a147e7358b33b155bf74359def683d9e\") " pod="kube-system/kube-apiserver-ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:44.659041 kubelet[3064]: I0317 17:38:44.658804 3064 apiserver.go:52] "Watching apiserver" Mar 17 17:38:44.694362 kubelet[3064]: I0317 17:38:44.694302 3064 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:38:44.790414 kubelet[3064]: E0317 17:38:44.789591 3064 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-2-4-e17a7af1b1\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-2-4-e17a7af1b1" Mar 17 17:38:44.862644 kubelet[3064]: I0317 17:38:44.862489 3064 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152-2-2-4-e17a7af1b1" podStartSLOduration=1.862471674 podStartE2EDuration="1.862471674s" podCreationTimestamp="2025-03-17 17:38:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:38:44.832362149 +0000 UTC m=+1.242664099" watchObservedRunningTime="2025-03-17 17:38:44.862471674 +0000 UTC m=+1.272773624" Mar 17 17:38:44.883144 kubelet[3064]: I0317 17:38:44.882009 3064 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-2-4-e17a7af1b1" podStartSLOduration=1.8819892710000001 podStartE2EDuration="1.881989271s" podCreationTimestamp="2025-03-17 17:38:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:38:44.863949344 +0000 UTC m=+1.274251334" watchObservedRunningTime="2025-03-17 17:38:44.881989271 +0000 UTC m=+1.292291221" Mar 17 17:38:44.920909 kubelet[3064]: I0317 17:38:44.920498 3064 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152-2-2-4-e17a7af1b1" podStartSLOduration=1.920474058 podStartE2EDuration="1.920474058s" podCreationTimestamp="2025-03-17 17:38:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:38:44.885738854 +0000 UTC m=+1.296040764" watchObservedRunningTime="2025-03-17 17:38:44.920474058 +0000 UTC m=+1.330776008" Mar 17 17:38:49.108042 sudo[2082]: pam_unix(sudo:session): session closed for user root Mar 17 17:38:49.267734 sshd[2081]: Connection closed by 139.178.89.65 port 47134 Mar 17 17:38:49.270248 sshd-session[2078]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:49.274249 systemd[1]: sshd@6-138.201.116.42:22-139.178.89.65:47134.service: Deactivated successfully. Mar 17 17:38:49.282357 systemd-logind[1609]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:38:49.283184 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 17:38:49.285974 systemd-logind[1609]: Removed session 7. Mar 17 17:38:57.119660 kubelet[3064]: I0317 17:38:57.119097 3064 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 17:38:57.122580 containerd[1622]: time="2025-03-17T17:38:57.119491118Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 17 17:38:57.122924 kubelet[3064]: I0317 17:38:57.121029 3064 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 17:38:57.344633 update_engine[1610]: I20250317 17:38:57.344427 1610 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 17 17:38:57.344633 update_engine[1610]: I20250317 17:38:57.344635 1610 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 17 17:38:57.345191 update_engine[1610]: I20250317 17:38:57.345062 1610 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 17 17:38:57.348620 update_engine[1610]: I20250317 17:38:57.346992 1610 omaha_request_params.cc:62] Current group set to stable Mar 17 17:38:57.348620 update_engine[1610]: I20250317 17:38:57.347183 1610 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 17 17:38:57.348620 update_engine[1610]: I20250317 17:38:57.347210 1610 update_attempter.cc:643] Scheduling an action processor start. 
Mar 17 17:38:57.348620 update_engine[1610]: I20250317 17:38:57.347240 1610 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 17 17:38:57.348620 update_engine[1610]: I20250317 17:38:57.347296 1610 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 17 17:38:57.348620 update_engine[1610]: I20250317 17:38:57.347409 1610 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 17 17:38:57.348620 update_engine[1610]: I20250317 17:38:57.347425 1610 omaha_request_action.cc:272] Request: Mar 17 17:38:57.348620 update_engine[1610]: Mar 17 17:38:57.348620 update_engine[1610]: Mar 17 17:38:57.348620 update_engine[1610]: Mar 17 17:38:57.348620 update_engine[1610]: Mar 17 17:38:57.348620 update_engine[1610]: Mar 17 17:38:57.348620 update_engine[1610]: Mar 17 17:38:57.348620 update_engine[1610]: Mar 17 17:38:57.348620 update_engine[1610]: Mar 17 17:38:57.348620 update_engine[1610]: I20250317 17:38:57.347434 1610 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 17:38:57.349132 locksmithd[1649]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 17 17:38:57.349754 update_engine[1610]: I20250317 17:38:57.349700 1610 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 17:38:57.350123 update_engine[1610]: I20250317 17:38:57.350085 1610 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 17 17:38:57.352687 update_engine[1610]: E20250317 17:38:57.352598 1610 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 17:38:57.352835 update_engine[1610]: I20250317 17:38:57.352703 1610 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 17 17:38:57.561207 kubelet[3064]: I0317 17:38:57.556892 3064 topology_manager.go:215] "Topology Admit Handler" podUID="dc322a93-c489-4038-ae56-e17afd1363e5" podNamespace="kube-system" podName="kube-proxy-dvvtn" Mar 17 17:38:57.566233 kubelet[3064]: I0317 17:38:57.566176 3064 topology_manager.go:215] "Topology Admit Handler" podUID="fa1b7028-9c9b-4db0-8f0b-d69cd3a3c9eb" podNamespace="tigera-operator" podName="tigera-operator-6479d6dc54-vt7zv" Mar 17 17:38:57.685724 kubelet[3064]: I0317 17:38:57.685629 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc322a93-c489-4038-ae56-e17afd1363e5-xtables-lock\") pod \"kube-proxy-dvvtn\" (UID: \"dc322a93-c489-4038-ae56-e17afd1363e5\") " pod="kube-system/kube-proxy-dvvtn" Mar 17 17:38:57.685724 kubelet[3064]: I0317 17:38:57.685681 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc322a93-c489-4038-ae56-e17afd1363e5-lib-modules\") pod \"kube-proxy-dvvtn\" (UID: \"dc322a93-c489-4038-ae56-e17afd1363e5\") " pod="kube-system/kube-proxy-dvvtn" Mar 17 17:38:57.685724 kubelet[3064]: I0317 17:38:57.685707 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d854w\" (UniqueName: \"kubernetes.io/projected/fa1b7028-9c9b-4db0-8f0b-d69cd3a3c9eb-kube-api-access-d854w\") pod \"tigera-operator-6479d6dc54-vt7zv\" (UID: \"fa1b7028-9c9b-4db0-8f0b-d69cd3a3c9eb\") " pod="tigera-operator/tigera-operator-6479d6dc54-vt7zv" Mar 17 17:38:57.685724 kubelet[3064]: I0317 17:38:57.685728 
3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxjnx\" (UniqueName: \"kubernetes.io/projected/dc322a93-c489-4038-ae56-e17afd1363e5-kube-api-access-pxjnx\") pod \"kube-proxy-dvvtn\" (UID: \"dc322a93-c489-4038-ae56-e17afd1363e5\") " pod="kube-system/kube-proxy-dvvtn" Mar 17 17:38:57.685724 kubelet[3064]: I0317 17:38:57.685747 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dc322a93-c489-4038-ae56-e17afd1363e5-kube-proxy\") pod \"kube-proxy-dvvtn\" (UID: \"dc322a93-c489-4038-ae56-e17afd1363e5\") " pod="kube-system/kube-proxy-dvvtn" Mar 17 17:38:57.686141 kubelet[3064]: I0317 17:38:57.685764 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fa1b7028-9c9b-4db0-8f0b-d69cd3a3c9eb-var-lib-calico\") pod \"tigera-operator-6479d6dc54-vt7zv\" (UID: \"fa1b7028-9c9b-4db0-8f0b-d69cd3a3c9eb\") " pod="tigera-operator/tigera-operator-6479d6dc54-vt7zv" Mar 17 17:38:57.863450 containerd[1622]: time="2025-03-17T17:38:57.863300609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dvvtn,Uid:dc322a93-c489-4038-ae56-e17afd1363e5,Namespace:kube-system,Attempt:0,}" Mar 17 17:38:57.882321 containerd[1622]: time="2025-03-17T17:38:57.881741320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6479d6dc54-vt7zv,Uid:fa1b7028-9c9b-4db0-8f0b-d69cd3a3c9eb,Namespace:tigera-operator,Attempt:0,}" Mar 17 17:38:57.896389 containerd[1622]: time="2025-03-17T17:38:57.896252556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:38:57.900613 containerd[1622]: time="2025-03-17T17:38:57.899503230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:38:57.900613 containerd[1622]: time="2025-03-17T17:38:57.899887356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:57.900613 containerd[1622]: time="2025-03-17T17:38:57.900065098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:57.918484 containerd[1622]: time="2025-03-17T17:38:57.918399356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:38:57.918661 containerd[1622]: time="2025-03-17T17:38:57.918588019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:38:57.918712 containerd[1622]: time="2025-03-17T17:38:57.918690632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:57.918905 containerd[1622]: time="2025-03-17T17:38:57.918870173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:57.955490 containerd[1622]: time="2025-03-17T17:38:57.955454361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dvvtn,Uid:dc322a93-c489-4038-ae56-e17afd1363e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"4771025bf31c1d4e7354390caf24d2da43d35a6bca7499cff7ed60f029cf56ed\"" Mar 17 17:38:57.963220 containerd[1622]: time="2025-03-17T17:38:57.963007395Z" level=info msg="CreateContainer within sandbox \"4771025bf31c1d4e7354390caf24d2da43d35a6bca7499cff7ed60f029cf56ed\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:38:57.985558 containerd[1622]: time="2025-03-17T17:38:57.985402905Z" level=info msg="CreateContainer within sandbox \"4771025bf31c1d4e7354390caf24d2da43d35a6bca7499cff7ed60f029cf56ed\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"38401822378daef831abd2cb344279369baa876e466bb8edac8f2785d73bb2d0\"" Mar 17 17:38:57.988720 containerd[1622]: time="2025-03-17T17:38:57.987613292Z" level=info msg="StartContainer for \"38401822378daef831abd2cb344279369baa876e466bb8edac8f2785d73bb2d0\"" Mar 17 17:38:57.992100 containerd[1622]: time="2025-03-17T17:38:57.992066791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6479d6dc54-vt7zv,Uid:fa1b7028-9c9b-4db0-8f0b-d69cd3a3c9eb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"79f18c4e3fb6080dba658221d25814b4e14a42915748add4c4e6a3d9a19fcb55\"" Mar 17 17:38:57.995869 containerd[1622]: time="2025-03-17T17:38:57.995840008Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\"" Mar 17 17:38:58.056617 containerd[1622]: time="2025-03-17T17:38:58.056550078Z" level=info msg="StartContainer for \"38401822378daef831abd2cb344279369baa876e466bb8edac8f2785d73bb2d0\" returns successfully" Mar 17 17:39:01.964066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3007726586.mount: Deactivated successfully. 
Mar 17 17:39:02.318156 containerd[1622]: time="2025-03-17T17:39:02.318044253Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:02.320234 containerd[1622]: time="2025-03-17T17:39:02.320004191Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.5: active requests=0, bytes read=19271115" Mar 17 17:39:02.320234 containerd[1622]: time="2025-03-17T17:39:02.320161008Z" level=info msg="ImageCreate event name:\"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:02.323192 containerd[1622]: time="2025-03-17T17:39:02.323123417Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:02.325763 containerd[1622]: time="2025-03-17T17:39:02.325304460Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.5\" with image id \"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\", repo tag \"quay.io/tigera/operator:v1.36.5\", repo digest \"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\", size \"19267110\" in 4.329167056s" Mar 17 17:39:02.325763 containerd[1622]: time="2025-03-17T17:39:02.325539926Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\" returns image reference \"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\"" Mar 17 17:39:02.329410 containerd[1622]: time="2025-03-17T17:39:02.329300624Z" level=info msg="CreateContainer within sandbox \"79f18c4e3fb6080dba658221d25814b4e14a42915748add4c4e6a3d9a19fcb55\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 17 17:39:02.348833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount456358831.mount: Deactivated successfully. 
Mar 17 17:39:02.349792 containerd[1622]: time="2025-03-17T17:39:02.348831713Z" level=info msg="CreateContainer within sandbox \"79f18c4e3fb6080dba658221d25814b4e14a42915748add4c4e6a3d9a19fcb55\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7434cb77ca3a4f44da0f174bb792562a87172362effb522e42d848a3e9c1c74e\"" Mar 17 17:39:02.349853 containerd[1622]: time="2025-03-17T17:39:02.349823143Z" level=info msg="StartContainer for \"7434cb77ca3a4f44da0f174bb792562a87172362effb522e42d848a3e9c1c74e\"" Mar 17 17:39:02.403844 containerd[1622]: time="2025-03-17T17:39:02.403804660Z" level=info msg="StartContainer for \"7434cb77ca3a4f44da0f174bb792562a87172362effb522e42d848a3e9c1c74e\" returns successfully" Mar 17 17:39:02.821044 kubelet[3064]: I0317 17:39:02.820955 3064 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dvvtn" podStartSLOduration=5.820774977 podStartE2EDuration="5.820774977s" podCreationTimestamp="2025-03-17 17:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:38:58.80367551 +0000 UTC m=+15.213977460" watchObservedRunningTime="2025-03-17 17:39:02.820774977 +0000 UTC m=+19.231076927" Mar 17 17:39:02.821754 kubelet[3064]: I0317 17:39:02.821691 3064 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6479d6dc54-vt7zv" podStartSLOduration=1.4886330079999999 podStartE2EDuration="5.821490096s" podCreationTimestamp="2025-03-17 17:38:57 +0000 UTC" firstStartedPulling="2025-03-17 17:38:57.993821884 +0000 UTC m=+14.404123834" lastFinishedPulling="2025-03-17 17:39:02.326679012 +0000 UTC m=+18.736980922" observedRunningTime="2025-03-17 17:39:02.821665756 +0000 UTC m=+19.231967706" watchObservedRunningTime="2025-03-17 17:39:02.821490096 +0000 UTC m=+19.231792046" Mar 17 17:39:06.691855 kubelet[3064]: I0317 17:39:06.691779 3064 
topology_manager.go:215] "Topology Admit Handler" podUID="d9ff359f-7048-495b-83db-0e38fd6a638d" podNamespace="calico-system" podName="calico-typha-76568bcb9b-fbk6j" Mar 17 17:39:06.756131 kubelet[3064]: I0317 17:39:06.756070 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d9ff359f-7048-495b-83db-0e38fd6a638d-typha-certs\") pod \"calico-typha-76568bcb9b-fbk6j\" (UID: \"d9ff359f-7048-495b-83db-0e38fd6a638d\") " pod="calico-system/calico-typha-76568bcb9b-fbk6j" Mar 17 17:39:06.756131 kubelet[3064]: I0317 17:39:06.756144 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrs58\" (UniqueName: \"kubernetes.io/projected/d9ff359f-7048-495b-83db-0e38fd6a638d-kube-api-access-xrs58\") pod \"calico-typha-76568bcb9b-fbk6j\" (UID: \"d9ff359f-7048-495b-83db-0e38fd6a638d\") " pod="calico-system/calico-typha-76568bcb9b-fbk6j" Mar 17 17:39:06.756305 kubelet[3064]: I0317 17:39:06.756177 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9ff359f-7048-495b-83db-0e38fd6a638d-tigera-ca-bundle\") pod \"calico-typha-76568bcb9b-fbk6j\" (UID: \"d9ff359f-7048-495b-83db-0e38fd6a638d\") " pod="calico-system/calico-typha-76568bcb9b-fbk6j" Mar 17 17:39:06.793430 kubelet[3064]: I0317 17:39:06.793381 3064 topology_manager.go:215] "Topology Admit Handler" podUID="bc9399cd-0bd2-4a27-b276-0d02810a4031" podNamespace="calico-system" podName="calico-node-xkvw6" Mar 17 17:39:06.857302 kubelet[3064]: I0317 17:39:06.857240 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc9399cd-0bd2-4a27-b276-0d02810a4031-tigera-ca-bundle\") pod \"calico-node-xkvw6\" (UID: \"bc9399cd-0bd2-4a27-b276-0d02810a4031\") " 
pod="calico-system/calico-node-xkvw6" Mar 17 17:39:06.857302 kubelet[3064]: I0317 17:39:06.857294 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc9399cd-0bd2-4a27-b276-0d02810a4031-lib-modules\") pod \"calico-node-xkvw6\" (UID: \"bc9399cd-0bd2-4a27-b276-0d02810a4031\") " pod="calico-system/calico-node-xkvw6" Mar 17 17:39:06.857494 kubelet[3064]: I0317 17:39:06.857315 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bc9399cd-0bd2-4a27-b276-0d02810a4031-var-lib-calico\") pod \"calico-node-xkvw6\" (UID: \"bc9399cd-0bd2-4a27-b276-0d02810a4031\") " pod="calico-system/calico-node-xkvw6" Mar 17 17:39:06.857494 kubelet[3064]: I0317 17:39:06.857427 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bc9399cd-0bd2-4a27-b276-0d02810a4031-cni-log-dir\") pod \"calico-node-xkvw6\" (UID: \"bc9399cd-0bd2-4a27-b276-0d02810a4031\") " pod="calico-system/calico-node-xkvw6" Mar 17 17:39:06.857494 kubelet[3064]: I0317 17:39:06.857448 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc9399cd-0bd2-4a27-b276-0d02810a4031-xtables-lock\") pod \"calico-node-xkvw6\" (UID: \"bc9399cd-0bd2-4a27-b276-0d02810a4031\") " pod="calico-system/calico-node-xkvw6" Mar 17 17:39:06.857494 kubelet[3064]: I0317 17:39:06.857468 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bc9399cd-0bd2-4a27-b276-0d02810a4031-flexvol-driver-host\") pod \"calico-node-xkvw6\" (UID: \"bc9399cd-0bd2-4a27-b276-0d02810a4031\") " pod="calico-system/calico-node-xkvw6" Mar 17 17:39:06.857494 
kubelet[3064]: I0317 17:39:06.857485 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29wsn\" (UniqueName: \"kubernetes.io/projected/bc9399cd-0bd2-4a27-b276-0d02810a4031-kube-api-access-29wsn\") pod \"calico-node-xkvw6\" (UID: \"bc9399cd-0bd2-4a27-b276-0d02810a4031\") " pod="calico-system/calico-node-xkvw6" Mar 17 17:39:06.857721 kubelet[3064]: I0317 17:39:06.857499 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bc9399cd-0bd2-4a27-b276-0d02810a4031-cni-bin-dir\") pod \"calico-node-xkvw6\" (UID: \"bc9399cd-0bd2-4a27-b276-0d02810a4031\") " pod="calico-system/calico-node-xkvw6" Mar 17 17:39:06.857721 kubelet[3064]: I0317 17:39:06.857528 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bc9399cd-0bd2-4a27-b276-0d02810a4031-node-certs\") pod \"calico-node-xkvw6\" (UID: \"bc9399cd-0bd2-4a27-b276-0d02810a4031\") " pod="calico-system/calico-node-xkvw6" Mar 17 17:39:06.857721 kubelet[3064]: I0317 17:39:06.857545 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bc9399cd-0bd2-4a27-b276-0d02810a4031-var-run-calico\") pod \"calico-node-xkvw6\" (UID: \"bc9399cd-0bd2-4a27-b276-0d02810a4031\") " pod="calico-system/calico-node-xkvw6" Mar 17 17:39:06.858128 kubelet[3064]: I0317 17:39:06.858082 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bc9399cd-0bd2-4a27-b276-0d02810a4031-cni-net-dir\") pod \"calico-node-xkvw6\" (UID: \"bc9399cd-0bd2-4a27-b276-0d02810a4031\") " pod="calico-system/calico-node-xkvw6" Mar 17 17:39:06.858302 kubelet[3064]: I0317 17:39:06.858140 3064 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bc9399cd-0bd2-4a27-b276-0d02810a4031-policysync\") pod \"calico-node-xkvw6\" (UID: \"bc9399cd-0bd2-4a27-b276-0d02810a4031\") " pod="calico-system/calico-node-xkvw6" Mar 17 17:39:06.926391 kubelet[3064]: I0317 17:39:06.925478 3064 topology_manager.go:215] "Topology Admit Handler" podUID="85a3e01e-db6f-4f8d-a16f-72e5c08e4d07" podNamespace="calico-system" podName="csi-node-driver-sjhlt" Mar 17 17:39:06.926391 kubelet[3064]: E0317 17:39:06.925760 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sjhlt" podUID="85a3e01e-db6f-4f8d-a16f-72e5c08e4d07" Mar 17 17:39:06.959174 kubelet[3064]: I0317 17:39:06.958465 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/85a3e01e-db6f-4f8d-a16f-72e5c08e4d07-socket-dir\") pod \"csi-node-driver-sjhlt\" (UID: \"85a3e01e-db6f-4f8d-a16f-72e5c08e4d07\") " pod="calico-system/csi-node-driver-sjhlt" Mar 17 17:39:06.959174 kubelet[3064]: I0317 17:39:06.958579 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfcbh\" (UniqueName: \"kubernetes.io/projected/85a3e01e-db6f-4f8d-a16f-72e5c08e4d07-kube-api-access-kfcbh\") pod \"csi-node-driver-sjhlt\" (UID: \"85a3e01e-db6f-4f8d-a16f-72e5c08e4d07\") " pod="calico-system/csi-node-driver-sjhlt" Mar 17 17:39:06.959174 kubelet[3064]: I0317 17:39:06.958603 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85a3e01e-db6f-4f8d-a16f-72e5c08e4d07-kubelet-dir\") pod \"csi-node-driver-sjhlt\" (UID: 
\"85a3e01e-db6f-4f8d-a16f-72e5c08e4d07\") " pod="calico-system/csi-node-driver-sjhlt" Mar 17 17:39:06.959174 kubelet[3064]: I0317 17:39:06.958620 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/85a3e01e-db6f-4f8d-a16f-72e5c08e4d07-registration-dir\") pod \"csi-node-driver-sjhlt\" (UID: \"85a3e01e-db6f-4f8d-a16f-72e5c08e4d07\") " pod="calico-system/csi-node-driver-sjhlt" Mar 17 17:39:06.959174 kubelet[3064]: I0317 17:39:06.958672 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/85a3e01e-db6f-4f8d-a16f-72e5c08e4d07-varrun\") pod \"csi-node-driver-sjhlt\" (UID: \"85a3e01e-db6f-4f8d-a16f-72e5c08e4d07\") " pod="calico-system/csi-node-driver-sjhlt" Mar 17 17:39:06.965034 kubelet[3064]: E0317 17:39:06.964983 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:06.965034 kubelet[3064]: W0317 17:39:06.965037 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:06.965191 kubelet[3064]: E0317 17:39:06.965056 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:06.969366 kubelet[3064]: E0317 17:39:06.965454 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:06.969366 kubelet[3064]: W0317 17:39:06.965482 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:06.969366 kubelet[3064]: E0317 17:39:06.965498 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:06.969366 kubelet[3064]: E0317 17:39:06.967649 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:06.969366 kubelet[3064]: W0317 17:39:06.967665 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:06.969366 kubelet[3064]: E0317 17:39:06.967680 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:06.970538 kubelet[3064]: E0317 17:39:06.970507 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:06.970655 kubelet[3064]: W0317 17:39:06.970619 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:06.970655 kubelet[3064]: E0317 17:39:06.970636 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:06.973770 kubelet[3064]: E0317 17:39:06.972155 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:06.973770 kubelet[3064]: W0317 17:39:06.972174 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:06.973770 kubelet[3064]: E0317 17:39:06.972199 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:07.001217 containerd[1622]: time="2025-03-17T17:39:06.999936562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76568bcb9b-fbk6j,Uid:d9ff359f-7048-495b-83db-0e38fd6a638d,Namespace:calico-system,Attempt:0,}" Mar 17 17:39:07.013264 kubelet[3064]: E0317 17:39:07.013161 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:07.013264 kubelet[3064]: W0317 17:39:07.013190 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:07.013264 kubelet[3064]: E0317 17:39:07.013215 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:07.047415 containerd[1622]: time="2025-03-17T17:39:07.047007194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:39:07.047415 containerd[1622]: time="2025-03-17T17:39:07.047088522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:39:07.047415 containerd[1622]: time="2025-03-17T17:39:07.047104964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:07.047415 containerd[1622]: time="2025-03-17T17:39:07.047320586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:07.060298 kubelet[3064]: E0317 17:39:07.060111 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:07.060298 kubelet[3064]: W0317 17:39:07.060138 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:07.060298 kubelet[3064]: E0317 17:39:07.060159 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:07.060925 kubelet[3064]: E0317 17:39:07.060861 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:07.060925 kubelet[3064]: W0317 17:39:07.060893 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:07.060925 kubelet[3064]: E0317 17:39:07.060912 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:07.061512 kubelet[3064]: E0317 17:39:07.061232 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:07.061512 kubelet[3064]: W0317 17:39:07.061265 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:07.062298 kubelet[3064]: E0317 17:39:07.061634 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:07.062298 kubelet[3064]: E0317 17:39:07.061990 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:07.062298 kubelet[3064]: W0317 17:39:07.062005 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:07.062298 kubelet[3064]: E0317 17:39:07.062018 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:07.062484 kubelet[3064]: E0317 17:39:07.062308 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:07.062484 kubelet[3064]: W0317 17:39:07.062318 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:07.062484 kubelet[3064]: E0317 17:39:07.062480 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:07.062550 kubelet[3064]: W0317 17:39:07.062488 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:07.062550 kubelet[3064]: E0317 17:39:07.062519 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:07.063264 kubelet[3064]: E0317 17:39:07.062730 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:07.063264 kubelet[3064]: E0317 17:39:07.062805 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:07.063264 kubelet[3064]: W0317 17:39:07.062812 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:07.063264 kubelet[3064]: E0317 17:39:07.062822 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:07.063264 kubelet[3064]: E0317 17:39:07.063140 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:07.063264 kubelet[3064]: W0317 17:39:07.063150 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:07.063264 kubelet[3064]: E0317 17:39:07.063166 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:07.064568 kubelet[3064]: E0317 17:39:07.064041 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:07.064568 kubelet[3064]: W0317 17:39:07.064062 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:07.064568 kubelet[3064]: E0317 17:39:07.064077 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:07.064568 kubelet[3064]: E0317 17:39:07.064325 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:07.064568 kubelet[3064]: W0317 17:39:07.064335 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:07.064568 kubelet[3064]: E0317 17:39:07.064387 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:07.064568 kubelet[3064]: E0317 17:39:07.064542 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:07.064568 kubelet[3064]: W0317 17:39:07.064550 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:07.064568 kubelet[3064]: E0317 17:39:07.064568 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:07.064845 kubelet[3064]: E0317 17:39:07.064765 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:07.064845 kubelet[3064]: W0317 17:39:07.064773 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:07.064845 kubelet[3064]: E0317 17:39:07.064787 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:07.065075 kubelet[3064]: E0317 17:39:07.065011 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:07.065075 kubelet[3064]: W0317 17:39:07.065028 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:07.065477 kubelet[3064]: E0317 17:39:07.065428 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:07.067181 kubelet[3064]: E0317 17:39:07.067136 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:07.067181 kubelet[3064]: W0317 17:39:07.067154 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:07.067181 kubelet[3064]: E0317 17:39:07.067169 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:07.067584 kubelet[3064]: E0317 17:39:07.067330 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:07.067584 kubelet[3064]: W0317 17:39:07.067349 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:07.067584 kubelet[3064]: E0317 17:39:07.067422 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:07.067584 kubelet[3064]: E0317 17:39:07.067514 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:07.067584 kubelet[3064]: W0317 17:39:07.067521 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:07.067584 kubelet[3064]: E0317 17:39:07.067576 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:07.067827 kubelet[3064]: E0317 17:39:07.067659 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:07.067827 kubelet[3064]: W0317 17:39:07.067665 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:07.067827 kubelet[3064]: E0317 17:39:07.067745 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:07.068100 kubelet[3064]: E0317 17:39:07.068060 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:07.068100 kubelet[3064]: W0317 17:39:07.068081 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:07.068100 kubelet[3064]: E0317 17:39:07.068096 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:07.068433 kubelet[3064]: E0317 17:39:07.068410 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:07.068433 kubelet[3064]: W0317 17:39:07.068426 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:07.068517 kubelet[3064]: E0317 17:39:07.068444 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:07.068688 kubelet[3064]: E0317 17:39:07.068670 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:07.068688 kubelet[3064]: W0317 17:39:07.068686 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:07.068746 kubelet[3064]: E0317 17:39:07.068703 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:07.069169 kubelet[3064]: E0317 17:39:07.069151 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:07.069169 kubelet[3064]: W0317 17:39:07.069168 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:07.069707 kubelet[3064]: E0317 17:39:07.069257 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:07.070095 kubelet[3064]: E0317 17:39:07.070052 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:07.070095 kubelet[3064]: W0317 17:39:07.070089 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:07.070171 kubelet[3064]: E0317 17:39:07.070109 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:07.070533 kubelet[3064]: E0317 17:39:07.070286 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:07.070533 kubelet[3064]: W0317 17:39:07.070299 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:07.070533 kubelet[3064]: E0317 17:39:07.070314 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:07.074797 kubelet[3064]: E0317 17:39:07.071282 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:07.074797 kubelet[3064]: W0317 17:39:07.071298 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:07.074797 kubelet[3064]: E0317 17:39:07.071814 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:07.074797 kubelet[3064]: W0317 17:39:07.071829 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:07.074797 kubelet[3064]: E0317 17:39:07.071840 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:07.074797 kubelet[3064]: E0317 17:39:07.071864 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:07.093099 kubelet[3064]: E0317 17:39:07.093066 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:07.093099 kubelet[3064]: W0317 17:39:07.093090 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:07.093099 kubelet[3064]: E0317 17:39:07.093113 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:07.107880 containerd[1622]: time="2025-03-17T17:39:07.107439274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xkvw6,Uid:bc9399cd-0bd2-4a27-b276-0d02810a4031,Namespace:calico-system,Attempt:0,}" Mar 17 17:39:07.148956 containerd[1622]: time="2025-03-17T17:39:07.148839202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76568bcb9b-fbk6j,Uid:d9ff359f-7048-495b-83db-0e38fd6a638d,Namespace:calico-system,Attempt:0,} returns sandbox id \"0328eb5994ae03ece651f4d8f23b23d768d6789f325af276985b583a404b27ed\"" Mar 17 17:39:07.158228 containerd[1622]: time="2025-03-17T17:39:07.158041627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\"" Mar 17 17:39:07.166038 containerd[1622]: time="2025-03-17T17:39:07.165637726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:39:07.166038 containerd[1622]: time="2025-03-17T17:39:07.165706253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:39:07.166038 containerd[1622]: time="2025-03-17T17:39:07.165718294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:07.166038 containerd[1622]: time="2025-03-17T17:39:07.165807903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:07.233883 containerd[1622]: time="2025-03-17T17:39:07.233740634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xkvw6,Uid:bc9399cd-0bd2-4a27-b276-0d02810a4031,Namespace:calico-system,Attempt:0,} returns sandbox id \"14863ae275185ae4524d1d4462de71dcf8c77a58800371620266e34a07f925d5\"" Mar 17 17:39:07.346606 update_engine[1610]: I20250317 17:39:07.346493 1610 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 17:39:07.347182 update_engine[1610]: I20250317 17:39:07.346802 1610 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 17:39:07.347182 update_engine[1610]: I20250317 17:39:07.347103 1610 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 17 17:39:07.347971 update_engine[1610]: E20250317 17:39:07.347859 1610 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 17:39:07.348082 update_engine[1610]: I20250317 17:39:07.348007 1610 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 17 17:39:08.697872 kubelet[3064]: E0317 17:39:08.697804 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sjhlt" podUID="85a3e01e-db6f-4f8d-a16f-72e5c08e4d07" Mar 17 17:39:09.753292 containerd[1622]: time="2025-03-17T17:39:09.753235852Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:09.756245 containerd[1622]: time="2025-03-17T17:39:09.756113499Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.2: active requests=0, bytes read=28363957" Mar 17 17:39:09.760371 containerd[1622]: time="2025-03-17T17:39:09.757186886Z" level=info msg="ImageCreate event name:\"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:09.764578 containerd[1622]: time="2025-03-17T17:39:09.764524536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:09.767368 containerd[1622]: time="2025-03-17T17:39:09.765182962Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.2\" with image id \"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.2\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\", size \"29733706\" in 2.607079289s" Mar 17 17:39:09.767563 containerd[1622]: time="2025-03-17T17:39:09.767522955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\" returns image reference \"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\"" Mar 17 17:39:09.770446 containerd[1622]: time="2025-03-17T17:39:09.770407762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\"" Mar 17 17:39:09.797374 containerd[1622]: time="2025-03-17T17:39:09.796223733Z" level=info msg="CreateContainer within sandbox \"0328eb5994ae03ece651f4d8f23b23d768d6789f325af276985b583a404b27ed\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 17 17:39:09.826687 containerd[1622]: time="2025-03-17T17:39:09.824904949Z" level=info msg="CreateContainer within sandbox \"0328eb5994ae03ece651f4d8f23b23d768d6789f325af276985b583a404b27ed\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d82513b4b9560f4af85fa2b49adede4623bce0d99452edc3b44f5f6fbb1c24e6\"" Mar 17 17:39:09.826687 containerd[1622]: time="2025-03-17T17:39:09.825921810Z" level=info msg="StartContainer for \"d82513b4b9560f4af85fa2b49adede4623bce0d99452edc3b44f5f6fbb1c24e6\"" Mar 17 17:39:09.948099 containerd[1622]: time="2025-03-17T17:39:09.947742581Z" level=info msg="StartContainer for \"d82513b4b9560f4af85fa2b49adede4623bce0d99452edc3b44f5f6fbb1c24e6\" returns successfully" Mar 17 17:39:10.698689 kubelet[3064]: E0317 17:39:10.698061 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sjhlt" podUID="85a3e01e-db6f-4f8d-a16f-72e5c08e4d07" Mar 17 17:39:10.851560 kubelet[3064]: I0317 17:39:10.851466 3064 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-76568bcb9b-fbk6j" podStartSLOduration=2.240610115 podStartE2EDuration="4.851437619s" podCreationTimestamp="2025-03-17 17:39:06 +0000 UTC" firstStartedPulling="2025-03-17 17:39:07.157580979 +0000 UTC m=+23.567882969" lastFinishedPulling="2025-03-17 17:39:09.768408523 +0000 UTC m=+26.178710473" observedRunningTime="2025-03-17 17:39:10.851095945 +0000 UTC m=+27.261397935" watchObservedRunningTime="2025-03-17 17:39:10.851437619 +0000 UTC m=+27.261739609" Mar 17 17:39:10.881797 kubelet[3064]: E0317 17:39:10.881764 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.882199 kubelet[3064]: W0317 17:39:10.881986 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.882199 kubelet[3064]: E0317 17:39:10.882031 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:10.882565 kubelet[3064]: E0317 17:39:10.882476 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.882565 kubelet[3064]: W0317 17:39:10.882494 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.882565 kubelet[3064]: E0317 17:39:10.882511 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:10.883063 kubelet[3064]: E0317 17:39:10.882971 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.883063 kubelet[3064]: W0317 17:39:10.882989 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.883063 kubelet[3064]: E0317 17:39:10.883005 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:10.883693 kubelet[3064]: E0317 17:39:10.883529 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.883693 kubelet[3064]: W0317 17:39:10.883547 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.883693 kubelet[3064]: E0317 17:39:10.883564 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:10.884044 kubelet[3064]: E0317 17:39:10.883963 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.884044 kubelet[3064]: W0317 17:39:10.883982 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.884044 kubelet[3064]: E0317 17:39:10.883997 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:10.885229 kubelet[3064]: E0317 17:39:10.885066 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.885229 kubelet[3064]: W0317 17:39:10.885082 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.885229 kubelet[3064]: E0317 17:39:10.885094 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:10.885813 kubelet[3064]: E0317 17:39:10.885628 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.885813 kubelet[3064]: W0317 17:39:10.885641 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.885813 kubelet[3064]: E0317 17:39:10.885654 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:10.886040 kubelet[3064]: E0317 17:39:10.885961 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.886040 kubelet[3064]: W0317 17:39:10.885974 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.886040 kubelet[3064]: E0317 17:39:10.885986 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:10.886395 kubelet[3064]: E0317 17:39:10.886306 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.886395 kubelet[3064]: W0317 17:39:10.886317 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.886395 kubelet[3064]: E0317 17:39:10.886327 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:10.886777 kubelet[3064]: E0317 17:39:10.886639 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.886777 kubelet[3064]: W0317 17:39:10.886651 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.886777 kubelet[3064]: E0317 17:39:10.886663 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:10.886989 kubelet[3064]: E0317 17:39:10.886921 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.886989 kubelet[3064]: W0317 17:39:10.886932 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.886989 kubelet[3064]: E0317 17:39:10.886946 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:10.887317 kubelet[3064]: E0317 17:39:10.887235 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.887317 kubelet[3064]: W0317 17:39:10.887247 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.887317 kubelet[3064]: E0317 17:39:10.887270 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:10.887745 kubelet[3064]: E0317 17:39:10.887649 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.887745 kubelet[3064]: W0317 17:39:10.887661 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.887745 kubelet[3064]: E0317 17:39:10.887672 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:10.888204 kubelet[3064]: E0317 17:39:10.888108 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.888204 kubelet[3064]: W0317 17:39:10.888120 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.888204 kubelet[3064]: E0317 17:39:10.888129 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:10.888608 kubelet[3064]: E0317 17:39:10.888533 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.888608 kubelet[3064]: W0317 17:39:10.888546 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.888608 kubelet[3064]: E0317 17:39:10.888556 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:10.891137 kubelet[3064]: E0317 17:39:10.891118 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.891385 kubelet[3064]: W0317 17:39:10.891212 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.891385 kubelet[3064]: E0317 17:39:10.891233 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:10.891842 kubelet[3064]: E0317 17:39:10.891735 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.891842 kubelet[3064]: W0317 17:39:10.891748 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.891842 kubelet[3064]: E0317 17:39:10.891769 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:10.892091 kubelet[3064]: E0317 17:39:10.892064 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.892136 kubelet[3064]: W0317 17:39:10.892096 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.892172 kubelet[3064]: E0317 17:39:10.892131 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:10.892526 kubelet[3064]: E0317 17:39:10.892506 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.892589 kubelet[3064]: W0317 17:39:10.892530 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.892589 kubelet[3064]: E0317 17:39:10.892561 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:10.892769 kubelet[3064]: E0317 17:39:10.892757 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.892816 kubelet[3064]: W0317 17:39:10.892774 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.892816 kubelet[3064]: E0317 17:39:10.892792 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:10.892968 kubelet[3064]: E0317 17:39:10.892957 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.893432 kubelet[3064]: W0317 17:39:10.892969 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.893432 kubelet[3064]: E0317 17:39:10.893022 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:10.893432 kubelet[3064]: E0317 17:39:10.893103 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.893432 kubelet[3064]: W0317 17:39:10.893111 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.893432 kubelet[3064]: E0317 17:39:10.893235 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.893432 kubelet[3064]: W0317 17:39:10.893242 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.893432 kubelet[3064]: E0317 17:39:10.893262 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:10.893633 kubelet[3064]: E0317 17:39:10.893441 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.893633 kubelet[3064]: W0317 17:39:10.893450 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.893633 kubelet[3064]: E0317 17:39:10.893460 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:10.893633 kubelet[3064]: E0317 17:39:10.893590 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.893633 kubelet[3064]: W0317 17:39:10.893597 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.893633 kubelet[3064]: E0317 17:39:10.893605 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:10.893786 kubelet[3064]: E0317 17:39:10.893760 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.893786 kubelet[3064]: W0317 17:39:10.893767 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.893786 kubelet[3064]: E0317 17:39:10.893776 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:10.894015 kubelet[3064]: E0317 17:39:10.893989 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:10.894323 kubelet[3064]: E0317 17:39:10.894161 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.894323 kubelet[3064]: W0317 17:39:10.894176 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.894323 kubelet[3064]: E0317 17:39:10.894199 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:10.894685 kubelet[3064]: E0317 17:39:10.894655 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.894685 kubelet[3064]: W0317 17:39:10.894670 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.894970 kubelet[3064]: E0317 17:39:10.894826 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:10.895086 kubelet[3064]: E0317 17:39:10.895073 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.895243 kubelet[3064]: W0317 17:39:10.895140 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.895243 kubelet[3064]: E0317 17:39:10.895159 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:10.895424 kubelet[3064]: E0317 17:39:10.895411 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.895502 kubelet[3064]: W0317 17:39:10.895485 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.895561 kubelet[3064]: E0317 17:39:10.895550 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:10.895898 kubelet[3064]: E0317 17:39:10.895781 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.895898 kubelet[3064]: W0317 17:39:10.895794 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.895898 kubelet[3064]: E0317 17:39:10.895804 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:10.896093 kubelet[3064]: E0317 17:39:10.896080 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.896162 kubelet[3064]: W0317 17:39:10.896149 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.896216 kubelet[3064]: E0317 17:39:10.896206 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:39:10.896677 kubelet[3064]: E0317 17:39:10.896623 3064 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:39:10.896677 kubelet[3064]: W0317 17:39:10.896637 3064 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:39:10.896677 kubelet[3064]: E0317 17:39:10.896647 3064 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:39:11.233174 containerd[1622]: time="2025-03-17T17:39:11.232058166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:11.238751 containerd[1622]: time="2025-03-17T17:39:11.236555721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=5120152" Mar 17 17:39:11.238751 containerd[1622]: time="2025-03-17T17:39:11.237486891Z" level=info msg="ImageCreate event name:\"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:11.242117 containerd[1622]: time="2025-03-17T17:39:11.242047893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:11.243581 containerd[1622]: time="2025-03-17T17:39:11.243535197Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6489869\" in 1.473092711s" Mar 17 17:39:11.243581 containerd[1622]: time="2025-03-17T17:39:11.243580361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\"" Mar 17 17:39:11.248097 containerd[1622]: time="2025-03-17T17:39:11.248064355Z" level=info msg="CreateContainer within sandbox \"14863ae275185ae4524d1d4462de71dcf8c77a58800371620266e34a07f925d5\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 17 17:39:11.264200 containerd[1622]: time="2025-03-17T17:39:11.264144430Z" level=info msg="CreateContainer within sandbox \"14863ae275185ae4524d1d4462de71dcf8c77a58800371620266e34a07f925d5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"77f4da754d16cf91301531db6768c0f8f022ca497f5add416cd2c12e69856129\"" Mar 17 17:39:11.267409 containerd[1622]: time="2025-03-17T17:39:11.266556143Z" level=info msg="StartContainer for \"77f4da754d16cf91301531db6768c0f8f022ca497f5add416cd2c12e69856129\"" Mar 17 17:39:11.336968 containerd[1622]: time="2025-03-17T17:39:11.336554674Z" level=info msg="StartContainer for \"77f4da754d16cf91301531db6768c0f8f022ca497f5add416cd2c12e69856129\" returns successfully" Mar 17 17:39:11.449495 containerd[1622]: time="2025-03-17T17:39:11.449255856Z" level=info msg="shim disconnected" id=77f4da754d16cf91301531db6768c0f8f022ca497f5add416cd2c12e69856129 namespace=k8s.io Mar 17 17:39:11.449495 containerd[1622]: time="2025-03-17T17:39:11.449335824Z" level=warning msg="cleaning up after shim disconnected" id=77f4da754d16cf91301531db6768c0f8f022ca497f5add416cd2c12e69856129 namespace=k8s.io Mar 17 17:39:11.449495 containerd[1622]: time="2025-03-17T17:39:11.449359586Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:39:11.782704 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77f4da754d16cf91301531db6768c0f8f022ca497f5add416cd2c12e69856129-rootfs.mount: Deactivated successfully. 
Mar 17 17:39:11.847853 kubelet[3064]: I0317 17:39:11.847432 3064 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:39:11.854948 containerd[1622]: time="2025-03-17T17:39:11.854633109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Mar 17 17:39:12.697895 kubelet[3064]: E0317 17:39:12.697811 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sjhlt" podUID="85a3e01e-db6f-4f8d-a16f-72e5c08e4d07" Mar 17 17:39:14.698432 kubelet[3064]: E0317 17:39:14.698067 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sjhlt" podUID="85a3e01e-db6f-4f8d-a16f-72e5c08e4d07" Mar 17 17:39:14.871237 containerd[1622]: time="2025-03-17T17:39:14.870645279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:14.872729 containerd[1622]: time="2025-03-17T17:39:14.872672067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=91227396" Mar 17 17:39:14.873539 containerd[1622]: time="2025-03-17T17:39:14.873417416Z" level=info msg="ImageCreate event name:\"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:14.879272 containerd[1622]: time="2025-03-17T17:39:14.877859988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 
17:39:14.879272 containerd[1622]: time="2025-03-17T17:39:14.878646341Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"92597153\" in 3.023973428s" Mar 17 17:39:14.879272 containerd[1622]: time="2025-03-17T17:39:14.878673103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\"" Mar 17 17:39:14.885264 containerd[1622]: time="2025-03-17T17:39:14.885226512Z" level=info msg="CreateContainer within sandbox \"14863ae275185ae4524d1d4462de71dcf8c77a58800371620266e34a07f925d5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 17 17:39:14.904029 containerd[1622]: time="2025-03-17T17:39:14.903979412Z" level=info msg="CreateContainer within sandbox \"14863ae275185ae4524d1d4462de71dcf8c77a58800371620266e34a07f925d5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"87c068284e06c84bec06fa05296b9ac53d3864abe92cd7f22290cfe7eafc38d8\"" Mar 17 17:39:14.907052 containerd[1622]: time="2025-03-17T17:39:14.907019694Z" level=info msg="StartContainer for \"87c068284e06c84bec06fa05296b9ac53d3864abe92cd7f22290cfe7eafc38d8\"" Mar 17 17:39:15.007328 containerd[1622]: time="2025-03-17T17:39:15.007228824Z" level=info msg="StartContainer for \"87c068284e06c84bec06fa05296b9ac53d3864abe92cd7f22290cfe7eafc38d8\" returns successfully" Mar 17 17:39:15.497482 containerd[1622]: time="2025-03-17T17:39:15.497384621Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" 
Mar 17 17:39:15.526905 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87c068284e06c84bec06fa05296b9ac53d3864abe92cd7f22290cfe7eafc38d8-rootfs.mount: Deactivated successfully. Mar 17 17:39:15.603457 kubelet[3064]: I0317 17:39:15.601451 3064 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 17:39:15.618919 containerd[1622]: time="2025-03-17T17:39:15.618854182Z" level=info msg="shim disconnected" id=87c068284e06c84bec06fa05296b9ac53d3864abe92cd7f22290cfe7eafc38d8 namespace=k8s.io Mar 17 17:39:15.618919 containerd[1622]: time="2025-03-17T17:39:15.618910587Z" level=warning msg="cleaning up after shim disconnected" id=87c068284e06c84bec06fa05296b9ac53d3864abe92cd7f22290cfe7eafc38d8 namespace=k8s.io Mar 17 17:39:15.618919 containerd[1622]: time="2025-03-17T17:39:15.618919028Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:39:15.652920 kubelet[3064]: I0317 17:39:15.648965 3064 topology_manager.go:215] "Topology Admit Handler" podUID="ec122e0b-85ab-491a-9aaa-6410b3df3402" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nwmf6" Mar 17 17:39:15.652920 kubelet[3064]: I0317 17:39:15.651018 3064 topology_manager.go:215] "Topology Admit Handler" podUID="e0f85a4a-625a-454c-bfe1-41a08087ea5f" podNamespace="calico-system" podName="calico-kube-controllers-6b68c854f8-zw5m5" Mar 17 17:39:15.652920 kubelet[3064]: I0317 17:39:15.651153 3064 topology_manager.go:215] "Topology Admit Handler" podUID="e34046eb-2ab6-4ffc-b664-99e06f707e68" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7t9bj" Mar 17 17:39:15.660354 kubelet[3064]: I0317 17:39:15.658981 3064 topology_manager.go:215] "Topology Admit Handler" podUID="aad6eeea-6774-44b9-911a-6a172c0b0cc4" podNamespace="calico-apiserver" podName="calico-apiserver-8bd6cf464-82tgf" Mar 17 17:39:15.660354 kubelet[3064]: I0317 17:39:15.659140 3064 topology_manager.go:215] "Topology Admit Handler" podUID="734e536d-3518-4cef-9dfc-3553408c92a2" 
podNamespace="calico-apiserver" podName="calico-apiserver-8bd6cf464-lpb8f" Mar 17 17:39:15.728363 kubelet[3064]: I0317 17:39:15.728128 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwnr4\" (UniqueName: \"kubernetes.io/projected/e0f85a4a-625a-454c-bfe1-41a08087ea5f-kube-api-access-cwnr4\") pod \"calico-kube-controllers-6b68c854f8-zw5m5\" (UID: \"e0f85a4a-625a-454c-bfe1-41a08087ea5f\") " pod="calico-system/calico-kube-controllers-6b68c854f8-zw5m5" Mar 17 17:39:15.728934 kubelet[3064]: I0317 17:39:15.728868 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfkcc\" (UniqueName: \"kubernetes.io/projected/aad6eeea-6774-44b9-911a-6a172c0b0cc4-kube-api-access-rfkcc\") pod \"calico-apiserver-8bd6cf464-82tgf\" (UID: \"aad6eeea-6774-44b9-911a-6a172c0b0cc4\") " pod="calico-apiserver/calico-apiserver-8bd6cf464-82tgf" Mar 17 17:39:15.729072 kubelet[3064]: I0317 17:39:15.729057 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zckmf\" (UniqueName: \"kubernetes.io/projected/734e536d-3518-4cef-9dfc-3553408c92a2-kube-api-access-zckmf\") pod \"calico-apiserver-8bd6cf464-lpb8f\" (UID: \"734e536d-3518-4cef-9dfc-3553408c92a2\") " pod="calico-apiserver/calico-apiserver-8bd6cf464-lpb8f" Mar 17 17:39:15.729214 kubelet[3064]: I0317 17:39:15.729199 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec122e0b-85ab-491a-9aaa-6410b3df3402-config-volume\") pod \"coredns-7db6d8ff4d-nwmf6\" (UID: \"ec122e0b-85ab-491a-9aaa-6410b3df3402\") " pod="kube-system/coredns-7db6d8ff4d-nwmf6" Mar 17 17:39:15.729318 kubelet[3064]: I0317 17:39:15.729305 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f2fw\" (UniqueName: 
\"kubernetes.io/projected/ec122e0b-85ab-491a-9aaa-6410b3df3402-kube-api-access-7f2fw\") pod \"coredns-7db6d8ff4d-nwmf6\" (UID: \"ec122e0b-85ab-491a-9aaa-6410b3df3402\") " pod="kube-system/coredns-7db6d8ff4d-nwmf6" Mar 17 17:39:15.729450 kubelet[3064]: I0317 17:39:15.729424 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhrpk\" (UniqueName: \"kubernetes.io/projected/e34046eb-2ab6-4ffc-b664-99e06f707e68-kube-api-access-jhrpk\") pod \"coredns-7db6d8ff4d-7t9bj\" (UID: \"e34046eb-2ab6-4ffc-b664-99e06f707e68\") " pod="kube-system/coredns-7db6d8ff4d-7t9bj" Mar 17 17:39:15.729582 kubelet[3064]: I0317 17:39:15.729567 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e34046eb-2ab6-4ffc-b664-99e06f707e68-config-volume\") pod \"coredns-7db6d8ff4d-7t9bj\" (UID: \"e34046eb-2ab6-4ffc-b664-99e06f707e68\") " pod="kube-system/coredns-7db6d8ff4d-7t9bj" Mar 17 17:39:15.729720 kubelet[3064]: I0317 17:39:15.729706 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/aad6eeea-6774-44b9-911a-6a172c0b0cc4-calico-apiserver-certs\") pod \"calico-apiserver-8bd6cf464-82tgf\" (UID: \"aad6eeea-6774-44b9-911a-6a172c0b0cc4\") " pod="calico-apiserver/calico-apiserver-8bd6cf464-82tgf" Mar 17 17:39:15.729887 kubelet[3064]: I0317 17:39:15.729850 3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0f85a4a-625a-454c-bfe1-41a08087ea5f-tigera-ca-bundle\") pod \"calico-kube-controllers-6b68c854f8-zw5m5\" (UID: \"e0f85a4a-625a-454c-bfe1-41a08087ea5f\") " pod="calico-system/calico-kube-controllers-6b68c854f8-zw5m5" Mar 17 17:39:15.729934 kubelet[3064]: I0317 17:39:15.729917 3064 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/734e536d-3518-4cef-9dfc-3553408c92a2-calico-apiserver-certs\") pod \"calico-apiserver-8bd6cf464-lpb8f\" (UID: \"734e536d-3518-4cef-9dfc-3553408c92a2\") " pod="calico-apiserver/calico-apiserver-8bd6cf464-lpb8f" Mar 17 17:39:15.864036 containerd[1622]: time="2025-03-17T17:39:15.863816010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\"" Mar 17 17:39:15.965025 containerd[1622]: time="2025-03-17T17:39:15.964799096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nwmf6,Uid:ec122e0b-85ab-491a-9aaa-6410b3df3402,Namespace:kube-system,Attempt:0,}" Mar 17 17:39:15.973125 containerd[1622]: time="2025-03-17T17:39:15.972893797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7t9bj,Uid:e34046eb-2ab6-4ffc-b664-99e06f707e68,Namespace:kube-system,Attempt:0,}" Mar 17 17:39:15.984377 containerd[1622]: time="2025-03-17T17:39:15.982757420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bd6cf464-82tgf,Uid:aad6eeea-6774-44b9-911a-6a172c0b0cc4,Namespace:calico-apiserver,Attempt:0,}" Mar 17 17:39:15.985223 containerd[1622]: time="2025-03-17T17:39:15.985178722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bd6cf464-lpb8f,Uid:734e536d-3518-4cef-9dfc-3553408c92a2,Namespace:calico-apiserver,Attempt:0,}" Mar 17 17:39:15.985523 containerd[1622]: time="2025-03-17T17:39:15.985497951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b68c854f8-zw5m5,Uid:e0f85a4a-625a-454c-bfe1-41a08087ea5f,Namespace:calico-system,Attempt:0,}" Mar 17 17:39:16.175886 containerd[1622]: time="2025-03-17T17:39:16.175557504Z" level=error msg="Failed to destroy network for sandbox \"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:16.177955 containerd[1622]: time="2025-03-17T17:39:16.177122046Z" level=error msg="encountered an error cleaning up failed sandbox \"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:16.177955 containerd[1622]: time="2025-03-17T17:39:16.177200453Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bd6cf464-82tgf,Uid:aad6eeea-6774-44b9-911a-6a172c0b0cc4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:16.178390 kubelet[3064]: E0317 17:39:16.177413 3064 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:16.178390 kubelet[3064]: E0317 17:39:16.177487 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8bd6cf464-82tgf" Mar 17 17:39:16.178390 kubelet[3064]: E0317 17:39:16.177507 3064 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8bd6cf464-82tgf" Mar 17 17:39:16.178703 kubelet[3064]: E0317 17:39:16.177551 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8bd6cf464-82tgf_calico-apiserver(aad6eeea-6774-44b9-911a-6a172c0b0cc4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8bd6cf464-82tgf_calico-apiserver(aad6eeea-6774-44b9-911a-6a172c0b0cc4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8bd6cf464-82tgf" podUID="aad6eeea-6774-44b9-911a-6a172c0b0cc4" Mar 17 17:39:16.193537 containerd[1622]: time="2025-03-17T17:39:16.193443841Z" level=error msg="Failed to destroy network for sandbox \"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:16.193820 containerd[1622]: time="2025-03-17T17:39:16.193791872Z" level=error msg="encountered an error cleaning up failed sandbox 
\"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:16.193886 containerd[1622]: time="2025-03-17T17:39:16.193863719Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nwmf6,Uid:ec122e0b-85ab-491a-9aaa-6410b3df3402,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:16.194046 containerd[1622]: time="2025-03-17T17:39:16.194023853Z" level=error msg="Failed to destroy network for sandbox \"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:16.194298 kubelet[3064]: E0317 17:39:16.194252 3064 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:16.194474 kubelet[3064]: E0317 17:39:16.194314 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nwmf6" Mar 17 17:39:16.194474 kubelet[3064]: E0317 17:39:16.194336 3064 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nwmf6" Mar 17 17:39:16.194474 kubelet[3064]: E0317 17:39:16.194398 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nwmf6_kube-system(ec122e0b-85ab-491a-9aaa-6410b3df3402)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nwmf6_kube-system(ec122e0b-85ab-491a-9aaa-6410b3df3402)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nwmf6" podUID="ec122e0b-85ab-491a-9aaa-6410b3df3402" Mar 17 17:39:16.196632 containerd[1622]: time="2025-03-17T17:39:16.196310820Z" level=error msg="encountered an error cleaning up failed sandbox \"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:16.197038 containerd[1622]: time="2025-03-17T17:39:16.196609287Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7t9bj,Uid:e34046eb-2ab6-4ffc-b664-99e06f707e68,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:16.197442 kubelet[3064]: E0317 17:39:16.197300 3064 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:16.197442 kubelet[3064]: E0317 17:39:16.197375 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7t9bj" Mar 17 17:39:16.197442 kubelet[3064]: E0317 17:39:16.197397 3064 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7t9bj" Mar 17 17:39:16.197566 kubelet[3064]: E0317 17:39:16.197437 3064 pod_workers.go:1298] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7t9bj_kube-system(e34046eb-2ab6-4ffc-b664-99e06f707e68)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7t9bj_kube-system(e34046eb-2ab6-4ffc-b664-99e06f707e68)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7t9bj" podUID="e34046eb-2ab6-4ffc-b664-99e06f707e68" Mar 17 17:39:16.213044 containerd[1622]: time="2025-03-17T17:39:16.212757906Z" level=error msg="Failed to destroy network for sandbox \"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:16.214009 containerd[1622]: time="2025-03-17T17:39:16.213941933Z" level=error msg="encountered an error cleaning up failed sandbox \"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:16.214661 containerd[1622]: time="2025-03-17T17:39:16.214130790Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bd6cf464-lpb8f,Uid:734e536d-3518-4cef-9dfc-3553408c92a2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:16.214789 kubelet[3064]: E0317 17:39:16.214387 3064 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:16.214789 kubelet[3064]: E0317 17:39:16.214520 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8bd6cf464-lpb8f" Mar 17 17:39:16.214789 kubelet[3064]: E0317 17:39:16.214542 3064 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8bd6cf464-lpb8f" Mar 17 17:39:16.214929 kubelet[3064]: E0317 17:39:16.214592 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8bd6cf464-lpb8f_calico-apiserver(734e536d-3518-4cef-9dfc-3553408c92a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8bd6cf464-lpb8f_calico-apiserver(734e536d-3518-4cef-9dfc-3553408c92a2)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8bd6cf464-lpb8f" podUID="734e536d-3518-4cef-9dfc-3553408c92a2" Mar 17 17:39:16.218734 containerd[1622]: time="2025-03-17T17:39:16.218098909Z" level=error msg="Failed to destroy network for sandbox \"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:16.218734 containerd[1622]: time="2025-03-17T17:39:16.218650198Z" level=error msg="encountered an error cleaning up failed sandbox \"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:16.219109 containerd[1622]: time="2025-03-17T17:39:16.218977828Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b68c854f8-zw5m5,Uid:e0f85a4a-625a-454c-bfe1-41a08087ea5f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:16.219702 kubelet[3064]: E0317 17:39:16.219663 3064 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:16.219820 kubelet[3064]: E0317 17:39:16.219748 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b68c854f8-zw5m5" Mar 17 17:39:16.219820 kubelet[3064]: E0317 17:39:16.219770 3064 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b68c854f8-zw5m5" Mar 17 17:39:16.219885 kubelet[3064]: E0317 17:39:16.219828 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b68c854f8-zw5m5_calico-system(e0f85a4a-625a-454c-bfe1-41a08087ea5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b68c854f8-zw5m5_calico-system(e0f85a4a-625a-454c-bfe1-41a08087ea5f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b68c854f8-zw5m5" podUID="e0f85a4a-625a-454c-bfe1-41a08087ea5f" Mar 17 17:39:16.702195 containerd[1622]: time="2025-03-17T17:39:16.702147129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sjhlt,Uid:85a3e01e-db6f-4f8d-a16f-72e5c08e4d07,Namespace:calico-system,Attempt:0,}" Mar 17 17:39:16.765370 containerd[1622]: time="2025-03-17T17:39:16.765316157Z" level=error msg="Failed to destroy network for sandbox \"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:16.765822 containerd[1622]: time="2025-03-17T17:39:16.765791400Z" level=error msg="encountered an error cleaning up failed sandbox \"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:16.765962 containerd[1622]: time="2025-03-17T17:39:16.765940733Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sjhlt,Uid:85a3e01e-db6f-4f8d-a16f-72e5c08e4d07,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:16.766309 kubelet[3064]: E0317 17:39:16.766202 3064 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:16.766309 kubelet[3064]: E0317 17:39:16.766281 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sjhlt" Mar 17 17:39:16.766309 kubelet[3064]: E0317 17:39:16.766304 3064 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sjhlt" Mar 17 17:39:16.768263 kubelet[3064]: E0317 17:39:16.766429 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sjhlt_calico-system(85a3e01e-db6f-4f8d-a16f-72e5c08e4d07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sjhlt_calico-system(85a3e01e-db6f-4f8d-a16f-72e5c08e4d07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sjhlt" 
podUID="85a3e01e-db6f-4f8d-a16f-72e5c08e4d07" Mar 17 17:39:16.867393 kubelet[3064]: I0317 17:39:16.865926 3064 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88" Mar 17 17:39:16.867871 containerd[1622]: time="2025-03-17T17:39:16.867826900Z" level=info msg="StopPodSandbox for \"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\"" Mar 17 17:39:16.868108 containerd[1622]: time="2025-03-17T17:39:16.868080963Z" level=info msg="Ensure that sandbox fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88 in task-service has been cleanup successfully" Mar 17 17:39:16.868528 containerd[1622]: time="2025-03-17T17:39:16.868469878Z" level=info msg="TearDown network for sandbox \"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\" successfully" Mar 17 17:39:16.868528 containerd[1622]: time="2025-03-17T17:39:16.868488880Z" level=info msg="StopPodSandbox for \"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\" returns successfully" Mar 17 17:39:16.870074 containerd[1622]: time="2025-03-17T17:39:16.870017538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7t9bj,Uid:e34046eb-2ab6-4ffc-b664-99e06f707e68,Namespace:kube-system,Attempt:1,}" Mar 17 17:39:16.871259 kubelet[3064]: I0317 17:39:16.870527 3064 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e" Mar 17 17:39:16.872720 containerd[1622]: time="2025-03-17T17:39:16.872689699Z" level=info msg="StopPodSandbox for \"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\"" Mar 17 17:39:16.873453 containerd[1622]: time="2025-03-17T17:39:16.873027490Z" level=info msg="Ensure that sandbox 1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e in task-service has been cleanup successfully" Mar 17 17:39:16.874510 containerd[1622]: 
time="2025-03-17T17:39:16.873830882Z" level=info msg="TearDown network for sandbox \"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\" successfully" Mar 17 17:39:16.874510 containerd[1622]: time="2025-03-17T17:39:16.873852564Z" level=info msg="StopPodSandbox for \"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\" returns successfully" Mar 17 17:39:16.874674 containerd[1622]: time="2025-03-17T17:39:16.874655317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nwmf6,Uid:ec122e0b-85ab-491a-9aaa-6410b3df3402,Namespace:kube-system,Attempt:1,}" Mar 17 17:39:16.875383 kubelet[3064]: I0317 17:39:16.874896 3064 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad" Mar 17 17:39:16.876082 containerd[1622]: time="2025-03-17T17:39:16.876058284Z" level=info msg="StopPodSandbox for \"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\"" Mar 17 17:39:16.876577 containerd[1622]: time="2025-03-17T17:39:16.876453679Z" level=info msg="Ensure that sandbox c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad in task-service has been cleanup successfully" Mar 17 17:39:16.876577 containerd[1622]: time="2025-03-17T17:39:16.876614694Z" level=info msg="TearDown network for sandbox \"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\" successfully" Mar 17 17:39:16.876577 containerd[1622]: time="2025-03-17T17:39:16.876627815Z" level=info msg="StopPodSandbox for \"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\" returns successfully" Mar 17 17:39:16.877460 containerd[1622]: time="2025-03-17T17:39:16.877264193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bd6cf464-82tgf,Uid:aad6eeea-6774-44b9-911a-6a172c0b0cc4,Namespace:calico-apiserver,Attempt:1,}" Mar 17 17:39:16.878748 kubelet[3064]: I0317 17:39:16.878723 3064 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935" Mar 17 17:39:16.879610 containerd[1622]: time="2025-03-17T17:39:16.879583442Z" level=info msg="StopPodSandbox for \"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\"" Mar 17 17:39:16.879796 containerd[1622]: time="2025-03-17T17:39:16.879765219Z" level=info msg="Ensure that sandbox 2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935 in task-service has been cleanup successfully" Mar 17 17:39:16.879937 containerd[1622]: time="2025-03-17T17:39:16.879913792Z" level=info msg="TearDown network for sandbox \"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\" successfully" Mar 17 17:39:16.879974 containerd[1622]: time="2025-03-17T17:39:16.879959916Z" level=info msg="StopPodSandbox for \"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\" returns successfully" Mar 17 17:39:16.881750 containerd[1622]: time="2025-03-17T17:39:16.881645629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sjhlt,Uid:85a3e01e-db6f-4f8d-a16f-72e5c08e4d07,Namespace:calico-system,Attempt:1,}" Mar 17 17:39:16.883002 kubelet[3064]: I0317 17:39:16.882850 3064 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14" Mar 17 17:39:16.884875 containerd[1622]: time="2025-03-17T17:39:16.884846478Z" level=info msg="StopPodSandbox for \"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\"" Mar 17 17:39:16.885296 containerd[1622]: time="2025-03-17T17:39:16.885158946Z" level=info msg="Ensure that sandbox 4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14 in task-service has been cleanup successfully" Mar 17 17:39:16.885476 containerd[1622]: time="2025-03-17T17:39:16.885421250Z" level=info msg="TearDown network for sandbox 
\"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\" successfully" Mar 17 17:39:16.885476 containerd[1622]: time="2025-03-17T17:39:16.885441012Z" level=info msg="StopPodSandbox for \"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\" returns successfully" Mar 17 17:39:16.887163 kubelet[3064]: I0317 17:39:16.887144 3064 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f" Mar 17 17:39:16.890565 containerd[1622]: time="2025-03-17T17:39:16.889503139Z" level=info msg="StopPodSandbox for \"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\"" Mar 17 17:39:16.890565 containerd[1622]: time="2025-03-17T17:39:16.889687955Z" level=info msg="Ensure that sandbox f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f in task-service has been cleanup successfully" Mar 17 17:39:16.890565 containerd[1622]: time="2025-03-17T17:39:16.890433903Z" level=info msg="TearDown network for sandbox \"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\" successfully" Mar 17 17:39:16.890565 containerd[1622]: time="2025-03-17T17:39:16.890459465Z" level=info msg="StopPodSandbox for \"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\" returns successfully" Mar 17 17:39:16.891366 containerd[1622]: time="2025-03-17T17:39:16.891213213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b68c854f8-zw5m5,Uid:e0f85a4a-625a-454c-bfe1-41a08087ea5f,Namespace:calico-system,Attempt:1,}" Mar 17 17:39:16.894776 containerd[1622]: time="2025-03-17T17:39:16.894726451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bd6cf464-lpb8f,Uid:734e536d-3518-4cef-9dfc-3553408c92a2,Namespace:calico-apiserver,Attempt:1,}" Mar 17 17:39:16.901446 systemd[1]: run-netns-cni\x2d9062fcd3\x2dc63a\x2d4baa\x2dd39c\x2d221b2a992b89.mount: Deactivated successfully. 
Mar 17 17:39:16.902109 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88-shm.mount: Deactivated successfully. Mar 17 17:39:16.902497 systemd[1]: run-netns-cni\x2dfb341320\x2d9d6c\x2d2bef\x2d6bf8\x2d256f64ca07c0.mount: Deactivated successfully. Mar 17 17:39:16.904164 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e-shm.mount: Deactivated successfully. Mar 17 17:39:17.093617 containerd[1622]: time="2025-03-17T17:39:17.093513427Z" level=error msg="Failed to destroy network for sandbox \"97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:17.096378 containerd[1622]: time="2025-03-17T17:39:17.095707743Z" level=error msg="encountered an error cleaning up failed sandbox \"97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:17.096510 containerd[1622]: time="2025-03-17T17:39:17.096421606Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bd6cf464-82tgf,Uid:aad6eeea-6774-44b9-911a-6a172c0b0cc4,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:17.097297 kubelet[3064]: E0317 17:39:17.096928 3064 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:17.097297 kubelet[3064]: E0317 17:39:17.097001 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8bd6cf464-82tgf" Mar 17 17:39:17.097297 kubelet[3064]: E0317 17:39:17.097026 3064 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8bd6cf464-82tgf" Mar 17 17:39:17.097526 kubelet[3064]: E0317 17:39:17.097067 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8bd6cf464-82tgf_calico-apiserver(aad6eeea-6774-44b9-911a-6a172c0b0cc4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8bd6cf464-82tgf_calico-apiserver(aad6eeea-6774-44b9-911a-6a172c0b0cc4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8bd6cf464-82tgf" podUID="aad6eeea-6774-44b9-911a-6a172c0b0cc4" Mar 17 17:39:17.117498 containerd[1622]: time="2025-03-17T17:39:17.117452963Z" level=error msg="Failed to destroy network for sandbox \"69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:17.118063 containerd[1622]: time="2025-03-17T17:39:17.118021613Z" level=error msg="encountered an error cleaning up failed sandbox \"69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:17.118231 containerd[1622]: time="2025-03-17T17:39:17.118210030Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sjhlt,Uid:85a3e01e-db6f-4f8d-a16f-72e5c08e4d07,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:17.118918 kubelet[3064]: E0317 17:39:17.118656 3064 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Mar 17 17:39:17.118918 kubelet[3064]: E0317 17:39:17.118863 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sjhlt" Mar 17 17:39:17.118918 kubelet[3064]: E0317 17:39:17.118888 3064 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sjhlt" Mar 17 17:39:17.119740 kubelet[3064]: E0317 17:39:17.119138 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sjhlt_calico-system(85a3e01e-db6f-4f8d-a16f-72e5c08e4d07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sjhlt_calico-system(85a3e01e-db6f-4f8d-a16f-72e5c08e4d07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sjhlt" podUID="85a3e01e-db6f-4f8d-a16f-72e5c08e4d07" Mar 17 17:39:17.123520 containerd[1622]: time="2025-03-17T17:39:17.122849884Z" level=error msg="Failed to destroy network for sandbox 
\"06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:17.125241 containerd[1622]: time="2025-03-17T17:39:17.125197613Z" level=error msg="encountered an error cleaning up failed sandbox \"06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:17.125368 containerd[1622]: time="2025-03-17T17:39:17.125270700Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nwmf6,Uid:ec122e0b-85ab-491a-9aaa-6410b3df3402,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:17.126088 kubelet[3064]: E0317 17:39:17.125684 3064 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:17.126088 kubelet[3064]: E0317 17:39:17.125744 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nwmf6" Mar 17 17:39:17.126088 kubelet[3064]: E0317 17:39:17.125763 3064 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nwmf6" Mar 17 17:39:17.126228 kubelet[3064]: E0317 17:39:17.125854 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nwmf6_kube-system(ec122e0b-85ab-491a-9aaa-6410b3df3402)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nwmf6_kube-system(ec122e0b-85ab-491a-9aaa-6410b3df3402)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nwmf6" podUID="ec122e0b-85ab-491a-9aaa-6410b3df3402" Mar 17 17:39:17.135800 containerd[1622]: time="2025-03-17T17:39:17.135690830Z" level=error msg="Failed to destroy network for sandbox \"aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:17.139067 containerd[1622]: time="2025-03-17T17:39:17.137283532Z" level=error msg="encountered an error cleaning up 
failed sandbox \"aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:17.139560 containerd[1622]: time="2025-03-17T17:39:17.139178941Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7t9bj,Uid:e34046eb-2ab6-4ffc-b664-99e06f707e68,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:17.140250 kubelet[3064]: E0317 17:39:17.139748 3064 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:17.140250 kubelet[3064]: E0317 17:39:17.139816 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7t9bj" Mar 17 17:39:17.140250 kubelet[3064]: E0317 17:39:17.139836 3064 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7t9bj" Mar 17 17:39:17.140477 kubelet[3064]: E0317 17:39:17.139899 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7t9bj_kube-system(e34046eb-2ab6-4ffc-b664-99e06f707e68)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7t9bj_kube-system(e34046eb-2ab6-4ffc-b664-99e06f707e68)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7t9bj" podUID="e34046eb-2ab6-4ffc-b664-99e06f707e68" Mar 17 17:39:17.154559 containerd[1622]: time="2025-03-17T17:39:17.154374096Z" level=error msg="Failed to destroy network for sandbox \"e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:17.154849 containerd[1622]: time="2025-03-17T17:39:17.154680924Z" level=error msg="encountered an error cleaning up failed sandbox \"e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:17.154849 containerd[1622]: 
time="2025-03-17T17:39:17.154738889Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bd6cf464-lpb8f,Uid:734e536d-3518-4cef-9dfc-3553408c92a2,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:17.155436 kubelet[3064]: E0317 17:39:17.155075 3064 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:17.155436 kubelet[3064]: E0317 17:39:17.155129 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8bd6cf464-lpb8f" Mar 17 17:39:17.155436 kubelet[3064]: E0317 17:39:17.155146 3064 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8bd6cf464-lpb8f" Mar 17 
17:39:17.155579 kubelet[3064]: E0317 17:39:17.155185 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8bd6cf464-lpb8f_calico-apiserver(734e536d-3518-4cef-9dfc-3553408c92a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8bd6cf464-lpb8f_calico-apiserver(734e536d-3518-4cef-9dfc-3553408c92a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8bd6cf464-lpb8f" podUID="734e536d-3518-4cef-9dfc-3553408c92a2" Mar 17 17:39:17.158905 containerd[1622]: time="2025-03-17T17:39:17.158830534Z" level=error msg="Failed to destroy network for sandbox \"e630b8a164e4174c51d673bdf89de930bf44c4afd0b5098f340af1c3a0fb1923\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:17.159529 containerd[1622]: time="2025-03-17T17:39:17.159488752Z" level=error msg="encountered an error cleaning up failed sandbox \"e630b8a164e4174c51d673bdf89de930bf44c4afd0b5098f340af1c3a0fb1923\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:17.159605 containerd[1622]: time="2025-03-17T17:39:17.159581401Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b68c854f8-zw5m5,Uid:e0f85a4a-625a-454c-bfe1-41a08087ea5f,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox 
\"e630b8a164e4174c51d673bdf89de930bf44c4afd0b5098f340af1c3a0fb1923\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:17.160053 kubelet[3064]: E0317 17:39:17.159954 3064 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e630b8a164e4174c51d673bdf89de930bf44c4afd0b5098f340af1c3a0fb1923\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:17.160053 kubelet[3064]: E0317 17:39:17.160022 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e630b8a164e4174c51d673bdf89de930bf44c4afd0b5098f340af1c3a0fb1923\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b68c854f8-zw5m5" Mar 17 17:39:17.160268 kubelet[3064]: E0317 17:39:17.160041 3064 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e630b8a164e4174c51d673bdf89de930bf44c4afd0b5098f340af1c3a0fb1923\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b68c854f8-zw5m5" Mar 17 17:39:17.160268 kubelet[3064]: E0317 17:39:17.160211 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b68c854f8-zw5m5_calico-system(e0f85a4a-625a-454c-bfe1-41a08087ea5f)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"calico-kube-controllers-6b68c854f8-zw5m5_calico-system(e0f85a4a-625a-454c-bfe1-41a08087ea5f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e630b8a164e4174c51d673bdf89de930bf44c4afd0b5098f340af1c3a0fb1923\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b68c854f8-zw5m5" podUID="e0f85a4a-625a-454c-bfe1-41a08087ea5f" Mar 17 17:39:17.347363 update_engine[1610]: I20250317 17:39:17.347198 1610 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 17:39:17.347982 update_engine[1610]: I20250317 17:39:17.347509 1610 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 17:39:17.347982 update_engine[1610]: I20250317 17:39:17.347754 1610 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 17 17:39:17.349202 update_engine[1610]: E20250317 17:39:17.348152 1610 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 17:39:17.349202 update_engine[1610]: I20250317 17:39:17.348224 1610 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 17 17:39:17.891232 kubelet[3064]: I0317 17:39:17.891203 3064 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70" Mar 17 17:39:17.892179 containerd[1622]: time="2025-03-17T17:39:17.892136069Z" level=info msg="StopPodSandbox for \"06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70\"" Mar 17 17:39:17.893570 containerd[1622]: time="2025-03-17T17:39:17.892313365Z" level=info msg="Ensure that sandbox 06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70 in task-service has been cleanup successfully" Mar 17 17:39:17.893570 containerd[1622]: time="2025-03-17T17:39:17.892531104Z" level=info 
msg="TearDown network for sandbox \"06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70\" successfully" Mar 17 17:39:17.893570 containerd[1622]: time="2025-03-17T17:39:17.892546986Z" level=info msg="StopPodSandbox for \"06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70\" returns successfully" Mar 17 17:39:17.893570 containerd[1622]: time="2025-03-17T17:39:17.893314334Z" level=info msg="StopPodSandbox for \"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\"" Mar 17 17:39:17.893570 containerd[1622]: time="2025-03-17T17:39:17.893414263Z" level=info msg="TearDown network for sandbox \"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\" successfully" Mar 17 17:39:17.893570 containerd[1622]: time="2025-03-17T17:39:17.893425424Z" level=info msg="StopPodSandbox for \"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\" returns successfully" Mar 17 17:39:17.895921 containerd[1622]: time="2025-03-17T17:39:17.894447955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nwmf6,Uid:ec122e0b-85ab-491a-9aaa-6410b3df3402,Namespace:kube-system,Attempt:2,}" Mar 17 17:39:17.902376 kubelet[3064]: I0317 17:39:17.898628 3064 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28" Mar 17 17:39:17.902101 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231-shm.mount: Deactivated successfully. Mar 17 17:39:17.902242 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea-shm.mount: Deactivated successfully. Mar 17 17:39:17.902327 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28-shm.mount: Deactivated successfully. 
Mar 17 17:39:17.902736 systemd[1]: run-netns-cni\x2d64b9083f\x2d12bc\x2d65ea\x2df6b1\x2da89f47bf4efc.mount: Deactivated successfully. Mar 17 17:39:17.903002 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70-shm.mount: Deactivated successfully. Mar 17 17:39:17.903126 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0-shm.mount: Deactivated successfully. Mar 17 17:39:17.905761 containerd[1622]: time="2025-03-17T17:39:17.904787278Z" level=info msg="StopPodSandbox for \"69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28\"" Mar 17 17:39:17.908714 kubelet[3064]: I0317 17:39:17.908073 3064 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e630b8a164e4174c51d673bdf89de930bf44c4afd0b5098f340af1c3a0fb1923" Mar 17 17:39:17.911331 containerd[1622]: time="2025-03-17T17:39:17.908437083Z" level=info msg="Ensure that sandbox 69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28 in task-service has been cleanup successfully" Mar 17 17:39:17.914746 containerd[1622]: time="2025-03-17T17:39:17.911149045Z" level=info msg="StopPodSandbox for \"e630b8a164e4174c51d673bdf89de930bf44c4afd0b5098f340af1c3a0fb1923\"" Mar 17 17:39:17.915556 systemd[1]: run-netns-cni\x2d9eae7743\x2d067a\x2d28da\x2db859\x2d0804a17781dc.mount: Deactivated successfully. 
Mar 17 17:39:17.917123 containerd[1622]: time="2025-03-17T17:39:17.916969444Z" level=info msg="Ensure that sandbox e630b8a164e4174c51d673bdf89de930bf44c4afd0b5098f340af1c3a0fb1923 in task-service has been cleanup successfully" Mar 17 17:39:17.918301 containerd[1622]: time="2025-03-17T17:39:17.917594100Z" level=info msg="TearDown network for sandbox \"e630b8a164e4174c51d673bdf89de930bf44c4afd0b5098f340af1c3a0fb1923\" successfully" Mar 17 17:39:17.918455 containerd[1622]: time="2025-03-17T17:39:17.918427454Z" level=info msg="StopPodSandbox for \"e630b8a164e4174c51d673bdf89de930bf44c4afd0b5098f340af1c3a0fb1923\" returns successfully" Mar 17 17:39:17.918642 containerd[1622]: time="2025-03-17T17:39:17.918449936Z" level=info msg="TearDown network for sandbox \"69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28\" successfully" Mar 17 17:39:17.918642 containerd[1622]: time="2025-03-17T17:39:17.918601710Z" level=info msg="StopPodSandbox for \"69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28\" returns successfully" Mar 17 17:39:17.920929 systemd[1]: run-netns-cni\x2d6e77551b\x2d7ca5\x2d374c\x2d09b9\x2de4c877d5affb.mount: Deactivated successfully. 
Mar 17 17:39:17.921316 containerd[1622]: time="2025-03-17T17:39:17.921275669Z" level=info msg="StopPodSandbox for \"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\"" Mar 17 17:39:17.921636 containerd[1622]: time="2025-03-17T17:39:17.921612939Z" level=info msg="TearDown network for sandbox \"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\" successfully" Mar 17 17:39:17.921670 containerd[1622]: time="2025-03-17T17:39:17.921634701Z" level=info msg="StopPodSandbox for \"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\" returns successfully" Mar 17 17:39:17.921693 containerd[1622]: time="2025-03-17T17:39:17.921671184Z" level=info msg="StopPodSandbox for \"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\"" Mar 17 17:39:17.921758 containerd[1622]: time="2025-03-17T17:39:17.921726989Z" level=info msg="TearDown network for sandbox \"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\" successfully" Mar 17 17:39:17.923133 containerd[1622]: time="2025-03-17T17:39:17.923085350Z" level=info msg="StopPodSandbox for \"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\" returns successfully" Mar 17 17:39:17.927519 containerd[1622]: time="2025-03-17T17:39:17.925721945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sjhlt,Uid:85a3e01e-db6f-4f8d-a16f-72e5c08e4d07,Namespace:calico-system,Attempt:2,}" Mar 17 17:39:17.927683 containerd[1622]: time="2025-03-17T17:39:17.927652197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b68c854f8-zw5m5,Uid:e0f85a4a-625a-454c-bfe1-41a08087ea5f,Namespace:calico-system,Attempt:2,}" Mar 17 17:39:17.929574 kubelet[3064]: I0317 17:39:17.928036 3064 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea" Mar 17 17:39:17.931359 containerd[1622]: time="2025-03-17T17:39:17.930654505Z" level=info 
msg="StopPodSandbox for \"97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea\"" Mar 17 17:39:17.931359 containerd[1622]: time="2025-03-17T17:39:17.930841562Z" level=info msg="Ensure that sandbox 97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea in task-service has been cleanup successfully" Mar 17 17:39:17.937101 systemd[1]: run-netns-cni\x2de83b8f07\x2d1f07\x2d8ad6\x2dc69f\x2d8d2e210cc9bd.mount: Deactivated successfully. Mar 17 17:39:17.941781 kubelet[3064]: I0317 17:39:17.941628 3064 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231" Mar 17 17:39:17.942194 containerd[1622]: time="2025-03-17T17:39:17.941975555Z" level=info msg="TearDown network for sandbox \"97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea\" successfully" Mar 17 17:39:17.942194 containerd[1622]: time="2025-03-17T17:39:17.942127009Z" level=info msg="StopPodSandbox for \"97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea\" returns successfully" Mar 17 17:39:17.943598 containerd[1622]: time="2025-03-17T17:39:17.943557776Z" level=info msg="StopPodSandbox for \"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\"" Mar 17 17:39:17.943727 containerd[1622]: time="2025-03-17T17:39:17.943656665Z" level=info msg="TearDown network for sandbox \"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\" successfully" Mar 17 17:39:17.943727 containerd[1622]: time="2025-03-17T17:39:17.943666546Z" level=info msg="StopPodSandbox for \"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\" returns successfully" Mar 17 17:39:17.943988 containerd[1622]: time="2025-03-17T17:39:17.943891686Z" level=info msg="StopPodSandbox for \"e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231\"" Mar 17 17:39:17.944089 containerd[1622]: time="2025-03-17T17:39:17.944039979Z" level=info msg="Ensure that sandbox 
e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231 in task-service has been cleanup successfully" Mar 17 17:39:17.946727 containerd[1622]: time="2025-03-17T17:39:17.946219054Z" level=info msg="TearDown network for sandbox \"e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231\" successfully" Mar 17 17:39:17.946727 containerd[1622]: time="2025-03-17T17:39:17.946369787Z" level=info msg="StopPodSandbox for \"e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231\" returns successfully" Mar 17 17:39:17.949411 containerd[1622]: time="2025-03-17T17:39:17.949120792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bd6cf464-82tgf,Uid:aad6eeea-6774-44b9-911a-6a172c0b0cc4,Namespace:calico-apiserver,Attempt:2,}" Mar 17 17:39:17.950706 containerd[1622]: time="2025-03-17T17:39:17.949178878Z" level=info msg="StopPodSandbox for \"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\"" Mar 17 17:39:17.950969 containerd[1622]: time="2025-03-17T17:39:17.950811063Z" level=info msg="TearDown network for sandbox \"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\" successfully" Mar 17 17:39:17.950969 containerd[1622]: time="2025-03-17T17:39:17.950830065Z" level=info msg="StopPodSandbox for \"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\" returns successfully" Mar 17 17:39:17.952300 containerd[1622]: time="2025-03-17T17:39:17.952168024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bd6cf464-lpb8f,Uid:734e536d-3518-4cef-9dfc-3553408c92a2,Namespace:calico-apiserver,Attempt:2,}" Mar 17 17:39:17.953648 kubelet[3064]: I0317 17:39:17.953558 3064 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0" Mar 17 17:39:17.955870 containerd[1622]: time="2025-03-17T17:39:17.955574048Z" level=info msg="StopPodSandbox for 
\"aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0\"" Mar 17 17:39:17.955870 containerd[1622]: time="2025-03-17T17:39:17.955734502Z" level=info msg="Ensure that sandbox aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0 in task-service has been cleanup successfully" Mar 17 17:39:17.957817 containerd[1622]: time="2025-03-17T17:39:17.957680316Z" level=info msg="TearDown network for sandbox \"aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0\" successfully" Mar 17 17:39:17.957989 containerd[1622]: time="2025-03-17T17:39:17.957972382Z" level=info msg="StopPodSandbox for \"aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0\" returns successfully" Mar 17 17:39:17.958590 containerd[1622]: time="2025-03-17T17:39:17.958565235Z" level=info msg="StopPodSandbox for \"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\"" Mar 17 17:39:17.960029 containerd[1622]: time="2025-03-17T17:39:17.959989922Z" level=info msg="TearDown network for sandbox \"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\" successfully" Mar 17 17:39:17.960136 containerd[1622]: time="2025-03-17T17:39:17.960121894Z" level=info msg="StopPodSandbox for \"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\" returns successfully" Mar 17 17:39:17.961049 containerd[1622]: time="2025-03-17T17:39:17.961016214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7t9bj,Uid:e34046eb-2ab6-4ffc-b664-99e06f707e68,Namespace:kube-system,Attempt:2,}" Mar 17 17:39:18.150875 containerd[1622]: time="2025-03-17T17:39:18.150676045Z" level=error msg="Failed to destroy network for sandbox \"7db76fc51ad3a900e10a0528dfa7316a504921fabbd370153e59fad0d0d2243b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:18.155721 containerd[1622]: 
time="2025-03-17T17:39:18.155181562Z" level=error msg="encountered an error cleaning up failed sandbox \"7db76fc51ad3a900e10a0528dfa7316a504921fabbd370153e59fad0d0d2243b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:18.155721 containerd[1622]: time="2025-03-17T17:39:18.155296252Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nwmf6,Uid:ec122e0b-85ab-491a-9aaa-6410b3df3402,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"7db76fc51ad3a900e10a0528dfa7316a504921fabbd370153e59fad0d0d2243b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:18.157720 kubelet[3064]: E0317 17:39:18.157679 3064 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7db76fc51ad3a900e10a0528dfa7316a504921fabbd370153e59fad0d0d2243b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:18.157932 kubelet[3064]: E0317 17:39:18.157911 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7db76fc51ad3a900e10a0528dfa7316a504921fabbd370153e59fad0d0d2243b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nwmf6" Mar 17 17:39:18.158102 kubelet[3064]: E0317 17:39:18.158082 3064 kuberuntime_manager.go:1166] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7db76fc51ad3a900e10a0528dfa7316a504921fabbd370153e59fad0d0d2243b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nwmf6" Mar 17 17:39:18.158227 kubelet[3064]: E0317 17:39:18.158200 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nwmf6_kube-system(ec122e0b-85ab-491a-9aaa-6410b3df3402)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nwmf6_kube-system(ec122e0b-85ab-491a-9aaa-6410b3df3402)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7db76fc51ad3a900e10a0528dfa7316a504921fabbd370153e59fad0d0d2243b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nwmf6" podUID="ec122e0b-85ab-491a-9aaa-6410b3df3402" Mar 17 17:39:18.194978 containerd[1622]: time="2025-03-17T17:39:18.194901661Z" level=error msg="Failed to destroy network for sandbox \"9e4cd7d542c36abb244b44b959f5e22e654f1839f1797e7e311442d2d3cd4f58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:18.195273 containerd[1622]: time="2025-03-17T17:39:18.195241051Z" level=error msg="encountered an error cleaning up failed sandbox \"9e4cd7d542c36abb244b44b959f5e22e654f1839f1797e7e311442d2d3cd4f58\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Mar 17 17:39:18.195330 containerd[1622]: time="2025-03-17T17:39:18.195310097Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sjhlt,Uid:85a3e01e-db6f-4f8d-a16f-72e5c08e4d07,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"9e4cd7d542c36abb244b44b959f5e22e654f1839f1797e7e311442d2d3cd4f58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:18.195670 kubelet[3064]: E0317 17:39:18.195636 3064 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e4cd7d542c36abb244b44b959f5e22e654f1839f1797e7e311442d2d3cd4f58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:18.195670 kubelet[3064]: E0317 17:39:18.195691 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e4cd7d542c36abb244b44b959f5e22e654f1839f1797e7e311442d2d3cd4f58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sjhlt" Mar 17 17:39:18.195950 kubelet[3064]: E0317 17:39:18.195710 3064 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e4cd7d542c36abb244b44b959f5e22e654f1839f1797e7e311442d2d3cd4f58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-sjhlt" Mar 17 17:39:18.195950 kubelet[3064]: E0317 17:39:18.195756 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sjhlt_calico-system(85a3e01e-db6f-4f8d-a16f-72e5c08e4d07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sjhlt_calico-system(85a3e01e-db6f-4f8d-a16f-72e5c08e4d07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e4cd7d542c36abb244b44b959f5e22e654f1839f1797e7e311442d2d3cd4f58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sjhlt" podUID="85a3e01e-db6f-4f8d-a16f-72e5c08e4d07" Mar 17 17:39:18.209877 containerd[1622]: time="2025-03-17T17:39:18.209827536Z" level=error msg="Failed to destroy network for sandbox \"67420eb2bcee611ccb5de8fd0908fb4044616f286ac4383b07eb1b29b84cb4b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:18.214433 containerd[1622]: time="2025-03-17T17:39:18.211773267Z" level=error msg="encountered an error cleaning up failed sandbox \"67420eb2bcee611ccb5de8fd0908fb4044616f286ac4383b07eb1b29b84cb4b0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:18.214557 containerd[1622]: time="2025-03-17T17:39:18.214471505Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bd6cf464-lpb8f,Uid:734e536d-3518-4cef-9dfc-3553408c92a2,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox 
\"67420eb2bcee611ccb5de8fd0908fb4044616f286ac4383b07eb1b29b84cb4b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:18.215484 kubelet[3064]: E0317 17:39:18.215354 3064 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67420eb2bcee611ccb5de8fd0908fb4044616f286ac4383b07eb1b29b84cb4b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:18.215484 kubelet[3064]: E0317 17:39:18.215439 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67420eb2bcee611ccb5de8fd0908fb4044616f286ac4383b07eb1b29b84cb4b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8bd6cf464-lpb8f" Mar 17 17:39:18.215484 kubelet[3064]: E0317 17:39:18.215469 3064 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67420eb2bcee611ccb5de8fd0908fb4044616f286ac4383b07eb1b29b84cb4b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8bd6cf464-lpb8f" Mar 17 17:39:18.215639 kubelet[3064]: E0317 17:39:18.215530 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8bd6cf464-lpb8f_calico-apiserver(734e536d-3518-4cef-9dfc-3553408c92a2)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-apiserver-8bd6cf464-lpb8f_calico-apiserver(734e536d-3518-4cef-9dfc-3553408c92a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67420eb2bcee611ccb5de8fd0908fb4044616f286ac4383b07eb1b29b84cb4b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8bd6cf464-lpb8f" podUID="734e536d-3518-4cef-9dfc-3553408c92a2" Mar 17 17:39:18.227541 containerd[1622]: time="2025-03-17T17:39:18.227302995Z" level=error msg="Failed to destroy network for sandbox \"d00034dc0b3d8c89e0f49a624a46b2c0ad634bac7e7b2ddceaca6d2627118bf6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:18.230069 containerd[1622]: time="2025-03-17T17:39:18.229846179Z" level=error msg="encountered an error cleaning up failed sandbox \"d00034dc0b3d8c89e0f49a624a46b2c0ad634bac7e7b2ddceaca6d2627118bf6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:18.230069 containerd[1622]: time="2025-03-17T17:39:18.229968070Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bd6cf464-82tgf,Uid:aad6eeea-6774-44b9-911a-6a172c0b0cc4,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d00034dc0b3d8c89e0f49a624a46b2c0ad634bac7e7b2ddceaca6d2627118bf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:18.230244 kubelet[3064]: E0317 17:39:18.230178 
3064 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d00034dc0b3d8c89e0f49a624a46b2c0ad634bac7e7b2ddceaca6d2627118bf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:18.230244 kubelet[3064]: E0317 17:39:18.230239 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d00034dc0b3d8c89e0f49a624a46b2c0ad634bac7e7b2ddceaca6d2627118bf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8bd6cf464-82tgf" Mar 17 17:39:18.230318 kubelet[3064]: E0317 17:39:18.230257 3064 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d00034dc0b3d8c89e0f49a624a46b2c0ad634bac7e7b2ddceaca6d2627118bf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8bd6cf464-82tgf" Mar 17 17:39:18.230318 kubelet[3064]: E0317 17:39:18.230295 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8bd6cf464-82tgf_calico-apiserver(aad6eeea-6774-44b9-911a-6a172c0b0cc4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8bd6cf464-82tgf_calico-apiserver(aad6eeea-6774-44b9-911a-6a172c0b0cc4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d00034dc0b3d8c89e0f49a624a46b2c0ad634bac7e7b2ddceaca6d2627118bf6\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8bd6cf464-82tgf" podUID="aad6eeea-6774-44b9-911a-6a172c0b0cc4" Mar 17 17:39:18.242231 containerd[1622]: time="2025-03-17T17:39:18.242110779Z" level=error msg="Failed to destroy network for sandbox \"e538ffa7ddd1f87cb357dc229aa52a9bb822c1d9c6fd442bec82a81448f398fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:18.242749 containerd[1622]: time="2025-03-17T17:39:18.242584101Z" level=error msg="encountered an error cleaning up failed sandbox \"e538ffa7ddd1f87cb357dc229aa52a9bb822c1d9c6fd442bec82a81448f398fa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:18.242749 containerd[1622]: time="2025-03-17T17:39:18.242648227Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b68c854f8-zw5m5,Uid:e0f85a4a-625a-454c-bfe1-41a08087ea5f,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e538ffa7ddd1f87cb357dc229aa52a9bb822c1d9c6fd442bec82a81448f398fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:18.243245 kubelet[3064]: E0317 17:39:18.243080 3064 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e538ffa7ddd1f87cb357dc229aa52a9bb822c1d9c6fd442bec82a81448f398fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:18.243245 kubelet[3064]: E0317 17:39:18.243154 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e538ffa7ddd1f87cb357dc229aa52a9bb822c1d9c6fd442bec82a81448f398fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b68c854f8-zw5m5" Mar 17 17:39:18.243245 kubelet[3064]: E0317 17:39:18.243172 3064 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e538ffa7ddd1f87cb357dc229aa52a9bb822c1d9c6fd442bec82a81448f398fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b68c854f8-zw5m5" Mar 17 17:39:18.243394 kubelet[3064]: E0317 17:39:18.243213 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b68c854f8-zw5m5_calico-system(e0f85a4a-625a-454c-bfe1-41a08087ea5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b68c854f8-zw5m5_calico-system(e0f85a4a-625a-454c-bfe1-41a08087ea5f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e538ffa7ddd1f87cb357dc229aa52a9bb822c1d9c6fd442bec82a81448f398fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b68c854f8-zw5m5" podUID="e0f85a4a-625a-454c-bfe1-41a08087ea5f" Mar 17 17:39:18.259598 
containerd[1622]: time="2025-03-17T17:39:18.259361539Z" level=error msg="Failed to destroy network for sandbox \"3b1ec915bfa2cdd945b055eedb01d7a83209b69fc70878e7078e5b14d56672e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:18.260371 containerd[1622]: time="2025-03-17T17:39:18.260269379Z" level=error msg="encountered an error cleaning up failed sandbox \"3b1ec915bfa2cdd945b055eedb01d7a83209b69fc70878e7078e5b14d56672e1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:18.260440 containerd[1622]: time="2025-03-17T17:39:18.260401190Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7t9bj,Uid:e34046eb-2ab6-4ffc-b664-99e06f707e68,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"3b1ec915bfa2cdd945b055eedb01d7a83209b69fc70878e7078e5b14d56672e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:18.260858 kubelet[3064]: E0317 17:39:18.260802 3064 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b1ec915bfa2cdd945b055eedb01d7a83209b69fc70878e7078e5b14d56672e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:18.260956 kubelet[3064]: E0317 17:39:18.260876 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"3b1ec915bfa2cdd945b055eedb01d7a83209b69fc70878e7078e5b14d56672e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7t9bj" Mar 17 17:39:18.260956 kubelet[3064]: E0317 17:39:18.260897 3064 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b1ec915bfa2cdd945b055eedb01d7a83209b69fc70878e7078e5b14d56672e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7t9bj" Mar 17 17:39:18.261016 kubelet[3064]: E0317 17:39:18.260967 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7t9bj_kube-system(e34046eb-2ab6-4ffc-b664-99e06f707e68)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7t9bj_kube-system(e34046eb-2ab6-4ffc-b664-99e06f707e68)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b1ec915bfa2cdd945b055eedb01d7a83209b69fc70878e7078e5b14d56672e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7t9bj" podUID="e34046eb-2ab6-4ffc-b664-99e06f707e68" Mar 17 17:39:18.902330 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7db76fc51ad3a900e10a0528dfa7316a504921fabbd370153e59fad0d0d2243b-shm.mount: Deactivated successfully. Mar 17 17:39:18.903420 systemd[1]: run-netns-cni\x2dc9eb557d\x2d2f0b\x2d44f5\x2d1f8c\x2dbdac43f87787.mount: Deactivated successfully. 
Mar 17 17:39:18.903510 systemd[1]: run-netns-cni\x2d5b768a0c\x2d04bd\x2d2660\x2d71dd\x2da1bbc94a5bc1.mount: Deactivated successfully. Mar 17 17:39:18.960544 kubelet[3064]: I0317 17:39:18.960374 3064 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d00034dc0b3d8c89e0f49a624a46b2c0ad634bac7e7b2ddceaca6d2627118bf6" Mar 17 17:39:18.962807 containerd[1622]: time="2025-03-17T17:39:18.961844017Z" level=info msg="StopPodSandbox for \"d00034dc0b3d8c89e0f49a624a46b2c0ad634bac7e7b2ddceaca6d2627118bf6\"" Mar 17 17:39:18.962807 containerd[1622]: time="2025-03-17T17:39:18.962093519Z" level=info msg="Ensure that sandbox d00034dc0b3d8c89e0f49a624a46b2c0ad634bac7e7b2ddceaca6d2627118bf6 in task-service has been cleanup successfully" Mar 17 17:39:18.966986 containerd[1622]: time="2025-03-17T17:39:18.966517549Z" level=info msg="TearDown network for sandbox \"d00034dc0b3d8c89e0f49a624a46b2c0ad634bac7e7b2ddceaca6d2627118bf6\" successfully" Mar 17 17:39:18.966986 containerd[1622]: time="2025-03-17T17:39:18.966829256Z" level=info msg="StopPodSandbox for \"d00034dc0b3d8c89e0f49a624a46b2c0ad634bac7e7b2ddceaca6d2627118bf6\" returns successfully" Mar 17 17:39:18.967916 containerd[1622]: time="2025-03-17T17:39:18.967668010Z" level=info msg="StopPodSandbox for \"97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea\"" Mar 17 17:39:18.967916 containerd[1622]: time="2025-03-17T17:39:18.967804582Z" level=info msg="TearDown network for sandbox \"97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea\" successfully" Mar 17 17:39:18.967916 containerd[1622]: time="2025-03-17T17:39:18.967828864Z" level=info msg="StopPodSandbox for \"97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea\" returns successfully" Mar 17 17:39:18.970045 containerd[1622]: time="2025-03-17T17:39:18.968276264Z" level=info msg="StopPodSandbox for \"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\"" Mar 17 17:39:18.970045 containerd[1622]: 
time="2025-03-17T17:39:18.968405475Z" level=info msg="TearDown network for sandbox \"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\" successfully" Mar 17 17:39:18.970045 containerd[1622]: time="2025-03-17T17:39:18.968417196Z" level=info msg="StopPodSandbox for \"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\" returns successfully" Mar 17 17:39:18.970045 containerd[1622]: time="2025-03-17T17:39:18.969126739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bd6cf464-82tgf,Uid:aad6eeea-6774-44b9-911a-6a172c0b0cc4,Namespace:calico-apiserver,Attempt:3,}" Mar 17 17:39:18.969162 systemd[1]: run-netns-cni\x2dcb0e0f9b\x2ddf0e\x2d5d91\x2d297c\x2d1eb0a86b70f3.mount: Deactivated successfully. Mar 17 17:39:18.970288 kubelet[3064]: I0317 17:39:18.970202 3064 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67420eb2bcee611ccb5de8fd0908fb4044616f286ac4383b07eb1b29b84cb4b0" Mar 17 17:39:18.976658 containerd[1622]: time="2025-03-17T17:39:18.976136036Z" level=info msg="StopPodSandbox for \"67420eb2bcee611ccb5de8fd0908fb4044616f286ac4383b07eb1b29b84cb4b0\"" Mar 17 17:39:18.981661 containerd[1622]: time="2025-03-17T17:39:18.980571147Z" level=info msg="Ensure that sandbox 67420eb2bcee611ccb5de8fd0908fb4044616f286ac4383b07eb1b29b84cb4b0 in task-service has been cleanup successfully" Mar 17 17:39:18.984405 containerd[1622]: time="2025-03-17T17:39:18.983938603Z" level=info msg="TearDown network for sandbox \"67420eb2bcee611ccb5de8fd0908fb4044616f286ac4383b07eb1b29b84cb4b0\" successfully" Mar 17 17:39:18.984405 containerd[1622]: time="2025-03-17T17:39:18.984007650Z" level=info msg="StopPodSandbox for \"67420eb2bcee611ccb5de8fd0908fb4044616f286ac4383b07eb1b29b84cb4b0\" returns successfully" Mar 17 17:39:18.986333 systemd[1]: run-netns-cni\x2d706aa030\x2d8cf8\x2dab08\x2d6ed5\x2d72f0fc4d9a0b.mount: Deactivated successfully. 
Mar 17 17:39:18.987506 containerd[1622]: time="2025-03-17T17:39:18.987120084Z" level=info msg="StopPodSandbox for \"e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231\"" Mar 17 17:39:18.987506 containerd[1622]: time="2025-03-17T17:39:18.987232854Z" level=info msg="TearDown network for sandbox \"e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231\" successfully" Mar 17 17:39:18.987506 containerd[1622]: time="2025-03-17T17:39:18.987243375Z" level=info msg="StopPodSandbox for \"e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231\" returns successfully" Mar 17 17:39:18.999920 containerd[1622]: time="2025-03-17T17:39:18.989717392Z" level=info msg="StopPodSandbox for \"3b1ec915bfa2cdd945b055eedb01d7a83209b69fc70878e7078e5b14d56672e1\"" Mar 17 17:39:18.999920 containerd[1622]: time="2025-03-17T17:39:18.989884407Z" level=info msg="Ensure that sandbox 3b1ec915bfa2cdd945b055eedb01d7a83209b69fc70878e7078e5b14d56672e1 in task-service has been cleanup successfully" Mar 17 17:39:18.999920 containerd[1622]: time="2025-03-17T17:39:18.990247999Z" level=info msg="StopPodSandbox for \"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\"" Mar 17 17:39:18.999920 containerd[1622]: time="2025-03-17T17:39:18.990333087Z" level=info msg="TearDown network for sandbox \"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\" successfully" Mar 17 17:39:18.999920 containerd[1622]: time="2025-03-17T17:39:18.990356289Z" level=info msg="StopPodSandbox for \"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\" returns successfully" Mar 17 17:39:18.999920 containerd[1622]: time="2025-03-17T17:39:18.994195947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bd6cf464-lpb8f,Uid:734e536d-3518-4cef-9dfc-3553408c92a2,Namespace:calico-apiserver,Attempt:3,}" Mar 17 17:39:18.999920 containerd[1622]: time="2025-03-17T17:39:18.994568460Z" level=info msg="TearDown network for sandbox 
\"3b1ec915bfa2cdd945b055eedb01d7a83209b69fc70878e7078e5b14d56672e1\" successfully" Mar 17 17:39:18.999920 containerd[1622]: time="2025-03-17T17:39:18.994587581Z" level=info msg="StopPodSandbox for \"3b1ec915bfa2cdd945b055eedb01d7a83209b69fc70878e7078e5b14d56672e1\" returns successfully" Mar 17 17:39:18.999920 containerd[1622]: time="2025-03-17T17:39:18.994954414Z" level=info msg="StopPodSandbox for \"aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0\"" Mar 17 17:39:18.999920 containerd[1622]: time="2025-03-17T17:39:18.995541265Z" level=info msg="TearDown network for sandbox \"aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0\" successfully" Mar 17 17:39:18.999920 containerd[1622]: time="2025-03-17T17:39:18.995559867Z" level=info msg="StopPodSandbox for \"aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0\" returns successfully" Mar 17 17:39:18.999920 containerd[1622]: time="2025-03-17T17:39:18.997198852Z" level=info msg="StopPodSandbox for \"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\"" Mar 17 17:39:18.999920 containerd[1622]: time="2025-03-17T17:39:18.997438953Z" level=info msg="TearDown network for sandbox \"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\" successfully" Mar 17 17:39:18.999920 containerd[1622]: time="2025-03-17T17:39:18.997854229Z" level=info msg="StopPodSandbox for \"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\" returns successfully" Mar 17 17:39:18.999920 containerd[1622]: time="2025-03-17T17:39:18.998878679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7t9bj,Uid:e34046eb-2ab6-4ffc-b664-99e06f707e68,Namespace:kube-system,Attempt:3,}" Mar 17 17:39:19.003554 kubelet[3064]: I0317 17:39:18.987714 3064 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b1ec915bfa2cdd945b055eedb01d7a83209b69fc70878e7078e5b14d56672e1" Mar 17 17:39:19.003554 kubelet[3064]: I0317 17:39:18.999584 3064 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7db76fc51ad3a900e10a0528dfa7316a504921fabbd370153e59fad0d0d2243b" Mar 17 17:39:19.001802 systemd[1]: run-netns-cni\x2df09b5798\x2dddc6\x2da56f\x2dea8d\x2d12c6e64b1ed9.mount: Deactivated successfully. Mar 17 17:39:19.008690 containerd[1622]: time="2025-03-17T17:39:19.007934030Z" level=info msg="StopPodSandbox for \"7db76fc51ad3a900e10a0528dfa7316a504921fabbd370153e59fad0d0d2243b\"" Mar 17 17:39:19.011734 containerd[1622]: time="2025-03-17T17:39:19.010291635Z" level=info msg="Ensure that sandbox 7db76fc51ad3a900e10a0528dfa7316a504921fabbd370153e59fad0d0d2243b in task-service has been cleanup successfully" Mar 17 17:39:19.012176 containerd[1622]: time="2025-03-17T17:39:19.012137356Z" level=info msg="TearDown network for sandbox \"7db76fc51ad3a900e10a0528dfa7316a504921fabbd370153e59fad0d0d2243b\" successfully" Mar 17 17:39:19.013930 containerd[1622]: time="2025-03-17T17:39:19.012382057Z" level=info msg="StopPodSandbox for \"7db76fc51ad3a900e10a0528dfa7316a504921fabbd370153e59fad0d0d2243b\" returns successfully" Mar 17 17:39:19.015740 kubelet[3064]: I0317 17:39:19.015713 3064 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e4cd7d542c36abb244b44b959f5e22e654f1839f1797e7e311442d2d3cd4f58" Mar 17 17:39:19.016607 containerd[1622]: time="2025-03-17T17:39:19.016549700Z" level=info msg="StopPodSandbox for \"06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70\"" Mar 17 17:39:19.017072 containerd[1622]: time="2025-03-17T17:39:19.017042182Z" level=info msg="TearDown network for sandbox \"06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70\" successfully" Mar 17 17:39:19.017153 containerd[1622]: time="2025-03-17T17:39:19.017064904Z" level=info msg="StopPodSandbox for \"06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70\" returns successfully" Mar 17 17:39:19.018774 containerd[1622]: time="2025-03-17T17:39:19.017615072Z" 
level=info msg="StopPodSandbox for \"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\"" Mar 17 17:39:19.018774 containerd[1622]: time="2025-03-17T17:39:19.017880535Z" level=info msg="StopPodSandbox for \"9e4cd7d542c36abb244b44b959f5e22e654f1839f1797e7e311442d2d3cd4f58\"" Mar 17 17:39:19.018774 containerd[1622]: time="2025-03-17T17:39:19.018129837Z" level=info msg="TearDown network for sandbox \"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\" successfully" Mar 17 17:39:19.018774 containerd[1622]: time="2025-03-17T17:39:19.018162640Z" level=info msg="StopPodSandbox for \"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\" returns successfully" Mar 17 17:39:19.018774 containerd[1622]: time="2025-03-17T17:39:19.018273169Z" level=info msg="Ensure that sandbox 9e4cd7d542c36abb244b44b959f5e22e654f1839f1797e7e311442d2d3cd4f58 in task-service has been cleanup successfully" Mar 17 17:39:19.018774 containerd[1622]: time="2025-03-17T17:39:19.018740650Z" level=info msg="TearDown network for sandbox \"9e4cd7d542c36abb244b44b959f5e22e654f1839f1797e7e311442d2d3cd4f58\" successfully" Mar 17 17:39:19.020992 containerd[1622]: time="2025-03-17T17:39:19.019129084Z" level=info msg="StopPodSandbox for \"9e4cd7d542c36abb244b44b959f5e22e654f1839f1797e7e311442d2d3cd4f58\" returns successfully" Mar 17 17:39:19.020992 containerd[1622]: time="2025-03-17T17:39:19.019695813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nwmf6,Uid:ec122e0b-85ab-491a-9aaa-6410b3df3402,Namespace:kube-system,Attempt:3,}" Mar 17 17:39:19.020992 containerd[1622]: time="2025-03-17T17:39:19.020186016Z" level=info msg="StopPodSandbox for \"69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28\"" Mar 17 17:39:19.020992 containerd[1622]: time="2025-03-17T17:39:19.020378073Z" level=info msg="TearDown network for sandbox \"69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28\" successfully" Mar 17 17:39:19.020992 
containerd[1622]: time="2025-03-17T17:39:19.020392354Z" level=info msg="StopPodSandbox for \"69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28\" returns successfully" Mar 17 17:39:19.021323 containerd[1622]: time="2025-03-17T17:39:19.020845833Z" level=info msg="StopPodSandbox for \"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\"" Mar 17 17:39:19.021323 containerd[1622]: time="2025-03-17T17:39:19.021131058Z" level=info msg="TearDown network for sandbox \"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\" successfully" Mar 17 17:39:19.021323 containerd[1622]: time="2025-03-17T17:39:19.021172502Z" level=info msg="StopPodSandbox for \"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\" returns successfully" Mar 17 17:39:19.024942 containerd[1622]: time="2025-03-17T17:39:19.024017109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sjhlt,Uid:85a3e01e-db6f-4f8d-a16f-72e5c08e4d07,Namespace:calico-system,Attempt:3,}" Mar 17 17:39:19.025273 kubelet[3064]: I0317 17:39:19.025242 3064 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e538ffa7ddd1f87cb357dc229aa52a9bb822c1d9c6fd442bec82a81448f398fa" Mar 17 17:39:19.030081 containerd[1622]: time="2025-03-17T17:39:19.028839049Z" level=info msg="StopPodSandbox for \"e538ffa7ddd1f87cb357dc229aa52a9bb822c1d9c6fd442bec82a81448f398fa\"" Mar 17 17:39:19.030264 containerd[1622]: time="2025-03-17T17:39:19.030235250Z" level=info msg="Ensure that sandbox e538ffa7ddd1f87cb357dc229aa52a9bb822c1d9c6fd442bec82a81448f398fa in task-service has been cleanup successfully" Mar 17 17:39:19.031703 containerd[1622]: time="2025-03-17T17:39:19.031663974Z" level=info msg="TearDown network for sandbox \"e538ffa7ddd1f87cb357dc229aa52a9bb822c1d9c6fd442bec82a81448f398fa\" successfully" Mar 17 17:39:19.031703 containerd[1622]: time="2025-03-17T17:39:19.031692537Z" level=info msg="StopPodSandbox for 
\"e538ffa7ddd1f87cb357dc229aa52a9bb822c1d9c6fd442bec82a81448f398fa\" returns successfully" Mar 17 17:39:19.035036 containerd[1622]: time="2025-03-17T17:39:19.034853412Z" level=info msg="StopPodSandbox for \"e630b8a164e4174c51d673bdf89de930bf44c4afd0b5098f340af1c3a0fb1923\"" Mar 17 17:39:19.035036 containerd[1622]: time="2025-03-17T17:39:19.034998345Z" level=info msg="TearDown network for sandbox \"e630b8a164e4174c51d673bdf89de930bf44c4afd0b5098f340af1c3a0fb1923\" successfully" Mar 17 17:39:19.035036 containerd[1622]: time="2025-03-17T17:39:19.035011586Z" level=info msg="StopPodSandbox for \"e630b8a164e4174c51d673bdf89de930bf44c4afd0b5098f340af1c3a0fb1923\" returns successfully" Mar 17 17:39:19.037327 containerd[1622]: time="2025-03-17T17:39:19.037289904Z" level=info msg="StopPodSandbox for \"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\"" Mar 17 17:39:19.037885 containerd[1622]: time="2025-03-17T17:39:19.037406634Z" level=info msg="TearDown network for sandbox \"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\" successfully" Mar 17 17:39:19.037885 containerd[1622]: time="2025-03-17T17:39:19.037418275Z" level=info msg="StopPodSandbox for \"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\" returns successfully" Mar 17 17:39:19.039252 containerd[1622]: time="2025-03-17T17:39:19.039216312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b68c854f8-zw5m5,Uid:e0f85a4a-625a-454c-bfe1-41a08087ea5f,Namespace:calico-system,Attempt:3,}" Mar 17 17:39:19.185216 containerd[1622]: time="2025-03-17T17:39:19.185052999Z" level=error msg="Failed to destroy network for sandbox \"d0b7f4a7e9b264a69e8d963ebb6cc97ba873c309408893c5857f0a90a6d3cfec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:19.187044 containerd[1622]: 
time="2025-03-17T17:39:19.186427119Z" level=error msg="encountered an error cleaning up failed sandbox \"d0b7f4a7e9b264a69e8d963ebb6cc97ba873c309408893c5857f0a90a6d3cfec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:19.187044 containerd[1622]: time="2025-03-17T17:39:19.187004929Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bd6cf464-82tgf,Uid:aad6eeea-6774-44b9-911a-6a172c0b0cc4,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d0b7f4a7e9b264a69e8d963ebb6cc97ba873c309408893c5857f0a90a6d3cfec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:19.187950 kubelet[3064]: E0317 17:39:19.187472 3064 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0b7f4a7e9b264a69e8d963ebb6cc97ba873c309408893c5857f0a90a6d3cfec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:19.187950 kubelet[3064]: E0317 17:39:19.187551 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0b7f4a7e9b264a69e8d963ebb6cc97ba873c309408893c5857f0a90a6d3cfec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8bd6cf464-82tgf" Mar 17 17:39:19.187950 kubelet[3064]: E0317 17:39:19.187577 3064 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0b7f4a7e9b264a69e8d963ebb6cc97ba873c309408893c5857f0a90a6d3cfec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8bd6cf464-82tgf" Mar 17 17:39:19.189712 kubelet[3064]: E0317 17:39:19.187659 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8bd6cf464-82tgf_calico-apiserver(aad6eeea-6774-44b9-911a-6a172c0b0cc4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8bd6cf464-82tgf_calico-apiserver(aad6eeea-6774-44b9-911a-6a172c0b0cc4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0b7f4a7e9b264a69e8d963ebb6cc97ba873c309408893c5857f0a90a6d3cfec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8bd6cf464-82tgf" podUID="aad6eeea-6774-44b9-911a-6a172c0b0cc4" Mar 17 17:39:19.236171 containerd[1622]: time="2025-03-17T17:39:19.235822296Z" level=error msg="Failed to destroy network for sandbox \"d61299a9a4422f1de1d53d3f24a9a00ed17c97cd8075070e69188e4dabf27991\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:19.238671 containerd[1622]: time="2025-03-17T17:39:19.238534212Z" level=error msg="encountered an error cleaning up failed sandbox \"d61299a9a4422f1de1d53d3f24a9a00ed17c97cd8075070e69188e4dabf27991\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:19.239922 containerd[1622]: time="2025-03-17T17:39:19.239144945Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7t9bj,Uid:e34046eb-2ab6-4ffc-b664-99e06f707e68,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d61299a9a4422f1de1d53d3f24a9a00ed17c97cd8075070e69188e4dabf27991\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:19.242586 kubelet[3064]: E0317 17:39:19.242508 3064 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d61299a9a4422f1de1d53d3f24a9a00ed17c97cd8075070e69188e4dabf27991\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:19.242586 kubelet[3064]: E0317 17:39:19.242569 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d61299a9a4422f1de1d53d3f24a9a00ed17c97cd8075070e69188e4dabf27991\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7t9bj" Mar 17 17:39:19.242586 kubelet[3064]: E0317 17:39:19.242588 3064 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d61299a9a4422f1de1d53d3f24a9a00ed17c97cd8075070e69188e4dabf27991\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7t9bj" Mar 17 17:39:19.243023 kubelet[3064]: E0317 17:39:19.242625 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7t9bj_kube-system(e34046eb-2ab6-4ffc-b664-99e06f707e68)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7t9bj_kube-system(e34046eb-2ab6-4ffc-b664-99e06f707e68)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d61299a9a4422f1de1d53d3f24a9a00ed17c97cd8075070e69188e4dabf27991\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7t9bj" podUID="e34046eb-2ab6-4ffc-b664-99e06f707e68" Mar 17 17:39:19.263832 containerd[1622]: time="2025-03-17T17:39:19.263638796Z" level=error msg="Failed to destroy network for sandbox \"9b3a93177b6cba2c68547b57e74bd90246e4634161f97b128c565c552c4bc14c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:19.264174 containerd[1622]: time="2025-03-17T17:39:19.264003468Z" level=error msg="encountered an error cleaning up failed sandbox \"9b3a93177b6cba2c68547b57e74bd90246e4634161f97b128c565c552c4bc14c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:19.264174 containerd[1622]: time="2025-03-17T17:39:19.264070034Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nwmf6,Uid:ec122e0b-85ab-491a-9aaa-6410b3df3402,Namespace:kube-system,Attempt:3,} failed, 
error" error="failed to setup network for sandbox \"9b3a93177b6cba2c68547b57e74bd90246e4634161f97b128c565c552c4bc14c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:19.264350 kubelet[3064]: E0317 17:39:19.264256 3064 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b3a93177b6cba2c68547b57e74bd90246e4634161f97b128c565c552c4bc14c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:19.264350 kubelet[3064]: E0317 17:39:19.264323 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b3a93177b6cba2c68547b57e74bd90246e4634161f97b128c565c552c4bc14c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nwmf6" Mar 17 17:39:19.264432 kubelet[3064]: E0317 17:39:19.264359 3064 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b3a93177b6cba2c68547b57e74bd90246e4634161f97b128c565c552c4bc14c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nwmf6" Mar 17 17:39:19.264432 kubelet[3064]: E0317 17:39:19.264396 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nwmf6_kube-system(ec122e0b-85ab-491a-9aaa-6410b3df3402)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nwmf6_kube-system(ec122e0b-85ab-491a-9aaa-6410b3df3402)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b3a93177b6cba2c68547b57e74bd90246e4634161f97b128c565c552c4bc14c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nwmf6" podUID="ec122e0b-85ab-491a-9aaa-6410b3df3402" Mar 17 17:39:19.306169 containerd[1622]: time="2025-03-17T17:39:19.305796144Z" level=error msg="Failed to destroy network for sandbox \"66cb341207d2868e40a116f95f505345959475ef8df6bb6f65820c3169eb4270\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:19.308167 containerd[1622]: time="2025-03-17T17:39:19.307819440Z" level=error msg="encountered an error cleaning up failed sandbox \"66cb341207d2868e40a116f95f505345959475ef8df6bb6f65820c3169eb4270\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:19.308167 containerd[1622]: time="2025-03-17T17:39:19.307917288Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sjhlt,Uid:85a3e01e-db6f-4f8d-a16f-72e5c08e4d07,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"66cb341207d2868e40a116f95f505345959475ef8df6bb6f65820c3169eb4270\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:19.309778 kubelet[3064]: E0317 17:39:19.309732 3064 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66cb341207d2868e40a116f95f505345959475ef8df6bb6f65820c3169eb4270\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:19.309910 kubelet[3064]: E0317 17:39:19.309793 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66cb341207d2868e40a116f95f505345959475ef8df6bb6f65820c3169eb4270\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sjhlt" Mar 17 17:39:19.309910 kubelet[3064]: E0317 17:39:19.309815 3064 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66cb341207d2868e40a116f95f505345959475ef8df6bb6f65820c3169eb4270\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sjhlt" Mar 17 17:39:19.309910 kubelet[3064]: E0317 17:39:19.309852 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sjhlt_calico-system(85a3e01e-db6f-4f8d-a16f-72e5c08e4d07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sjhlt_calico-system(85a3e01e-db6f-4f8d-a16f-72e5c08e4d07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66cb341207d2868e40a116f95f505345959475ef8df6bb6f65820c3169eb4270\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sjhlt" podUID="85a3e01e-db6f-4f8d-a16f-72e5c08e4d07" Mar 17 17:39:19.319884 containerd[1622]: time="2025-03-17T17:39:19.319836645Z" level=error msg="Failed to destroy network for sandbox \"44b779c8beee74c65914b58df2f4cb80cd7c544d1f66306793961b4d2a19a18b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:19.321194 containerd[1622]: time="2025-03-17T17:39:19.320959143Z" level=error msg="encountered an error cleaning up failed sandbox \"44b779c8beee74c65914b58df2f4cb80cd7c544d1f66306793961b4d2a19a18b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:19.321589 containerd[1622]: time="2025-03-17T17:39:19.321226046Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b68c854f8-zw5m5,Uid:e0f85a4a-625a-454c-bfe1-41a08087ea5f,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"44b779c8beee74c65914b58df2f4cb80cd7c544d1f66306793961b4d2a19a18b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:19.321944 kubelet[3064]: E0317 17:39:19.321909 3064 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44b779c8beee74c65914b58df2f4cb80cd7c544d1f66306793961b4d2a19a18b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 
17 17:39:19.322189 kubelet[3064]: E0317 17:39:19.322163 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44b779c8beee74c65914b58df2f4cb80cd7c544d1f66306793961b4d2a19a18b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b68c854f8-zw5m5" Mar 17 17:39:19.322220 kubelet[3064]: E0317 17:39:19.322195 3064 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44b779c8beee74c65914b58df2f4cb80cd7c544d1f66306793961b4d2a19a18b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b68c854f8-zw5m5" Mar 17 17:39:19.323594 kubelet[3064]: E0317 17:39:19.323477 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b68c854f8-zw5m5_calico-system(e0f85a4a-625a-454c-bfe1-41a08087ea5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b68c854f8-zw5m5_calico-system(e0f85a4a-625a-454c-bfe1-41a08087ea5f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44b779c8beee74c65914b58df2f4cb80cd7c544d1f66306793961b4d2a19a18b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b68c854f8-zw5m5" podUID="e0f85a4a-625a-454c-bfe1-41a08087ea5f" Mar 17 17:39:19.329680 containerd[1622]: time="2025-03-17T17:39:19.329568892Z" level=error msg="Failed to destroy network for sandbox 
\"fa28a56aa566b6cc3b377df4c0f542a82e7251867b1ce992a022a43e5ff34166\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:19.330090 containerd[1622]: time="2025-03-17T17:39:19.330030252Z" level=error msg="encountered an error cleaning up failed sandbox \"fa28a56aa566b6cc3b377df4c0f542a82e7251867b1ce992a022a43e5ff34166\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:19.330156 containerd[1622]: time="2025-03-17T17:39:19.330118100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bd6cf464-lpb8f,Uid:734e536d-3518-4cef-9dfc-3553408c92a2,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"fa28a56aa566b6cc3b377df4c0f542a82e7251867b1ce992a022a43e5ff34166\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:19.330811 kubelet[3064]: E0317 17:39:19.330329 3064 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa28a56aa566b6cc3b377df4c0f542a82e7251867b1ce992a022a43e5ff34166\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:19.330811 kubelet[3064]: E0317 17:39:19.330424 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa28a56aa566b6cc3b377df4c0f542a82e7251867b1ce992a022a43e5ff34166\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8bd6cf464-lpb8f" Mar 17 17:39:19.330811 kubelet[3064]: E0317 17:39:19.330443 3064 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa28a56aa566b6cc3b377df4c0f542a82e7251867b1ce992a022a43e5ff34166\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8bd6cf464-lpb8f" Mar 17 17:39:19.330934 kubelet[3064]: E0317 17:39:19.330482 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8bd6cf464-lpb8f_calico-apiserver(734e536d-3518-4cef-9dfc-3553408c92a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8bd6cf464-lpb8f_calico-apiserver(734e536d-3518-4cef-9dfc-3553408c92a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa28a56aa566b6cc3b377df4c0f542a82e7251867b1ce992a022a43e5ff34166\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8bd6cf464-lpb8f" podUID="734e536d-3518-4cef-9dfc-3553408c92a2" Mar 17 17:39:19.902600 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d0b7f4a7e9b264a69e8d963ebb6cc97ba873c309408893c5857f0a90a6d3cfec-shm.mount: Deactivated successfully. Mar 17 17:39:19.902746 systemd[1]: run-netns-cni\x2dfaeda132\x2de39e\x2d2058\x2d8143\x2da55c9b75f569.mount: Deactivated successfully. 
Mar 17 17:39:19.902832 systemd[1]: run-netns-cni\x2d403e1a3a\x2d8453\x2db5e0\x2ddf3e\x2d18d929561d61.mount: Deactivated successfully. Mar 17 17:39:19.902906 systemd[1]: run-netns-cni\x2d2e66a2fb\x2d025f\x2da4d8\x2dbd84\x2d46aba093c1a1.mount: Deactivated successfully. Mar 17 17:39:20.032167 kubelet[3064]: I0317 17:39:20.031379 3064 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44b779c8beee74c65914b58df2f4cb80cd7c544d1f66306793961b4d2a19a18b" Mar 17 17:39:20.032654 containerd[1622]: time="2025-03-17T17:39:20.031972169Z" level=info msg="StopPodSandbox for \"44b779c8beee74c65914b58df2f4cb80cd7c544d1f66306793961b4d2a19a18b\"" Mar 17 17:39:20.033656 containerd[1622]: time="2025-03-17T17:39:20.033188993Z" level=info msg="Ensure that sandbox 44b779c8beee74c65914b58df2f4cb80cd7c544d1f66306793961b4d2a19a18b in task-service has been cleanup successfully" Mar 17 17:39:20.035627 containerd[1622]: time="2025-03-17T17:39:20.035485871Z" level=info msg="TearDown network for sandbox \"44b779c8beee74c65914b58df2f4cb80cd7c544d1f66306793961b4d2a19a18b\" successfully" Mar 17 17:39:20.035627 containerd[1622]: time="2025-03-17T17:39:20.035534155Z" level=info msg="StopPodSandbox for \"44b779c8beee74c65914b58df2f4cb80cd7c544d1f66306793961b4d2a19a18b\" returns successfully" Mar 17 17:39:20.036972 containerd[1622]: time="2025-03-17T17:39:20.036198772Z" level=info msg="StopPodSandbox for \"e538ffa7ddd1f87cb357dc229aa52a9bb822c1d9c6fd442bec82a81448f398fa\"" Mar 17 17:39:20.036972 containerd[1622]: time="2025-03-17T17:39:20.036409790Z" level=info msg="TearDown network for sandbox \"e538ffa7ddd1f87cb357dc229aa52a9bb822c1d9c6fd442bec82a81448f398fa\" successfully" Mar 17 17:39:20.036972 containerd[1622]: time="2025-03-17T17:39:20.036443273Z" level=info msg="StopPodSandbox for \"e538ffa7ddd1f87cb357dc229aa52a9bb822c1d9c6fd442bec82a81448f398fa\" returns successfully" Mar 17 17:39:20.036972 containerd[1622]: time="2025-03-17T17:39:20.036948397Z" level=info 
msg="StopPodSandbox for \"e630b8a164e4174c51d673bdf89de930bf44c4afd0b5098f340af1c3a0fb1923\"" Mar 17 17:39:20.039637 containerd[1622]: time="2025-03-17T17:39:20.038538293Z" level=info msg="TearDown network for sandbox \"e630b8a164e4174c51d673bdf89de930bf44c4afd0b5098f340af1c3a0fb1923\" successfully" Mar 17 17:39:20.039637 containerd[1622]: time="2025-03-17T17:39:20.038560255Z" level=info msg="StopPodSandbox for \"e630b8a164e4174c51d673bdf89de930bf44c4afd0b5098f340af1c3a0fb1923\" returns successfully" Mar 17 17:39:20.038806 systemd[1]: run-netns-cni\x2d215edbb8\x2dd8aa\x2df31f\x2dfd51\x2dab3ffeb5de03.mount: Deactivated successfully. Mar 17 17:39:20.041368 containerd[1622]: time="2025-03-17T17:39:20.040941140Z" level=info msg="StopPodSandbox for \"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\"" Mar 17 17:39:20.041368 containerd[1622]: time="2025-03-17T17:39:20.041112955Z" level=info msg="TearDown network for sandbox \"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\" successfully" Mar 17 17:39:20.041368 containerd[1622]: time="2025-03-17T17:39:20.041124396Z" level=info msg="StopPodSandbox for \"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\" returns successfully" Mar 17 17:39:20.042221 containerd[1622]: time="2025-03-17T17:39:20.042180686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b68c854f8-zw5m5,Uid:e0f85a4a-625a-454c-bfe1-41a08087ea5f,Namespace:calico-system,Attempt:4,}" Mar 17 17:39:20.043912 kubelet[3064]: I0317 17:39:20.043879 3064 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa28a56aa566b6cc3b377df4c0f542a82e7251867b1ce992a022a43e5ff34166" Mar 17 17:39:20.044986 containerd[1622]: time="2025-03-17T17:39:20.044672660Z" level=info msg="StopPodSandbox for \"fa28a56aa566b6cc3b377df4c0f542a82e7251867b1ce992a022a43e5ff34166\"" Mar 17 17:39:20.044986 containerd[1622]: time="2025-03-17T17:39:20.044815193Z" level=info msg="Ensure that 
sandbox fa28a56aa566b6cc3b377df4c0f542a82e7251867b1ce992a022a43e5ff34166 in task-service has been cleanup successfully" Mar 17 17:39:20.048037 systemd[1]: run-netns-cni\x2dfaa03b20\x2d460c\x2dd7e3\x2d8a92\x2d4c7879732b15.mount: Deactivated successfully. Mar 17 17:39:20.051612 containerd[1622]: time="2025-03-17T17:39:20.051528810Z" level=info msg="TearDown network for sandbox \"fa28a56aa566b6cc3b377df4c0f542a82e7251867b1ce992a022a43e5ff34166\" successfully" Mar 17 17:39:20.051612 containerd[1622]: time="2025-03-17T17:39:20.051558652Z" level=info msg="StopPodSandbox for \"fa28a56aa566b6cc3b377df4c0f542a82e7251867b1ce992a022a43e5ff34166\" returns successfully" Mar 17 17:39:20.054693 containerd[1622]: time="2025-03-17T17:39:20.054663039Z" level=info msg="StopPodSandbox for \"67420eb2bcee611ccb5de8fd0908fb4044616f286ac4383b07eb1b29b84cb4b0\"" Mar 17 17:39:20.055652 kubelet[3064]: I0317 17:39:20.055194 3064 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d61299a9a4422f1de1d53d3f24a9a00ed17c97cd8075070e69188e4dabf27991" Mar 17 17:39:20.056617 containerd[1622]: time="2025-03-17T17:39:20.056474315Z" level=info msg="TearDown network for sandbox \"67420eb2bcee611ccb5de8fd0908fb4044616f286ac4383b07eb1b29b84cb4b0\" successfully" Mar 17 17:39:20.056617 containerd[1622]: time="2025-03-17T17:39:20.056502117Z" level=info msg="StopPodSandbox for \"67420eb2bcee611ccb5de8fd0908fb4044616f286ac4383b07eb1b29b84cb4b0\" returns successfully" Mar 17 17:39:20.060900 containerd[1622]: time="2025-03-17T17:39:20.059641787Z" level=info msg="StopPodSandbox for \"d61299a9a4422f1de1d53d3f24a9a00ed17c97cd8075070e69188e4dabf27991\"" Mar 17 17:39:20.060900 containerd[1622]: time="2025-03-17T17:39:20.059813482Z" level=info msg="Ensure that sandbox d61299a9a4422f1de1d53d3f24a9a00ed17c97cd8075070e69188e4dabf27991 in task-service has been cleanup successfully" Mar 17 17:39:20.060900 containerd[1622]: time="2025-03-17T17:39:20.059999258Z" level=info msg="StopPodSandbox 
for \"e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231\"" Mar 17 17:39:20.060900 containerd[1622]: time="2025-03-17T17:39:20.060107627Z" level=info msg="TearDown network for sandbox \"e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231\" successfully" Mar 17 17:39:20.060900 containerd[1622]: time="2025-03-17T17:39:20.060118668Z" level=info msg="StopPodSandbox for \"e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231\" returns successfully" Mar 17 17:39:20.063613 systemd[1]: run-netns-cni\x2d3654a962\x2d8c17\x2dfd13\x2d8046\x2d8b34b05a73e9.mount: Deactivated successfully. Mar 17 17:39:20.066297 containerd[1622]: time="2025-03-17T17:39:20.064970165Z" level=info msg="StopPodSandbox for \"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\"" Mar 17 17:39:20.066297 containerd[1622]: time="2025-03-17T17:39:20.065095176Z" level=info msg="TearDown network for sandbox \"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\" successfully" Mar 17 17:39:20.066297 containerd[1622]: time="2025-03-17T17:39:20.065119578Z" level=info msg="StopPodSandbox for \"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\" returns successfully" Mar 17 17:39:20.068128 containerd[1622]: time="2025-03-17T17:39:20.068094634Z" level=info msg="TearDown network for sandbox \"d61299a9a4422f1de1d53d3f24a9a00ed17c97cd8075070e69188e4dabf27991\" successfully" Mar 17 17:39:20.068280 containerd[1622]: time="2025-03-17T17:39:20.068262728Z" level=info msg="StopPodSandbox for \"d61299a9a4422f1de1d53d3f24a9a00ed17c97cd8075070e69188e4dabf27991\" returns successfully" Mar 17 17:39:20.069933 containerd[1622]: time="2025-03-17T17:39:20.069897228Z" level=info msg="StopPodSandbox for \"3b1ec915bfa2cdd945b055eedb01d7a83209b69fc70878e7078e5b14d56672e1\"" Mar 17 17:39:20.070027 containerd[1622]: time="2025-03-17T17:39:20.069990276Z" level=info msg="TearDown network for sandbox 
\"3b1ec915bfa2cdd945b055eedb01d7a83209b69fc70878e7078e5b14d56672e1\" successfully" Mar 17 17:39:20.070027 containerd[1622]: time="2025-03-17T17:39:20.070000517Z" level=info msg="StopPodSandbox for \"3b1ec915bfa2cdd945b055eedb01d7a83209b69fc70878e7078e5b14d56672e1\" returns successfully" Mar 17 17:39:20.070648 containerd[1622]: time="2025-03-17T17:39:20.070498520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bd6cf464-lpb8f,Uid:734e536d-3518-4cef-9dfc-3553408c92a2,Namespace:calico-apiserver,Attempt:4,}" Mar 17 17:39:20.071138 containerd[1622]: time="2025-03-17T17:39:20.071001283Z" level=info msg="StopPodSandbox for \"aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0\"" Mar 17 17:39:20.071419 containerd[1622]: time="2025-03-17T17:39:20.071330032Z" level=info msg="TearDown network for sandbox \"aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0\" successfully" Mar 17 17:39:20.071419 containerd[1622]: time="2025-03-17T17:39:20.071407278Z" level=info msg="StopPodSandbox for \"aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0\" returns successfully" Mar 17 17:39:20.073131 containerd[1622]: time="2025-03-17T17:39:20.073095703Z" level=info msg="StopPodSandbox for \"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\"" Mar 17 17:39:20.073207 containerd[1622]: time="2025-03-17T17:39:20.073196832Z" level=info msg="TearDown network for sandbox \"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\" successfully" Mar 17 17:39:20.073233 containerd[1622]: time="2025-03-17T17:39:20.073208673Z" level=info msg="StopPodSandbox for \"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\" returns successfully" Mar 17 17:39:20.073611 kubelet[3064]: I0317 17:39:20.073491 3064 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0b7f4a7e9b264a69e8d963ebb6cc97ba873c309408893c5857f0a90a6d3cfec" Mar 17 17:39:20.073904 containerd[1622]: 
time="2025-03-17T17:39:20.073762601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7t9bj,Uid:e34046eb-2ab6-4ffc-b664-99e06f707e68,Namespace:kube-system,Attempt:4,}" Mar 17 17:39:20.083220 containerd[1622]: time="2025-03-17T17:39:20.082489871Z" level=info msg="StopPodSandbox for \"d0b7f4a7e9b264a69e8d963ebb6cc97ba873c309408893c5857f0a90a6d3cfec\"" Mar 17 17:39:20.084467 containerd[1622]: time="2025-03-17T17:39:20.084383514Z" level=info msg="Ensure that sandbox d0b7f4a7e9b264a69e8d963ebb6cc97ba873c309408893c5857f0a90a6d3cfec in task-service has been cleanup successfully" Mar 17 17:39:20.085690 containerd[1622]: time="2025-03-17T17:39:20.085490129Z" level=info msg="TearDown network for sandbox \"d0b7f4a7e9b264a69e8d963ebb6cc97ba873c309408893c5857f0a90a6d3cfec\" successfully" Mar 17 17:39:20.085690 containerd[1622]: time="2025-03-17T17:39:20.085525612Z" level=info msg="StopPodSandbox for \"d0b7f4a7e9b264a69e8d963ebb6cc97ba873c309408893c5857f0a90a6d3cfec\" returns successfully" Mar 17 17:39:20.089260 containerd[1622]: time="2025-03-17T17:39:20.088952026Z" level=info msg="StopPodSandbox for \"d00034dc0b3d8c89e0f49a624a46b2c0ad634bac7e7b2ddceaca6d2627118bf6\"" Mar 17 17:39:20.089260 containerd[1622]: time="2025-03-17T17:39:20.089123001Z" level=info msg="TearDown network for sandbox \"d00034dc0b3d8c89e0f49a624a46b2c0ad634bac7e7b2ddceaca6d2627118bf6\" successfully" Mar 17 17:39:20.089260 containerd[1622]: time="2025-03-17T17:39:20.089148163Z" level=info msg="StopPodSandbox for \"d00034dc0b3d8c89e0f49a624a46b2c0ad634bac7e7b2ddceaca6d2627118bf6\" returns successfully" Mar 17 17:39:20.090129 containerd[1622]: time="2025-03-17T17:39:20.090101045Z" level=info msg="StopPodSandbox for \"97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea\"" Mar 17 17:39:20.090218 containerd[1622]: time="2025-03-17T17:39:20.090192213Z" level=info msg="TearDown network for sandbox \"97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea\" 
successfully" Mar 17 17:39:20.090218 containerd[1622]: time="2025-03-17T17:39:20.090217015Z" level=info msg="StopPodSandbox for \"97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea\" returns successfully" Mar 17 17:39:20.091632 containerd[1622]: time="2025-03-17T17:39:20.091530128Z" level=info msg="StopPodSandbox for \"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\"" Mar 17 17:39:20.094151 containerd[1622]: time="2025-03-17T17:39:20.091808112Z" level=info msg="TearDown network for sandbox \"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\" successfully" Mar 17 17:39:20.094151 containerd[1622]: time="2025-03-17T17:39:20.093305880Z" level=info msg="StopPodSandbox for \"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\" returns successfully" Mar 17 17:39:20.094313 kubelet[3064]: I0317 17:39:20.093810 3064 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b3a93177b6cba2c68547b57e74bd90246e4634161f97b128c565c552c4bc14c" Mar 17 17:39:20.094921 containerd[1622]: time="2025-03-17T17:39:20.094896697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bd6cf464-82tgf,Uid:aad6eeea-6774-44b9-911a-6a172c0b0cc4,Namespace:calico-apiserver,Attempt:4,}" Mar 17 17:39:20.097646 containerd[1622]: time="2025-03-17T17:39:20.097617691Z" level=info msg="StopPodSandbox for \"9b3a93177b6cba2c68547b57e74bd90246e4634161f97b128c565c552c4bc14c\"" Mar 17 17:39:20.099222 containerd[1622]: time="2025-03-17T17:39:20.098025566Z" level=info msg="Ensure that sandbox 9b3a93177b6cba2c68547b57e74bd90246e4634161f97b128c565c552c4bc14c in task-service has been cleanup successfully" Mar 17 17:39:20.100199 containerd[1622]: time="2025-03-17T17:39:20.100173671Z" level=info msg="TearDown network for sandbox \"9b3a93177b6cba2c68547b57e74bd90246e4634161f97b128c565c552c4bc14c\" successfully" Mar 17 17:39:20.100667 containerd[1622]: time="2025-03-17T17:39:20.100647751Z" level=info 
msg="StopPodSandbox for \"9b3a93177b6cba2c68547b57e74bd90246e4634161f97b128c565c552c4bc14c\" returns successfully" Mar 17 17:39:20.102181 containerd[1622]: time="2025-03-17T17:39:20.102101156Z" level=info msg="StopPodSandbox for \"7db76fc51ad3a900e10a0528dfa7316a504921fabbd370153e59fad0d0d2243b\"" Mar 17 17:39:20.102296 containerd[1622]: time="2025-03-17T17:39:20.102197285Z" level=info msg="TearDown network for sandbox \"7db76fc51ad3a900e10a0528dfa7316a504921fabbd370153e59fad0d0d2243b\" successfully" Mar 17 17:39:20.102296 containerd[1622]: time="2025-03-17T17:39:20.102207445Z" level=info msg="StopPodSandbox for \"7db76fc51ad3a900e10a0528dfa7316a504921fabbd370153e59fad0d0d2243b\" returns successfully" Mar 17 17:39:20.103120 containerd[1622]: time="2025-03-17T17:39:20.103093882Z" level=info msg="StopPodSandbox for \"06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70\"" Mar 17 17:39:20.103211 containerd[1622]: time="2025-03-17T17:39:20.103170128Z" level=info msg="TearDown network for sandbox \"06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70\" successfully" Mar 17 17:39:20.103211 containerd[1622]: time="2025-03-17T17:39:20.103180209Z" level=info msg="StopPodSandbox for \"06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70\" returns successfully" Mar 17 17:39:20.106512 containerd[1622]: time="2025-03-17T17:39:20.106482533Z" level=info msg="StopPodSandbox for \"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\"" Mar 17 17:39:20.108622 containerd[1622]: time="2025-03-17T17:39:20.108589194Z" level=info msg="TearDown network for sandbox \"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\" successfully" Mar 17 17:39:20.108883 kubelet[3064]: I0317 17:39:20.108861 3064 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66cb341207d2868e40a116f95f505345959475ef8df6bb6f65820c3169eb4270" Mar 17 17:39:20.110292 containerd[1622]: time="2025-03-17T17:39:20.108620117Z" 
level=info msg="StopPodSandbox for \"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\" returns successfully" Mar 17 17:39:20.111492 containerd[1622]: time="2025-03-17T17:39:20.109808659Z" level=info msg="StopPodSandbox for \"66cb341207d2868e40a116f95f505345959475ef8df6bb6f65820c3169eb4270\"" Mar 17 17:39:20.111492 containerd[1622]: time="2025-03-17T17:39:20.111485683Z" level=info msg="Ensure that sandbox 66cb341207d2868e40a116f95f505345959475ef8df6bb6f65820c3169eb4270 in task-service has been cleanup successfully" Mar 17 17:39:20.113304 containerd[1622]: time="2025-03-17T17:39:20.113265236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nwmf6,Uid:ec122e0b-85ab-491a-9aaa-6410b3df3402,Namespace:kube-system,Attempt:4,}" Mar 17 17:39:20.113782 containerd[1622]: time="2025-03-17T17:39:20.113589944Z" level=info msg="TearDown network for sandbox \"66cb341207d2868e40a116f95f505345959475ef8df6bb6f65820c3169eb4270\" successfully" Mar 17 17:39:20.113782 containerd[1622]: time="2025-03-17T17:39:20.113604585Z" level=info msg="StopPodSandbox for \"66cb341207d2868e40a116f95f505345959475ef8df6bb6f65820c3169eb4270\" returns successfully" Mar 17 17:39:20.114165 containerd[1622]: time="2025-03-17T17:39:20.114135791Z" level=info msg="StopPodSandbox for \"9e4cd7d542c36abb244b44b959f5e22e654f1839f1797e7e311442d2d3cd4f58\"" Mar 17 17:39:20.114493 containerd[1622]: time="2025-03-17T17:39:20.114467379Z" level=info msg="TearDown network for sandbox \"9e4cd7d542c36abb244b44b959f5e22e654f1839f1797e7e311442d2d3cd4f58\" successfully" Mar 17 17:39:20.114493 containerd[1622]: time="2025-03-17T17:39:20.114488101Z" level=info msg="StopPodSandbox for \"9e4cd7d542c36abb244b44b959f5e22e654f1839f1797e7e311442d2d3cd4f58\" returns successfully" Mar 17 17:39:20.115535 containerd[1622]: time="2025-03-17T17:39:20.115197282Z" level=info msg="StopPodSandbox for \"69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28\"" Mar 17 17:39:20.115535 
containerd[1622]: time="2025-03-17T17:39:20.115323933Z" level=info msg="TearDown network for sandbox \"69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28\" successfully" Mar 17 17:39:20.115535 containerd[1622]: time="2025-03-17T17:39:20.115334614Z" level=info msg="StopPodSandbox for \"69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28\" returns successfully" Mar 17 17:39:20.116964 containerd[1622]: time="2025-03-17T17:39:20.116017312Z" level=info msg="StopPodSandbox for \"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\"" Mar 17 17:39:20.117598 containerd[1622]: time="2025-03-17T17:39:20.117098605Z" level=info msg="TearDown network for sandbox \"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\" successfully" Mar 17 17:39:20.117875 containerd[1622]: time="2025-03-17T17:39:20.117849990Z" level=info msg="StopPodSandbox for \"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\" returns successfully" Mar 17 17:39:20.120310 containerd[1622]: time="2025-03-17T17:39:20.120274238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sjhlt,Uid:85a3e01e-db6f-4f8d-a16f-72e5c08e4d07,Namespace:calico-system,Attempt:4,}" Mar 17 17:39:20.281247 containerd[1622]: time="2025-03-17T17:39:20.281185508Z" level=error msg="Failed to destroy network for sandbox \"96ca9371daa82d17254d5f9d50de8c23d4dcfcccf023cefa4cb53f46f3296e2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:20.283677 containerd[1622]: time="2025-03-17T17:39:20.283626398Z" level=error msg="encountered an error cleaning up failed sandbox \"96ca9371daa82d17254d5f9d50de8c23d4dcfcccf023cefa4cb53f46f3296e2d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:20.283884 containerd[1622]: time="2025-03-17T17:39:20.283852858Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bd6cf464-lpb8f,Uid:734e536d-3518-4cef-9dfc-3553408c92a2,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"96ca9371daa82d17254d5f9d50de8c23d4dcfcccf023cefa4cb53f46f3296e2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:20.284765 kubelet[3064]: E0317 17:39:20.284725 3064 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96ca9371daa82d17254d5f9d50de8c23d4dcfcccf023cefa4cb53f46f3296e2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:20.284932 kubelet[3064]: E0317 17:39:20.284913 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96ca9371daa82d17254d5f9d50de8c23d4dcfcccf023cefa4cb53f46f3296e2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8bd6cf464-lpb8f" Mar 17 17:39:20.285026 kubelet[3064]: E0317 17:39:20.285010 3064 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96ca9371daa82d17254d5f9d50de8c23d4dcfcccf023cefa4cb53f46f3296e2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8bd6cf464-lpb8f" Mar 17 17:39:20.285268 kubelet[3064]: E0317 17:39:20.285210 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8bd6cf464-lpb8f_calico-apiserver(734e536d-3518-4cef-9dfc-3553408c92a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8bd6cf464-lpb8f_calico-apiserver(734e536d-3518-4cef-9dfc-3553408c92a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96ca9371daa82d17254d5f9d50de8c23d4dcfcccf023cefa4cb53f46f3296e2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8bd6cf464-lpb8f" podUID="734e536d-3518-4cef-9dfc-3553408c92a2" Mar 17 17:39:20.300509 containerd[1622]: time="2025-03-17T17:39:20.300433963Z" level=error msg="Failed to destroy network for sandbox \"8172236dfb9940f74e3cb0c0665cc689fbe7c2a7363437f64cd66872aed4ff16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:20.301058 containerd[1622]: time="2025-03-17T17:39:20.300982610Z" level=error msg="encountered an error cleaning up failed sandbox \"8172236dfb9940f74e3cb0c0665cc689fbe7c2a7363437f64cd66872aed4ff16\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:20.301180 containerd[1622]: time="2025-03-17T17:39:20.301157945Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7t9bj,Uid:e34046eb-2ab6-4ffc-b664-99e06f707e68,Namespace:kube-system,Attempt:4,} 
failed, error" error="failed to setup network for sandbox \"8172236dfb9940f74e3cb0c0665cc689fbe7c2a7363437f64cd66872aed4ff16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:20.301562 kubelet[3064]: E0317 17:39:20.301520 3064 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8172236dfb9940f74e3cb0c0665cc689fbe7c2a7363437f64cd66872aed4ff16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:20.303265 kubelet[3064]: E0317 17:39:20.301588 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8172236dfb9940f74e3cb0c0665cc689fbe7c2a7363437f64cd66872aed4ff16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7t9bj" Mar 17 17:39:20.303265 kubelet[3064]: E0317 17:39:20.301608 3064 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8172236dfb9940f74e3cb0c0665cc689fbe7c2a7363437f64cd66872aed4ff16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7t9bj" Mar 17 17:39:20.303265 kubelet[3064]: E0317 17:39:20.301652 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7t9bj_kube-system(e34046eb-2ab6-4ffc-b664-99e06f707e68)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7t9bj_kube-system(e34046eb-2ab6-4ffc-b664-99e06f707e68)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8172236dfb9940f74e3cb0c0665cc689fbe7c2a7363437f64cd66872aed4ff16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7t9bj" podUID="e34046eb-2ab6-4ffc-b664-99e06f707e68" Mar 17 17:39:20.363795 containerd[1622]: time="2025-03-17T17:39:20.363497023Z" level=error msg="Failed to destroy network for sandbox \"9984047d28e641247bc24e8439455388d5df3f9d1595a92d3400f6dfcd1cedf5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:20.364943 containerd[1622]: time="2025-03-17T17:39:20.364887222Z" level=error msg="encountered an error cleaning up failed sandbox \"9984047d28e641247bc24e8439455388d5df3f9d1595a92d3400f6dfcd1cedf5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:20.365043 containerd[1622]: time="2025-03-17T17:39:20.364969589Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b68c854f8-zw5m5,Uid:e0f85a4a-625a-454c-bfe1-41a08087ea5f,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"9984047d28e641247bc24e8439455388d5df3f9d1595a92d3400f6dfcd1cedf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:20.365363 kubelet[3064]: E0317 
17:39:20.365223 3064 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9984047d28e641247bc24e8439455388d5df3f9d1595a92d3400f6dfcd1cedf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:20.365363 kubelet[3064]: E0317 17:39:20.365283 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9984047d28e641247bc24e8439455388d5df3f9d1595a92d3400f6dfcd1cedf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b68c854f8-zw5m5" Mar 17 17:39:20.365363 kubelet[3064]: E0317 17:39:20.365304 3064 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9984047d28e641247bc24e8439455388d5df3f9d1595a92d3400f6dfcd1cedf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b68c854f8-zw5m5" Mar 17 17:39:20.365483 kubelet[3064]: E0317 17:39:20.365366 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b68c854f8-zw5m5_calico-system(e0f85a4a-625a-454c-bfe1-41a08087ea5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b68c854f8-zw5m5_calico-system(e0f85a4a-625a-454c-bfe1-41a08087ea5f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9984047d28e641247bc24e8439455388d5df3f9d1595a92d3400f6dfcd1cedf5\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b68c854f8-zw5m5" podUID="e0f85a4a-625a-454c-bfe1-41a08087ea5f" Mar 17 17:39:20.406418 containerd[1622]: time="2025-03-17T17:39:20.406240217Z" level=error msg="Failed to destroy network for sandbox \"93941444b8a53cdf3a5d63ff19ba9ec015c857c58144d0e6242ec8073955a726\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:20.406595 containerd[1622]: time="2025-03-17T17:39:20.406428073Z" level=error msg="Failed to destroy network for sandbox \"6e0433b6ac3463acd6e86d047a92827d52d3752a4b4e568f94103cdde6bb9f19\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:20.408055 containerd[1622]: time="2025-03-17T17:39:20.407593973Z" level=error msg="encountered an error cleaning up failed sandbox \"6e0433b6ac3463acd6e86d047a92827d52d3752a4b4e568f94103cdde6bb9f19\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:20.408055 containerd[1622]: time="2025-03-17T17:39:20.407668219Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nwmf6,Uid:ec122e0b-85ab-491a-9aaa-6410b3df3402,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"6e0433b6ac3463acd6e86d047a92827d52d3752a4b4e568f94103cdde6bb9f19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Mar 17 17:39:20.408055 containerd[1622]: time="2025-03-17T17:39:20.407845314Z" level=error msg="encountered an error cleaning up failed sandbox \"93941444b8a53cdf3a5d63ff19ba9ec015c857c58144d0e6242ec8073955a726\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:20.408055 containerd[1622]: time="2025-03-17T17:39:20.407876397Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sjhlt,Uid:85a3e01e-db6f-4f8d-a16f-72e5c08e4d07,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"93941444b8a53cdf3a5d63ff19ba9ec015c857c58144d0e6242ec8073955a726\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:20.409398 kubelet[3064]: E0317 17:39:20.408435 3064 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93941444b8a53cdf3a5d63ff19ba9ec015c857c58144d0e6242ec8073955a726\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:20.409398 kubelet[3064]: E0317 17:39:20.408495 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93941444b8a53cdf3a5d63ff19ba9ec015c857c58144d0e6242ec8073955a726\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sjhlt" Mar 17 
17:39:20.409398 kubelet[3064]: E0317 17:39:20.408519 3064 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93941444b8a53cdf3a5d63ff19ba9ec015c857c58144d0e6242ec8073955a726\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sjhlt" Mar 17 17:39:20.409398 kubelet[3064]: E0317 17:39:20.408441 3064 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e0433b6ac3463acd6e86d047a92827d52d3752a4b4e568f94103cdde6bb9f19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:20.409602 kubelet[3064]: E0317 17:39:20.408557 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sjhlt_calico-system(85a3e01e-db6f-4f8d-a16f-72e5c08e4d07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sjhlt_calico-system(85a3e01e-db6f-4f8d-a16f-72e5c08e4d07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93941444b8a53cdf3a5d63ff19ba9ec015c857c58144d0e6242ec8073955a726\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sjhlt" podUID="85a3e01e-db6f-4f8d-a16f-72e5c08e4d07" Mar 17 17:39:20.409602 kubelet[3064]: E0317 17:39:20.408579 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6e0433b6ac3463acd6e86d047a92827d52d3752a4b4e568f94103cdde6bb9f19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nwmf6" Mar 17 17:39:20.409602 kubelet[3064]: E0317 17:39:20.408598 3064 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e0433b6ac3463acd6e86d047a92827d52d3752a4b4e568f94103cdde6bb9f19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nwmf6" Mar 17 17:39:20.409707 kubelet[3064]: E0317 17:39:20.408631 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nwmf6_kube-system(ec122e0b-85ab-491a-9aaa-6410b3df3402)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nwmf6_kube-system(ec122e0b-85ab-491a-9aaa-6410b3df3402)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e0433b6ac3463acd6e86d047a92827d52d3752a4b4e568f94103cdde6bb9f19\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nwmf6" podUID="ec122e0b-85ab-491a-9aaa-6410b3df3402" Mar 17 17:39:20.415474 containerd[1622]: time="2025-03-17T17:39:20.415147662Z" level=error msg="Failed to destroy network for sandbox \"7ff87e5658ea4718a5230faecc19356caf06f791a1fd4911a1267031e5e684ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:20.415703 
containerd[1622]: time="2025-03-17T17:39:20.415675107Z" level=error msg="encountered an error cleaning up failed sandbox \"7ff87e5658ea4718a5230faecc19356caf06f791a1fd4911a1267031e5e684ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:20.415900 containerd[1622]: time="2025-03-17T17:39:20.415800838Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bd6cf464-82tgf,Uid:aad6eeea-6774-44b9-911a-6a172c0b0cc4,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"7ff87e5658ea4718a5230faecc19356caf06f791a1fd4911a1267031e5e684ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:20.416283 kubelet[3064]: E0317 17:39:20.416024 3064 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ff87e5658ea4718a5230faecc19356caf06f791a1fd4911a1267031e5e684ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:39:20.416283 kubelet[3064]: E0317 17:39:20.416115 3064 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ff87e5658ea4718a5230faecc19356caf06f791a1fd4911a1267031e5e684ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8bd6cf464-82tgf" Mar 17 17:39:20.416283 kubelet[3064]: E0317 17:39:20.416135 3064 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ff87e5658ea4718a5230faecc19356caf06f791a1fd4911a1267031e5e684ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8bd6cf464-82tgf" Mar 17 17:39:20.416456 kubelet[3064]: E0317 17:39:20.416182 3064 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8bd6cf464-82tgf_calico-apiserver(aad6eeea-6774-44b9-911a-6a172c0b0cc4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8bd6cf464-82tgf_calico-apiserver(aad6eeea-6774-44b9-911a-6a172c0b0cc4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ff87e5658ea4718a5230faecc19356caf06f791a1fd4911a1267031e5e684ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8bd6cf464-82tgf" podUID="aad6eeea-6774-44b9-911a-6a172c0b0cc4" Mar 17 17:39:20.902361 containerd[1622]: time="2025-03-17T17:39:20.900368926Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:20.903621 containerd[1622]: time="2025-03-17T17:39:20.903555520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=137086024" Mar 17 17:39:20.904588 containerd[1622]: time="2025-03-17T17:39:20.904541325Z" level=info msg="ImageCreate event name:\"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:20.905459 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-9984047d28e641247bc24e8439455388d5df3f9d1595a92d3400f6dfcd1cedf5-shm.mount: Deactivated successfully. Mar 17 17:39:20.905606 systemd[1]: run-netns-cni\x2d1bca57ab\x2d4bc9\x2ddc7a\x2d609d\x2d563c47a975e7.mount: Deactivated successfully. Mar 17 17:39:20.905692 systemd[1]: run-netns-cni\x2dbdb8a8b6\x2db3f8\x2dd749\x2d39b8\x2da768738079d7.mount: Deactivated successfully. Mar 17 17:39:20.905761 systemd[1]: run-netns-cni\x2dc158a7d8\x2db141\x2d92f8\x2d6232\x2d3d40c35dac10.mount: Deactivated successfully. Mar 17 17:39:20.905947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3083371097.mount: Deactivated successfully. Mar 17 17:39:20.908948 containerd[1622]: time="2025-03-17T17:39:20.908662679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:20.910254 containerd[1622]: time="2025-03-17T17:39:20.910211932Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"137085886\" in 5.046249188s" Mar 17 17:39:20.910721 containerd[1622]: time="2025-03-17T17:39:20.910352464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\"" Mar 17 17:39:20.929614 containerd[1622]: time="2025-03-17T17:39:20.929569316Z" level=info msg="CreateContainer within sandbox \"14863ae275185ae4524d1d4462de71dcf8c77a58800371620266e34a07f925d5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 17 17:39:20.946394 containerd[1622]: time="2025-03-17T17:39:20.945569491Z" level=info 
msg="CreateContainer within sandbox \"14863ae275185ae4524d1d4462de71dcf8c77a58800371620266e34a07f925d5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b6b4d4870ca888b538582edde1a1adbce7084bf5537d09405df34c7aa67ae0a1\"" Mar 17 17:39:20.949854 containerd[1622]: time="2025-03-17T17:39:20.949792734Z" level=info msg="StartContainer for \"b6b4d4870ca888b538582edde1a1adbce7084bf5537d09405df34c7aa67ae0a1\"" Mar 17 17:39:21.026530 containerd[1622]: time="2025-03-17T17:39:21.026476539Z" level=info msg="StartContainer for \"b6b4d4870ca888b538582edde1a1adbce7084bf5537d09405df34c7aa67ae0a1\" returns successfully" Mar 17 17:39:21.118498 kubelet[3064]: I0317 17:39:21.118465 3064 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93941444b8a53cdf3a5d63ff19ba9ec015c857c58144d0e6242ec8073955a726" Mar 17 17:39:21.122987 containerd[1622]: time="2025-03-17T17:39:21.122522336Z" level=info msg="StopPodSandbox for \"93941444b8a53cdf3a5d63ff19ba9ec015c857c58144d0e6242ec8073955a726\"" Mar 17 17:39:21.122987 containerd[1622]: time="2025-03-17T17:39:21.122790639Z" level=info msg="Ensure that sandbox 93941444b8a53cdf3a5d63ff19ba9ec015c857c58144d0e6242ec8073955a726 in task-service has been cleanup successfully" Mar 17 17:39:21.123519 containerd[1622]: time="2025-03-17T17:39:21.123475737Z" level=info msg="TearDown network for sandbox \"93941444b8a53cdf3a5d63ff19ba9ec015c857c58144d0e6242ec8073955a726\" successfully" Mar 17 17:39:21.123719 containerd[1622]: time="2025-03-17T17:39:21.123618549Z" level=info msg="StopPodSandbox for \"93941444b8a53cdf3a5d63ff19ba9ec015c857c58144d0e6242ec8073955a726\" returns successfully" Mar 17 17:39:21.128590 containerd[1622]: time="2025-03-17T17:39:21.128466481Z" level=info msg="StopPodSandbox for \"66cb341207d2868e40a116f95f505345959475ef8df6bb6f65820c3169eb4270\"" Mar 17 17:39:21.129093 containerd[1622]: time="2025-03-17T17:39:21.128867715Z" level=info msg="TearDown network for sandbox 
\"66cb341207d2868e40a116f95f505345959475ef8df6bb6f65820c3169eb4270\" successfully" Mar 17 17:39:21.129093 containerd[1622]: time="2025-03-17T17:39:21.128899678Z" level=info msg="StopPodSandbox for \"66cb341207d2868e40a116f95f505345959475ef8df6bb6f65820c3169eb4270\" returns successfully" Mar 17 17:39:21.133575 containerd[1622]: time="2025-03-17T17:39:21.132968303Z" level=info msg="StopPodSandbox for \"9e4cd7d542c36abb244b44b959f5e22e654f1839f1797e7e311442d2d3cd4f58\"" Mar 17 17:39:21.135737 containerd[1622]: time="2025-03-17T17:39:21.134016432Z" level=info msg="TearDown network for sandbox \"9e4cd7d542c36abb244b44b959f5e22e654f1839f1797e7e311442d2d3cd4f58\" successfully" Mar 17 17:39:21.135737 containerd[1622]: time="2025-03-17T17:39:21.134057916Z" level=info msg="StopPodSandbox for \"9e4cd7d542c36abb244b44b959f5e22e654f1839f1797e7e311442d2d3cd4f58\" returns successfully" Mar 17 17:39:21.136428 containerd[1622]: time="2025-03-17T17:39:21.136104050Z" level=info msg="StopPodSandbox for \"69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28\"" Mar 17 17:39:21.136428 containerd[1622]: time="2025-03-17T17:39:21.136328869Z" level=info msg="TearDown network for sandbox \"69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28\" successfully" Mar 17 17:39:21.137392 containerd[1622]: time="2025-03-17T17:39:21.136758265Z" level=info msg="StopPodSandbox for \"69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28\" returns successfully" Mar 17 17:39:21.137392 containerd[1622]: time="2025-03-17T17:39:21.137050330Z" level=info msg="StopPodSandbox for \"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\"" Mar 17 17:39:21.137392 containerd[1622]: time="2025-03-17T17:39:21.137131537Z" level=info msg="TearDown network for sandbox \"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\" successfully" Mar 17 17:39:21.137392 containerd[1622]: time="2025-03-17T17:39:21.137189102Z" level=info msg="StopPodSandbox for 
\"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\" returns successfully" Mar 17 17:39:21.138523 containerd[1622]: time="2025-03-17T17:39:21.138218869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sjhlt,Uid:85a3e01e-db6f-4f8d-a16f-72e5c08e4d07,Namespace:calico-system,Attempt:5,}" Mar 17 17:39:21.152487 kubelet[3064]: I0317 17:39:21.152273 3064 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xkvw6" podStartSLOduration=1.481335147 podStartE2EDuration="15.152249221s" podCreationTimestamp="2025-03-17 17:39:06 +0000 UTC" firstStartedPulling="2025-03-17 17:39:07.24013477 +0000 UTC m=+23.650436720" lastFinishedPulling="2025-03-17 17:39:20.911048844 +0000 UTC m=+37.321350794" observedRunningTime="2025-03-17 17:39:21.145460804 +0000 UTC m=+37.555762754" watchObservedRunningTime="2025-03-17 17:39:21.152249221 +0000 UTC m=+37.562551171" Mar 17 17:39:21.165863 kubelet[3064]: I0317 17:39:21.162677 3064 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9984047d28e641247bc24e8439455388d5df3f9d1595a92d3400f6dfcd1cedf5" Mar 17 17:39:21.171446 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 17 17:39:21.171549 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Mar 17 17:39:21.192102 containerd[1622]: time="2025-03-17T17:39:21.190790654Z" level=info msg="StopPodSandbox for \"9984047d28e641247bc24e8439455388d5df3f9d1595a92d3400f6dfcd1cedf5\"" Mar 17 17:39:21.192102 containerd[1622]: time="2025-03-17T17:39:21.190965189Z" level=info msg="Ensure that sandbox 9984047d28e641247bc24e8439455388d5df3f9d1595a92d3400f6dfcd1cedf5 in task-service has been cleanup successfully" Mar 17 17:39:21.208335 containerd[1622]: time="2025-03-17T17:39:21.204665353Z" level=info msg="TearDown network for sandbox \"9984047d28e641247bc24e8439455388d5df3f9d1595a92d3400f6dfcd1cedf5\" successfully" Mar 17 17:39:21.208335 containerd[1622]: time="2025-03-17T17:39:21.204695595Z" level=info msg="StopPodSandbox for \"9984047d28e641247bc24e8439455388d5df3f9d1595a92d3400f6dfcd1cedf5\" returns successfully" Mar 17 17:39:21.208335 containerd[1622]: time="2025-03-17T17:39:21.206636720Z" level=info msg="StopPodSandbox for \"44b779c8beee74c65914b58df2f4cb80cd7c544d1f66306793961b4d2a19a18b\"" Mar 17 17:39:21.213801 containerd[1622]: time="2025-03-17T17:39:21.213764285Z" level=info msg="TearDown network for sandbox \"44b779c8beee74c65914b58df2f4cb80cd7c544d1f66306793961b4d2a19a18b\" successfully" Mar 17 17:39:21.217374 containerd[1622]: time="2025-03-17T17:39:21.216634209Z" level=info msg="StopPodSandbox for \"44b779c8beee74c65914b58df2f4cb80cd7c544d1f66306793961b4d2a19a18b\" returns successfully" Mar 17 17:39:21.219107 containerd[1622]: time="2025-03-17T17:39:21.219008651Z" level=info msg="StopPodSandbox for \"e538ffa7ddd1f87cb357dc229aa52a9bb822c1d9c6fd442bec82a81448f398fa\"" Mar 17 17:39:21.220045 containerd[1622]: time="2025-03-17T17:39:21.219440927Z" level=info msg="TearDown network for sandbox \"e538ffa7ddd1f87cb357dc229aa52a9bb822c1d9c6fd442bec82a81448f398fa\" successfully" Mar 17 17:39:21.220045 containerd[1622]: time="2025-03-17T17:39:21.219472410Z" level=info msg="StopPodSandbox for \"e538ffa7ddd1f87cb357dc229aa52a9bb822c1d9c6fd442bec82a81448f398fa\" 
returns successfully" Mar 17 17:39:21.222866 containerd[1622]: time="2025-03-17T17:39:21.220922573Z" level=info msg="StopPodSandbox for \"e630b8a164e4174c51d673bdf89de930bf44c4afd0b5098f340af1c3a0fb1923\"" Mar 17 17:39:21.222866 containerd[1622]: time="2025-03-17T17:39:21.221513063Z" level=info msg="StopPodSandbox for \"96ca9371daa82d17254d5f9d50de8c23d4dcfcccf023cefa4cb53f46f3296e2d\"" Mar 17 17:39:21.222866 containerd[1622]: time="2025-03-17T17:39:21.222385377Z" level=info msg="Ensure that sandbox 96ca9371daa82d17254d5f9d50de8c23d4dcfcccf023cefa4cb53f46f3296e2d in task-service has been cleanup successfully" Mar 17 17:39:21.222950 kubelet[3064]: I0317 17:39:21.220305 3064 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96ca9371daa82d17254d5f9d50de8c23d4dcfcccf023cefa4cb53f46f3296e2d" Mar 17 17:39:21.224314 containerd[1622]: time="2025-03-17T17:39:21.223055394Z" level=info msg="TearDown network for sandbox \"e630b8a164e4174c51d673bdf89de930bf44c4afd0b5098f340af1c3a0fb1923\" successfully" Mar 17 17:39:21.224314 containerd[1622]: time="2025-03-17T17:39:21.223078716Z" level=info msg="StopPodSandbox for \"e630b8a164e4174c51d673bdf89de930bf44c4afd0b5098f340af1c3a0fb1923\" returns successfully" Mar 17 17:39:21.225649 containerd[1622]: time="2025-03-17T17:39:21.225469999Z" level=info msg="StopPodSandbox for \"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\"" Mar 17 17:39:21.227037 containerd[1622]: time="2025-03-17T17:39:21.226554692Z" level=info msg="TearDown network for sandbox \"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\" successfully" Mar 17 17:39:21.227773 containerd[1622]: time="2025-03-17T17:39:21.227455768Z" level=info msg="StopPodSandbox for \"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\" returns successfully" Mar 17 17:39:21.227773 containerd[1622]: time="2025-03-17T17:39:21.226973967Z" level=info msg="TearDown network for sandbox 
\"96ca9371daa82d17254d5f9d50de8c23d4dcfcccf023cefa4cb53f46f3296e2d\" successfully" Mar 17 17:39:21.227773 containerd[1622]: time="2025-03-17T17:39:21.227614782Z" level=info msg="StopPodSandbox for \"96ca9371daa82d17254d5f9d50de8c23d4dcfcccf023cefa4cb53f46f3296e2d\" returns successfully" Mar 17 17:39:21.228698 containerd[1622]: time="2025-03-17T17:39:21.228554061Z" level=info msg="StopPodSandbox for \"fa28a56aa566b6cc3b377df4c0f542a82e7251867b1ce992a022a43e5ff34166\"" Mar 17 17:39:21.228945 containerd[1622]: time="2025-03-17T17:39:21.228721396Z" level=info msg="TearDown network for sandbox \"fa28a56aa566b6cc3b377df4c0f542a82e7251867b1ce992a022a43e5ff34166\" successfully" Mar 17 17:39:21.228945 containerd[1622]: time="2025-03-17T17:39:21.228753838Z" level=info msg="StopPodSandbox for \"fa28a56aa566b6cc3b377df4c0f542a82e7251867b1ce992a022a43e5ff34166\" returns successfully" Mar 17 17:39:21.229650 containerd[1622]: time="2025-03-17T17:39:21.229043543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b68c854f8-zw5m5,Uid:e0f85a4a-625a-454c-bfe1-41a08087ea5f,Namespace:calico-system,Attempt:5,}" Mar 17 17:39:21.230807 containerd[1622]: time="2025-03-17T17:39:21.230049068Z" level=info msg="StopPodSandbox for \"67420eb2bcee611ccb5de8fd0908fb4044616f286ac4383b07eb1b29b84cb4b0\"" Mar 17 17:39:21.230807 containerd[1622]: time="2025-03-17T17:39:21.230229604Z" level=info msg="TearDown network for sandbox \"67420eb2bcee611ccb5de8fd0908fb4044616f286ac4383b07eb1b29b84cb4b0\" successfully" Mar 17 17:39:21.230807 containerd[1622]: time="2025-03-17T17:39:21.230245685Z" level=info msg="StopPodSandbox for \"67420eb2bcee611ccb5de8fd0908fb4044616f286ac4383b07eb1b29b84cb4b0\" returns successfully" Mar 17 17:39:21.231685 containerd[1622]: time="2025-03-17T17:39:21.230980067Z" level=info msg="StopPodSandbox for \"e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231\"" Mar 17 17:39:21.231685 containerd[1622]: time="2025-03-17T17:39:21.231087757Z" 
level=info msg="TearDown network for sandbox \"e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231\" successfully" Mar 17 17:39:21.231685 containerd[1622]: time="2025-03-17T17:39:21.231114599Z" level=info msg="StopPodSandbox for \"e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231\" returns successfully" Mar 17 17:39:21.231685 containerd[1622]: time="2025-03-17T17:39:21.231814698Z" level=info msg="StopPodSandbox for \"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\"" Mar 17 17:39:21.231685 containerd[1622]: time="2025-03-17T17:39:21.231992393Z" level=info msg="TearDown network for sandbox \"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\" successfully" Mar 17 17:39:21.233092 containerd[1622]: time="2025-03-17T17:39:21.232423750Z" level=info msg="StopPodSandbox for \"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\" returns successfully" Mar 17 17:39:21.234990 kubelet[3064]: I0317 17:39:21.233737 3064 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8172236dfb9940f74e3cb0c0665cc689fbe7c2a7363437f64cd66872aed4ff16" Mar 17 17:39:21.237042 containerd[1622]: time="2025-03-17T17:39:21.235617741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bd6cf464-lpb8f,Uid:734e536d-3518-4cef-9dfc-3553408c92a2,Namespace:calico-apiserver,Attempt:5,}" Mar 17 17:39:21.238288 containerd[1622]: time="2025-03-17T17:39:21.237479619Z" level=info msg="StopPodSandbox for \"8172236dfb9940f74e3cb0c0665cc689fbe7c2a7363437f64cd66872aed4ff16\"" Mar 17 17:39:21.239249 containerd[1622]: time="2025-03-17T17:39:21.238472944Z" level=info msg="Ensure that sandbox 8172236dfb9940f74e3cb0c0665cc689fbe7c2a7363437f64cd66872aed4ff16 in task-service has been cleanup successfully" Mar 17 17:39:21.239249 containerd[1622]: time="2025-03-17T17:39:21.238683922Z" level=info msg="TearDown network for sandbox \"8172236dfb9940f74e3cb0c0665cc689fbe7c2a7363437f64cd66872aed4ff16\" 
successfully" Mar 17 17:39:21.239249 containerd[1622]: time="2025-03-17T17:39:21.238700683Z" level=info msg="StopPodSandbox for \"8172236dfb9940f74e3cb0c0665cc689fbe7c2a7363437f64cd66872aed4ff16\" returns successfully" Mar 17 17:39:21.240230 containerd[1622]: time="2025-03-17T17:39:21.240194690Z" level=info msg="StopPodSandbox for \"d61299a9a4422f1de1d53d3f24a9a00ed17c97cd8075070e69188e4dabf27991\"" Mar 17 17:39:21.240299 containerd[1622]: time="2025-03-17T17:39:21.240290258Z" level=info msg="TearDown network for sandbox \"d61299a9a4422f1de1d53d3f24a9a00ed17c97cd8075070e69188e4dabf27991\" successfully" Mar 17 17:39:21.240324 containerd[1622]: time="2025-03-17T17:39:21.240301299Z" level=info msg="StopPodSandbox for \"d61299a9a4422f1de1d53d3f24a9a00ed17c97cd8075070e69188e4dabf27991\" returns successfully" Mar 17 17:39:21.241724 containerd[1622]: time="2025-03-17T17:39:21.240903510Z" level=info msg="StopPodSandbox for \"3b1ec915bfa2cdd945b055eedb01d7a83209b69fc70878e7078e5b14d56672e1\"" Mar 17 17:39:21.241724 containerd[1622]: time="2025-03-17T17:39:21.240982357Z" level=info msg="TearDown network for sandbox \"3b1ec915bfa2cdd945b055eedb01d7a83209b69fc70878e7078e5b14d56672e1\" successfully" Mar 17 17:39:21.241724 containerd[1622]: time="2025-03-17T17:39:21.240992158Z" level=info msg="StopPodSandbox for \"3b1ec915bfa2cdd945b055eedb01d7a83209b69fc70878e7078e5b14d56672e1\" returns successfully" Mar 17 17:39:21.241860 kubelet[3064]: I0317 17:39:21.241200 3064 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e0433b6ac3463acd6e86d047a92827d52d3752a4b4e568f94103cdde6bb9f19" Mar 17 17:39:21.241984 containerd[1622]: time="2025-03-17T17:39:21.241947039Z" level=info msg="StopPodSandbox for \"6e0433b6ac3463acd6e86d047a92827d52d3752a4b4e568f94103cdde6bb9f19\"" Mar 17 17:39:21.242095 containerd[1622]: time="2025-03-17T17:39:21.242075090Z" level=info msg="StopPodSandbox for \"aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0\"" Mar 
17 17:39:21.242504 containerd[1622]: time="2025-03-17T17:39:21.242482964Z" level=info msg="TearDown network for sandbox \"aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0\" successfully" Mar 17 17:39:21.242617 containerd[1622]: time="2025-03-17T17:39:21.242571052Z" level=info msg="StopPodSandbox for \"aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0\" returns successfully" Mar 17 17:39:21.242788 containerd[1622]: time="2025-03-17T17:39:21.242171458Z" level=info msg="Ensure that sandbox 6e0433b6ac3463acd6e86d047a92827d52d3752a4b4e568f94103cdde6bb9f19 in task-service has been cleanup successfully" Mar 17 17:39:21.243654 containerd[1622]: time="2025-03-17T17:39:21.243562416Z" level=info msg="StopPodSandbox for \"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\"" Mar 17 17:39:21.243654 containerd[1622]: time="2025-03-17T17:39:21.243638662Z" level=info msg="TearDown network for sandbox \"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\" successfully" Mar 17 17:39:21.243654 containerd[1622]: time="2025-03-17T17:39:21.243647583Z" level=info msg="StopPodSandbox for \"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\" returns successfully" Mar 17 17:39:21.245263 containerd[1622]: time="2025-03-17T17:39:21.244624906Z" level=info msg="TearDown network for sandbox \"6e0433b6ac3463acd6e86d047a92827d52d3752a4b4e568f94103cdde6bb9f19\" successfully" Mar 17 17:39:21.245263 containerd[1622]: time="2025-03-17T17:39:21.244651268Z" level=info msg="StopPodSandbox for \"6e0433b6ac3463acd6e86d047a92827d52d3752a4b4e568f94103cdde6bb9f19\" returns successfully" Mar 17 17:39:21.245263 containerd[1622]: time="2025-03-17T17:39:21.244801561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7t9bj,Uid:e34046eb-2ab6-4ffc-b664-99e06f707e68,Namespace:kube-system,Attempt:5,}" Mar 17 17:39:21.247387 containerd[1622]: time="2025-03-17T17:39:21.246083230Z" level=info msg="StopPodSandbox for 
\"9b3a93177b6cba2c68547b57e74bd90246e4634161f97b128c565c552c4bc14c\"" Mar 17 17:39:21.248679 containerd[1622]: time="2025-03-17T17:39:21.246655279Z" level=info msg="TearDown network for sandbox \"9b3a93177b6cba2c68547b57e74bd90246e4634161f97b128c565c552c4bc14c\" successfully" Mar 17 17:39:21.248679 containerd[1622]: time="2025-03-17T17:39:21.248618925Z" level=info msg="StopPodSandbox for \"9b3a93177b6cba2c68547b57e74bd90246e4634161f97b128c565c552c4bc14c\" returns successfully" Mar 17 17:39:21.253375 containerd[1622]: time="2025-03-17T17:39:21.250308549Z" level=info msg="StopPodSandbox for \"7db76fc51ad3a900e10a0528dfa7316a504921fabbd370153e59fad0d0d2243b\"" Mar 17 17:39:21.253375 containerd[1622]: time="2025-03-17T17:39:21.250428719Z" level=info msg="TearDown network for sandbox \"7db76fc51ad3a900e10a0528dfa7316a504921fabbd370153e59fad0d0d2243b\" successfully" Mar 17 17:39:21.253375 containerd[1622]: time="2025-03-17T17:39:21.250440760Z" level=info msg="StopPodSandbox for \"7db76fc51ad3a900e10a0528dfa7316a504921fabbd370153e59fad0d0d2243b\" returns successfully" Mar 17 17:39:21.255585 kubelet[3064]: I0317 17:39:21.254896 3064 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ff87e5658ea4718a5230faecc19356caf06f791a1fd4911a1267031e5e684ca" Mar 17 17:39:21.255785 containerd[1622]: time="2025-03-17T17:39:21.255181643Z" level=info msg="StopPodSandbox for \"06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70\"" Mar 17 17:39:21.258022 containerd[1622]: time="2025-03-17T17:39:21.255301293Z" level=info msg="TearDown network for sandbox \"06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70\" successfully" Mar 17 17:39:21.258022 containerd[1622]: time="2025-03-17T17:39:21.255897904Z" level=info msg="StopPodSandbox for \"06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70\" returns successfully" Mar 17 17:39:21.258022 containerd[1622]: time="2025-03-17T17:39:21.255448305Z" level=info msg="StopPodSandbox 
for \"7ff87e5658ea4718a5230faecc19356caf06f791a1fd4911a1267031e5e684ca\"" Mar 17 17:39:21.258022 containerd[1622]: time="2025-03-17T17:39:21.256118802Z" level=info msg="Ensure that sandbox 7ff87e5658ea4718a5230faecc19356caf06f791a1fd4911a1267031e5e684ca in task-service has been cleanup successfully" Mar 17 17:39:21.258022 containerd[1622]: time="2025-03-17T17:39:21.256695091Z" level=info msg="StopPodSandbox for \"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\"" Mar 17 17:39:21.258022 containerd[1622]: time="2025-03-17T17:39:21.257076724Z" level=info msg="TearDown network for sandbox \"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\" successfully" Mar 17 17:39:21.258022 containerd[1622]: time="2025-03-17T17:39:21.257092085Z" level=info msg="StopPodSandbox for \"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\" returns successfully" Mar 17 17:39:21.259401 containerd[1622]: time="2025-03-17T17:39:21.258302548Z" level=info msg="TearDown network for sandbox \"7ff87e5658ea4718a5230faecc19356caf06f791a1fd4911a1267031e5e684ca\" successfully" Mar 17 17:39:21.259401 containerd[1622]: time="2025-03-17T17:39:21.258327390Z" level=info msg="StopPodSandbox for \"7ff87e5658ea4718a5230faecc19356caf06f791a1fd4911a1267031e5e684ca\" returns successfully" Mar 17 17:39:21.259401 containerd[1622]: time="2025-03-17T17:39:21.259222546Z" level=info msg="StopPodSandbox for \"d0b7f4a7e9b264a69e8d963ebb6cc97ba873c309408893c5857f0a90a6d3cfec\"" Mar 17 17:39:21.259401 containerd[1622]: time="2025-03-17T17:39:21.259305793Z" level=info msg="TearDown network for sandbox \"d0b7f4a7e9b264a69e8d963ebb6cc97ba873c309408893c5857f0a90a6d3cfec\" successfully" Mar 17 17:39:21.259401 containerd[1622]: time="2025-03-17T17:39:21.259315394Z" level=info msg="StopPodSandbox for \"d0b7f4a7e9b264a69e8d963ebb6cc97ba873c309408893c5857f0a90a6d3cfec\" returns successfully" Mar 17 17:39:21.260815 containerd[1622]: time="2025-03-17T17:39:21.260575141Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nwmf6,Uid:ec122e0b-85ab-491a-9aaa-6410b3df3402,Namespace:kube-system,Attempt:5,}" Mar 17 17:39:21.271724 containerd[1622]: time="2025-03-17T17:39:21.271679764Z" level=info msg="StopPodSandbox for \"d00034dc0b3d8c89e0f49a624a46b2c0ad634bac7e7b2ddceaca6d2627118bf6\"" Mar 17 17:39:21.271866 containerd[1622]: time="2025-03-17T17:39:21.271819576Z" level=info msg="TearDown network for sandbox \"d00034dc0b3d8c89e0f49a624a46b2c0ad634bac7e7b2ddceaca6d2627118bf6\" successfully" Mar 17 17:39:21.271866 containerd[1622]: time="2025-03-17T17:39:21.271846258Z" level=info msg="StopPodSandbox for \"d00034dc0b3d8c89e0f49a624a46b2c0ad634bac7e7b2ddceaca6d2627118bf6\" returns successfully" Mar 17 17:39:21.280540 containerd[1622]: time="2025-03-17T17:39:21.279543112Z" level=info msg="StopPodSandbox for \"97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea\"" Mar 17 17:39:21.280540 containerd[1622]: time="2025-03-17T17:39:21.279896542Z" level=info msg="TearDown network for sandbox \"97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea\" successfully" Mar 17 17:39:21.280540 containerd[1622]: time="2025-03-17T17:39:21.280110520Z" level=info msg="StopPodSandbox for \"97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea\" returns successfully" Mar 17 17:39:21.282255 containerd[1622]: time="2025-03-17T17:39:21.281189012Z" level=info msg="StopPodSandbox for \"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\"" Mar 17 17:39:21.282255 containerd[1622]: time="2025-03-17T17:39:21.282127891Z" level=info msg="TearDown network for sandbox \"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\" successfully" Mar 17 17:39:21.285533 containerd[1622]: time="2025-03-17T17:39:21.282286785Z" level=info msg="StopPodSandbox for \"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\" returns successfully" Mar 17 17:39:21.285533 containerd[1622]: 
time="2025-03-17T17:39:21.284849442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bd6cf464-82tgf,Uid:aad6eeea-6774-44b9-911a-6a172c0b0cc4,Namespace:calico-apiserver,Attempt:5,}" Mar 17 17:39:21.852286 systemd-networkd[1244]: cali49caa81318a: Link UP Mar 17 17:39:21.852521 systemd-networkd[1244]: cali49caa81318a: Gained carrier Mar 17 17:39:21.930839 containerd[1622]: 2025-03-17 17:39:21.319 [INFO][4745] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:39:21.930839 containerd[1622]: 2025-03-17 17:39:21.409 [INFO][4745] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152--2--2--4--e17a7af1b1-k8s-csi--node--driver--sjhlt-eth0 csi-node-driver- calico-system 85a3e01e-db6f-4f8d-a16f-72e5c08e4d07 643 0 2025-03-17 17:39:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:69ddf5d45d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4152-2-2-4-e17a7af1b1 csi-node-driver-sjhlt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali49caa81318a [] []}} ContainerID="a764ec7414fefae9b5f7685d0cd6f7844ff466cc95031e4fe2bda29d746b829c" Namespace="calico-system" Pod="csi-node-driver-sjhlt" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-csi--node--driver--sjhlt-" Mar 17 17:39:21.930839 containerd[1622]: 2025-03-17 17:39:21.410 [INFO][4745] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a764ec7414fefae9b5f7685d0cd6f7844ff466cc95031e4fe2bda29d746b829c" Namespace="calico-system" Pod="csi-node-driver-sjhlt" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-csi--node--driver--sjhlt-eth0" Mar 17 17:39:21.930839 containerd[1622]: 2025-03-17 17:39:21.720 [INFO][4820] ipam/ipam_plugin.go 225: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="a764ec7414fefae9b5f7685d0cd6f7844ff466cc95031e4fe2bda29d746b829c" HandleID="k8s-pod-network.a764ec7414fefae9b5f7685d0cd6f7844ff466cc95031e4fe2bda29d746b829c" Workload="ci--4152--2--2--4--e17a7af1b1-k8s-csi--node--driver--sjhlt-eth0" Mar 17 17:39:21.930839 containerd[1622]: 2025-03-17 17:39:21.749 [INFO][4820] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a764ec7414fefae9b5f7685d0cd6f7844ff466cc95031e4fe2bda29d746b829c" HandleID="k8s-pod-network.a764ec7414fefae9b5f7685d0cd6f7844ff466cc95031e4fe2bda29d746b829c" Workload="ci--4152--2--2--4--e17a7af1b1-k8s-csi--node--driver--sjhlt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000378a80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4152-2-2-4-e17a7af1b1", "pod":"csi-node-driver-sjhlt", "timestamp":"2025-03-17 17:39:21.719958596 +0000 UTC"}, Hostname:"ci-4152-2-2-4-e17a7af1b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:39:21.930839 containerd[1622]: 2025-03-17 17:39:21.750 [INFO][4820] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:39:21.930839 containerd[1622]: 2025-03-17 17:39:21.750 [INFO][4820] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:39:21.930839 containerd[1622]: 2025-03-17 17:39:21.750 [INFO][4820] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152-2-2-4-e17a7af1b1' Mar 17 17:39:21.930839 containerd[1622]: 2025-03-17 17:39:21.756 [INFO][4820] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a764ec7414fefae9b5f7685d0cd6f7844ff466cc95031e4fe2bda29d746b829c" host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:21.930839 containerd[1622]: 2025-03-17 17:39:21.772 [INFO][4820] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:21.930839 containerd[1622]: 2025-03-17 17:39:21.788 [INFO][4820] ipam/ipam.go 489: Trying affinity for 192.168.79.0/26 host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:21.930839 containerd[1622]: 2025-03-17 17:39:21.793 [INFO][4820] ipam/ipam.go 155: Attempting to load block cidr=192.168.79.0/26 host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:21.930839 containerd[1622]: 2025-03-17 17:39:21.799 [INFO][4820] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.79.0/26 host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:21.930839 containerd[1622]: 2025-03-17 17:39:21.799 [INFO][4820] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.79.0/26 handle="k8s-pod-network.a764ec7414fefae9b5f7685d0cd6f7844ff466cc95031e4fe2bda29d746b829c" host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:21.930839 containerd[1622]: 2025-03-17 17:39:21.804 [INFO][4820] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a764ec7414fefae9b5f7685d0cd6f7844ff466cc95031e4fe2bda29d746b829c Mar 17 17:39:21.930839 containerd[1622]: 2025-03-17 17:39:21.815 [INFO][4820] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.79.0/26 handle="k8s-pod-network.a764ec7414fefae9b5f7685d0cd6f7844ff466cc95031e4fe2bda29d746b829c" host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:21.930839 containerd[1622]: 2025-03-17 17:39:21.830 [INFO][4820] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.79.1/26] block=192.168.79.0/26 handle="k8s-pod-network.a764ec7414fefae9b5f7685d0cd6f7844ff466cc95031e4fe2bda29d746b829c" host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:21.930839 containerd[1622]: 2025-03-17 17:39:21.831 [INFO][4820] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.79.1/26] handle="k8s-pod-network.a764ec7414fefae9b5f7685d0cd6f7844ff466cc95031e4fe2bda29d746b829c" host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:21.930839 containerd[1622]: 2025-03-17 17:39:21.831 [INFO][4820] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:39:21.930839 containerd[1622]: 2025-03-17 17:39:21.831 [INFO][4820] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.1/26] IPv6=[] ContainerID="a764ec7414fefae9b5f7685d0cd6f7844ff466cc95031e4fe2bda29d746b829c" HandleID="k8s-pod-network.a764ec7414fefae9b5f7685d0cd6f7844ff466cc95031e4fe2bda29d746b829c" Workload="ci--4152--2--2--4--e17a7af1b1-k8s-csi--node--driver--sjhlt-eth0" Mar 17 17:39:21.928633 systemd[1]: run-netns-cni\x2d0b808d82\x2da048\x2dbbd8\x2de3da\x2d6b357a12c5f6.mount: Deactivated successfully. 
Mar 17 17:39:21.931766 containerd[1622]: 2025-03-17 17:39:21.838 [INFO][4745] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a764ec7414fefae9b5f7685d0cd6f7844ff466cc95031e4fe2bda29d746b829c" Namespace="calico-system" Pod="csi-node-driver-sjhlt" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-csi--node--driver--sjhlt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--2--4--e17a7af1b1-k8s-csi--node--driver--sjhlt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"85a3e01e-db6f-4f8d-a16f-72e5c08e4d07", ResourceVersion:"643", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 39, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-2-4-e17a7af1b1", ContainerID:"", Pod:"csi-node-driver-sjhlt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali49caa81318a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:39:21.931766 containerd[1622]: 2025-03-17 17:39:21.838 [INFO][4745] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.79.1/32] 
ContainerID="a764ec7414fefae9b5f7685d0cd6f7844ff466cc95031e4fe2bda29d746b829c" Namespace="calico-system" Pod="csi-node-driver-sjhlt" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-csi--node--driver--sjhlt-eth0" Mar 17 17:39:21.931766 containerd[1622]: 2025-03-17 17:39:21.838 [INFO][4745] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali49caa81318a ContainerID="a764ec7414fefae9b5f7685d0cd6f7844ff466cc95031e4fe2bda29d746b829c" Namespace="calico-system" Pod="csi-node-driver-sjhlt" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-csi--node--driver--sjhlt-eth0" Mar 17 17:39:21.931766 containerd[1622]: 2025-03-17 17:39:21.853 [INFO][4745] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a764ec7414fefae9b5f7685d0cd6f7844ff466cc95031e4fe2bda29d746b829c" Namespace="calico-system" Pod="csi-node-driver-sjhlt" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-csi--node--driver--sjhlt-eth0" Mar 17 17:39:21.931766 containerd[1622]: 2025-03-17 17:39:21.863 [INFO][4745] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a764ec7414fefae9b5f7685d0cd6f7844ff466cc95031e4fe2bda29d746b829c" Namespace="calico-system" Pod="csi-node-driver-sjhlt" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-csi--node--driver--sjhlt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--2--4--e17a7af1b1-k8s-csi--node--driver--sjhlt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"85a3e01e-db6f-4f8d-a16f-72e5c08e4d07", ResourceVersion:"643", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 39, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-2-4-e17a7af1b1", ContainerID:"a764ec7414fefae9b5f7685d0cd6f7844ff466cc95031e4fe2bda29d746b829c", Pod:"csi-node-driver-sjhlt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali49caa81318a", MAC:"16:31:23:76:6f:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:39:21.931766 containerd[1622]: 2025-03-17 17:39:21.905 [INFO][4745] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a764ec7414fefae9b5f7685d0cd6f7844ff466cc95031e4fe2bda29d746b829c" Namespace="calico-system" Pod="csi-node-driver-sjhlt" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-csi--node--driver--sjhlt-eth0" Mar 17 17:39:21.928772 systemd[1]: run-netns-cni\x2dd258896b\x2de9b1\x2db46a\x2dd0a9\x2d8e1d1c5bf4f7.mount: Deactivated successfully. Mar 17 17:39:21.928850 systemd[1]: run-netns-cni\x2d1e8d3ca8\x2df804\x2d5e53\x2dee8b\x2d1fcb045b7c4a.mount: Deactivated successfully. Mar 17 17:39:21.928926 systemd[1]: run-netns-cni\x2d8fea8f22\x2d60d9\x2d30de\x2de5da\x2ddd2246353105.mount: Deactivated successfully. Mar 17 17:39:21.928998 systemd[1]: run-netns-cni\x2dcaf6ecf8\x2dddcc\x2d041c\x2d2d20\x2d31f7065eafae.mount: Deactivated successfully. Mar 17 17:39:21.929071 systemd[1]: run-netns-cni\x2dc8d6ef81\x2d432b\x2d3ee0\x2d2d98\x2d77ab04152540.mount: Deactivated successfully. 
Mar 17 17:39:21.976036 systemd-networkd[1244]: cali40312a05a28: Link UP Mar 17 17:39:21.976903 systemd-networkd[1244]: cali40312a05a28: Gained carrier Mar 17 17:39:22.008031 containerd[1622]: 2025-03-17 17:39:21.434 [INFO][4789] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:39:22.008031 containerd[1622]: 2025-03-17 17:39:21.465 [INFO][4789] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152--2--2--4--e17a7af1b1-k8s-coredns--7db6d8ff4d--nwmf6-eth0 coredns-7db6d8ff4d- kube-system ec122e0b-85ab-491a-9aaa-6410b3df3402 719 0 2025-03-17 17:38:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4152-2-2-4-e17a7af1b1 coredns-7db6d8ff4d-nwmf6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali40312a05a28 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="dd90fe369406f7c632d08c0f022bc2aeb393e0e141f3d02b7533d4b13b9389db" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nwmf6" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-coredns--7db6d8ff4d--nwmf6-" Mar 17 17:39:22.008031 containerd[1622]: 2025-03-17 17:39:21.466 [INFO][4789] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dd90fe369406f7c632d08c0f022bc2aeb393e0e141f3d02b7533d4b13b9389db" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nwmf6" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-coredns--7db6d8ff4d--nwmf6-eth0" Mar 17 17:39:22.008031 containerd[1622]: 2025-03-17 17:39:21.763 [INFO][4818] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dd90fe369406f7c632d08c0f022bc2aeb393e0e141f3d02b7533d4b13b9389db" HandleID="k8s-pod-network.dd90fe369406f7c632d08c0f022bc2aeb393e0e141f3d02b7533d4b13b9389db" Workload="ci--4152--2--2--4--e17a7af1b1-k8s-coredns--7db6d8ff4d--nwmf6-eth0" Mar 17 
17:39:22.008031 containerd[1622]: 2025-03-17 17:39:21.799 [INFO][4818] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dd90fe369406f7c632d08c0f022bc2aeb393e0e141f3d02b7533d4b13b9389db" HandleID="k8s-pod-network.dd90fe369406f7c632d08c0f022bc2aeb393e0e141f3d02b7533d4b13b9389db" Workload="ci--4152--2--2--4--e17a7af1b1-k8s-coredns--7db6d8ff4d--nwmf6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005d8e30), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4152-2-2-4-e17a7af1b1", "pod":"coredns-7db6d8ff4d-nwmf6", "timestamp":"2025-03-17 17:39:21.7630949 +0000 UTC"}, Hostname:"ci-4152-2-2-4-e17a7af1b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:39:22.008031 containerd[1622]: 2025-03-17 17:39:21.800 [INFO][4818] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:39:22.008031 containerd[1622]: 2025-03-17 17:39:21.831 [INFO][4818] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:39:22.008031 containerd[1622]: 2025-03-17 17:39:21.831 [INFO][4818] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152-2-2-4-e17a7af1b1' Mar 17 17:39:22.008031 containerd[1622]: 2025-03-17 17:39:21.836 [INFO][4818] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dd90fe369406f7c632d08c0f022bc2aeb393e0e141f3d02b7533d4b13b9389db" host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.008031 containerd[1622]: 2025-03-17 17:39:21.856 [INFO][4818] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.008031 containerd[1622]: 2025-03-17 17:39:21.883 [INFO][4818] ipam/ipam.go 489: Trying affinity for 192.168.79.0/26 host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.008031 containerd[1622]: 2025-03-17 17:39:21.913 [INFO][4818] ipam/ipam.go 155: Attempting to load block cidr=192.168.79.0/26 host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.008031 containerd[1622]: 2025-03-17 17:39:21.928 [INFO][4818] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.79.0/26 host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.008031 containerd[1622]: 2025-03-17 17:39:21.929 [INFO][4818] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.79.0/26 handle="k8s-pod-network.dd90fe369406f7c632d08c0f022bc2aeb393e0e141f3d02b7533d4b13b9389db" host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.008031 containerd[1622]: 2025-03-17 17:39:21.933 [INFO][4818] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dd90fe369406f7c632d08c0f022bc2aeb393e0e141f3d02b7533d4b13b9389db Mar 17 17:39:22.008031 containerd[1622]: 2025-03-17 17:39:21.946 [INFO][4818] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.79.0/26 handle="k8s-pod-network.dd90fe369406f7c632d08c0f022bc2aeb393e0e141f3d02b7533d4b13b9389db" host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.008031 containerd[1622]: 2025-03-17 17:39:21.958 [INFO][4818] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.79.2/26] block=192.168.79.0/26 handle="k8s-pod-network.dd90fe369406f7c632d08c0f022bc2aeb393e0e141f3d02b7533d4b13b9389db" host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.008031 containerd[1622]: 2025-03-17 17:39:21.958 [INFO][4818] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.79.2/26] handle="k8s-pod-network.dd90fe369406f7c632d08c0f022bc2aeb393e0e141f3d02b7533d4b13b9389db" host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.008031 containerd[1622]: 2025-03-17 17:39:21.959 [INFO][4818] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:39:22.008031 containerd[1622]: 2025-03-17 17:39:21.959 [INFO][4818] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.2/26] IPv6=[] ContainerID="dd90fe369406f7c632d08c0f022bc2aeb393e0e141f3d02b7533d4b13b9389db" HandleID="k8s-pod-network.dd90fe369406f7c632d08c0f022bc2aeb393e0e141f3d02b7533d4b13b9389db" Workload="ci--4152--2--2--4--e17a7af1b1-k8s-coredns--7db6d8ff4d--nwmf6-eth0" Mar 17 17:39:22.008980 containerd[1622]: 2025-03-17 17:39:21.971 [INFO][4789] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dd90fe369406f7c632d08c0f022bc2aeb393e0e141f3d02b7533d4b13b9389db" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nwmf6" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-coredns--7db6d8ff4d--nwmf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--2--4--e17a7af1b1-k8s-coredns--7db6d8ff4d--nwmf6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ec122e0b-85ab-491a-9aaa-6410b3df3402", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 38, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-2-4-e17a7af1b1", ContainerID:"", Pod:"coredns-7db6d8ff4d-nwmf6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40312a05a28", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:39:22.008980 containerd[1622]: 2025-03-17 17:39:21.972 [INFO][4789] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.79.2/32] ContainerID="dd90fe369406f7c632d08c0f022bc2aeb393e0e141f3d02b7533d4b13b9389db" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nwmf6" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-coredns--7db6d8ff4d--nwmf6-eth0" Mar 17 17:39:22.008980 containerd[1622]: 2025-03-17 17:39:21.973 [INFO][4789] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali40312a05a28 ContainerID="dd90fe369406f7c632d08c0f022bc2aeb393e0e141f3d02b7533d4b13b9389db" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nwmf6" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-coredns--7db6d8ff4d--nwmf6-eth0" Mar 17 17:39:22.008980 containerd[1622]: 2025-03-17 17:39:21.977 [INFO][4789] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="dd90fe369406f7c632d08c0f022bc2aeb393e0e141f3d02b7533d4b13b9389db" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nwmf6" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-coredns--7db6d8ff4d--nwmf6-eth0" Mar 17 17:39:22.008980 containerd[1622]: 2025-03-17 17:39:21.978 [INFO][4789] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dd90fe369406f7c632d08c0f022bc2aeb393e0e141f3d02b7533d4b13b9389db" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nwmf6" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-coredns--7db6d8ff4d--nwmf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--2--4--e17a7af1b1-k8s-coredns--7db6d8ff4d--nwmf6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ec122e0b-85ab-491a-9aaa-6410b3df3402", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 38, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-2-4-e17a7af1b1", ContainerID:"dd90fe369406f7c632d08c0f022bc2aeb393e0e141f3d02b7533d4b13b9389db", Pod:"coredns-7db6d8ff4d-nwmf6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40312a05a28", MAC:"26:1e:20:a4:23:a0", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:39:22.008980 containerd[1622]: 2025-03-17 17:39:21.992 [INFO][4789] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dd90fe369406f7c632d08c0f022bc2aeb393e0e141f3d02b7533d4b13b9389db" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nwmf6" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-coredns--7db6d8ff4d--nwmf6-eth0" Mar 17 17:39:22.026656 containerd[1622]: time="2025-03-17T17:39:22.019042219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:39:22.026656 containerd[1622]: time="2025-03-17T17:39:22.019108865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:39:22.026656 containerd[1622]: time="2025-03-17T17:39:22.019124826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:22.026656 containerd[1622]: time="2025-03-17T17:39:22.019235796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:22.077683 systemd-networkd[1244]: calie6ff244126c: Link UP Mar 17 17:39:22.093578 systemd-networkd[1244]: calie6ff244126c: Gained carrier Mar 17 17:39:22.096427 systemd[1]: run-containerd-runc-k8s.io-b6b4d4870ca888b538582edde1a1adbce7084bf5537d09405df34c7aa67ae0a1-runc.F2MaEr.mount: Deactivated successfully. Mar 17 17:39:22.105441 containerd[1622]: time="2025-03-17T17:39:22.101892134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:39:22.105441 containerd[1622]: time="2025-03-17T17:39:22.101975461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:39:22.105441 containerd[1622]: time="2025-03-17T17:39:22.101991262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:22.105441 containerd[1622]: time="2025-03-17T17:39:22.102095231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:22.145119 containerd[1622]: 2025-03-17 17:39:21.487 [INFO][4754] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:39:22.145119 containerd[1622]: 2025-03-17 17:39:21.555 [INFO][4754] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152--2--2--4--e17a7af1b1-k8s-calico--kube--controllers--6b68c854f8--zw5m5-eth0 calico-kube-controllers-6b68c854f8- calico-system e0f85a4a-625a-454c-bfe1-41a08087ea5f 722 0 2025-03-17 17:39:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6b68c854f8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4152-2-2-4-e17a7af1b1 calico-kube-controllers-6b68c854f8-zw5m5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie6ff244126c [] []}} ContainerID="92a6df4bc7e0d16c72e2abee9150a1956b970f9a8422276276a66c32582f4b38" Namespace="calico-system" Pod="calico-kube-controllers-6b68c854f8-zw5m5" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-calico--kube--controllers--6b68c854f8--zw5m5-" Mar 17 17:39:22.145119 containerd[1622]: 2025-03-17 17:39:21.555 [INFO][4754] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="92a6df4bc7e0d16c72e2abee9150a1956b970f9a8422276276a66c32582f4b38" Namespace="calico-system" Pod="calico-kube-controllers-6b68c854f8-zw5m5" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-calico--kube--controllers--6b68c854f8--zw5m5-eth0" Mar 17 17:39:22.145119 containerd[1622]: 2025-03-17 17:39:21.811 [INFO][4835] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="92a6df4bc7e0d16c72e2abee9150a1956b970f9a8422276276a66c32582f4b38" 
HandleID="k8s-pod-network.92a6df4bc7e0d16c72e2abee9150a1956b970f9a8422276276a66c32582f4b38" Workload="ci--4152--2--2--4--e17a7af1b1-k8s-calico--kube--controllers--6b68c854f8--zw5m5-eth0" Mar 17 17:39:22.145119 containerd[1622]: 2025-03-17 17:39:21.837 [INFO][4835] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="92a6df4bc7e0d16c72e2abee9150a1956b970f9a8422276276a66c32582f4b38" HandleID="k8s-pod-network.92a6df4bc7e0d16c72e2abee9150a1956b970f9a8422276276a66c32582f4b38" Workload="ci--4152--2--2--4--e17a7af1b1-k8s-calico--kube--controllers--6b68c854f8--zw5m5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032f4f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4152-2-2-4-e17a7af1b1", "pod":"calico-kube-controllers-6b68c854f8-zw5m5", "timestamp":"2025-03-17 17:39:21.809744901 +0000 UTC"}, Hostname:"ci-4152-2-2-4-e17a7af1b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:39:22.145119 containerd[1622]: 2025-03-17 17:39:21.837 [INFO][4835] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:39:22.145119 containerd[1622]: 2025-03-17 17:39:21.958 [INFO][4835] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:39:22.145119 containerd[1622]: 2025-03-17 17:39:21.959 [INFO][4835] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152-2-2-4-e17a7af1b1' Mar 17 17:39:22.145119 containerd[1622]: 2025-03-17 17:39:21.966 [INFO][4835] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.92a6df4bc7e0d16c72e2abee9150a1956b970f9a8422276276a66c32582f4b38" host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.145119 containerd[1622]: 2025-03-17 17:39:21.989 [INFO][4835] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.145119 containerd[1622]: 2025-03-17 17:39:22.002 [INFO][4835] ipam/ipam.go 489: Trying affinity for 192.168.79.0/26 host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.145119 containerd[1622]: 2025-03-17 17:39:22.005 [INFO][4835] ipam/ipam.go 155: Attempting to load block cidr=192.168.79.0/26 host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.145119 containerd[1622]: 2025-03-17 17:39:22.009 [INFO][4835] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.79.0/26 host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.145119 containerd[1622]: 2025-03-17 17:39:22.010 [INFO][4835] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.79.0/26 handle="k8s-pod-network.92a6df4bc7e0d16c72e2abee9150a1956b970f9a8422276276a66c32582f4b38" host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.145119 containerd[1622]: 2025-03-17 17:39:22.014 [INFO][4835] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.92a6df4bc7e0d16c72e2abee9150a1956b970f9a8422276276a66c32582f4b38 Mar 17 17:39:22.145119 containerd[1622]: 2025-03-17 17:39:22.026 [INFO][4835] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.79.0/26 handle="k8s-pod-network.92a6df4bc7e0d16c72e2abee9150a1956b970f9a8422276276a66c32582f4b38" host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.145119 containerd[1622]: 2025-03-17 17:39:22.046 [INFO][4835] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.79.3/26] block=192.168.79.0/26 handle="k8s-pod-network.92a6df4bc7e0d16c72e2abee9150a1956b970f9a8422276276a66c32582f4b38" host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.145119 containerd[1622]: 2025-03-17 17:39:22.046 [INFO][4835] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.79.3/26] handle="k8s-pod-network.92a6df4bc7e0d16c72e2abee9150a1956b970f9a8422276276a66c32582f4b38" host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.145119 containerd[1622]: 2025-03-17 17:39:22.046 [INFO][4835] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:39:22.145119 containerd[1622]: 2025-03-17 17:39:22.046 [INFO][4835] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.3/26] IPv6=[] ContainerID="92a6df4bc7e0d16c72e2abee9150a1956b970f9a8422276276a66c32582f4b38" HandleID="k8s-pod-network.92a6df4bc7e0d16c72e2abee9150a1956b970f9a8422276276a66c32582f4b38" Workload="ci--4152--2--2--4--e17a7af1b1-k8s-calico--kube--controllers--6b68c854f8--zw5m5-eth0" Mar 17 17:39:22.145785 containerd[1622]: 2025-03-17 17:39:22.062 [INFO][4754] cni-plugin/k8s.go 386: Populated endpoint ContainerID="92a6df4bc7e0d16c72e2abee9150a1956b970f9a8422276276a66c32582f4b38" Namespace="calico-system" Pod="calico-kube-controllers-6b68c854f8-zw5m5" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-calico--kube--controllers--6b68c854f8--zw5m5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--2--4--e17a7af1b1-k8s-calico--kube--controllers--6b68c854f8--zw5m5-eth0", GenerateName:"calico-kube-controllers-6b68c854f8-", Namespace:"calico-system", SelfLink:"", UID:"e0f85a4a-625a-454c-bfe1-41a08087ea5f", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 39, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b68c854f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-2-4-e17a7af1b1", ContainerID:"", Pod:"calico-kube-controllers-6b68c854f8-zw5m5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie6ff244126c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:39:22.145785 containerd[1622]: 2025-03-17 17:39:22.063 [INFO][4754] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.79.3/32] ContainerID="92a6df4bc7e0d16c72e2abee9150a1956b970f9a8422276276a66c32582f4b38" Namespace="calico-system" Pod="calico-kube-controllers-6b68c854f8-zw5m5" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-calico--kube--controllers--6b68c854f8--zw5m5-eth0" Mar 17 17:39:22.145785 containerd[1622]: 2025-03-17 17:39:22.063 [INFO][4754] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie6ff244126c ContainerID="92a6df4bc7e0d16c72e2abee9150a1956b970f9a8422276276a66c32582f4b38" Namespace="calico-system" Pod="calico-kube-controllers-6b68c854f8-zw5m5" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-calico--kube--controllers--6b68c854f8--zw5m5-eth0" Mar 17 17:39:22.145785 containerd[1622]: 2025-03-17 17:39:22.108 [INFO][4754] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="92a6df4bc7e0d16c72e2abee9150a1956b970f9a8422276276a66c32582f4b38" Namespace="calico-system" Pod="calico-kube-controllers-6b68c854f8-zw5m5" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-calico--kube--controllers--6b68c854f8--zw5m5-eth0" Mar 17 17:39:22.145785 containerd[1622]: 2025-03-17 17:39:22.110 [INFO][4754] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="92a6df4bc7e0d16c72e2abee9150a1956b970f9a8422276276a66c32582f4b38" Namespace="calico-system" Pod="calico-kube-controllers-6b68c854f8-zw5m5" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-calico--kube--controllers--6b68c854f8--zw5m5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--2--4--e17a7af1b1-k8s-calico--kube--controllers--6b68c854f8--zw5m5-eth0", GenerateName:"calico-kube-controllers-6b68c854f8-", Namespace:"calico-system", SelfLink:"", UID:"e0f85a4a-625a-454c-bfe1-41a08087ea5f", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 39, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b68c854f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-2-4-e17a7af1b1", ContainerID:"92a6df4bc7e0d16c72e2abee9150a1956b970f9a8422276276a66c32582f4b38", Pod:"calico-kube-controllers-6b68c854f8-zw5m5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.3/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie6ff244126c", MAC:"1a:67:04:f7:ef:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:39:22.145785 containerd[1622]: 2025-03-17 17:39:22.138 [INFO][4754] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="92a6df4bc7e0d16c72e2abee9150a1956b970f9a8422276276a66c32582f4b38" Namespace="calico-system" Pod="calico-kube-controllers-6b68c854f8-zw5m5" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-calico--kube--controllers--6b68c854f8--zw5m5-eth0" Mar 17 17:39:22.184520 systemd-networkd[1244]: cali0d20b247352: Link UP Mar 17 17:39:22.185319 systemd-networkd[1244]: cali0d20b247352: Gained carrier Mar 17 17:39:22.228695 containerd[1622]: 2025-03-17 17:39:21.489 [INFO][4765] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:39:22.228695 containerd[1622]: 2025-03-17 17:39:21.559 [INFO][4765] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152--2--2--4--e17a7af1b1-k8s-calico--apiserver--8bd6cf464--lpb8f-eth0 calico-apiserver-8bd6cf464- calico-apiserver 734e536d-3518-4cef-9dfc-3553408c92a2 723 0 2025-03-17 17:39:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8bd6cf464 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4152-2-2-4-e17a7af1b1 calico-apiserver-8bd6cf464-lpb8f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0d20b247352 [] []}} ContainerID="c824d571639cfc635b907dceef45882e4bac908d3d4a5f370ee9957b474be6d7" Namespace="calico-apiserver" Pod="calico-apiserver-8bd6cf464-lpb8f" 
WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-calico--apiserver--8bd6cf464--lpb8f-" Mar 17 17:39:22.228695 containerd[1622]: 2025-03-17 17:39:21.559 [INFO][4765] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c824d571639cfc635b907dceef45882e4bac908d3d4a5f370ee9957b474be6d7" Namespace="calico-apiserver" Pod="calico-apiserver-8bd6cf464-lpb8f" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-calico--apiserver--8bd6cf464--lpb8f-eth0" Mar 17 17:39:22.228695 containerd[1622]: 2025-03-17 17:39:21.879 [INFO][4849] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c824d571639cfc635b907dceef45882e4bac908d3d4a5f370ee9957b474be6d7" HandleID="k8s-pod-network.c824d571639cfc635b907dceef45882e4bac908d3d4a5f370ee9957b474be6d7" Workload="ci--4152--2--2--4--e17a7af1b1-k8s-calico--apiserver--8bd6cf464--lpb8f-eth0" Mar 17 17:39:22.228695 containerd[1622]: 2025-03-17 17:39:21.942 [INFO][4849] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c824d571639cfc635b907dceef45882e4bac908d3d4a5f370ee9957b474be6d7" HandleID="k8s-pod-network.c824d571639cfc635b907dceef45882e4bac908d3d4a5f370ee9957b474be6d7" Workload="ci--4152--2--2--4--e17a7af1b1-k8s-calico--apiserver--8bd6cf464--lpb8f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400047a2e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4152-2-2-4-e17a7af1b1", "pod":"calico-apiserver-8bd6cf464-lpb8f", "timestamp":"2025-03-17 17:39:21.879243364 +0000 UTC"}, Hostname:"ci-4152-2-2-4-e17a7af1b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:39:22.228695 containerd[1622]: 2025-03-17 17:39:21.942 [INFO][4849] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Mar 17 17:39:22.228695 containerd[1622]: 2025-03-17 17:39:22.047 [INFO][4849] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:39:22.228695 containerd[1622]: 2025-03-17 17:39:22.047 [INFO][4849] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152-2-2-4-e17a7af1b1' Mar 17 17:39:22.228695 containerd[1622]: 2025-03-17 17:39:22.055 [INFO][4849] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c824d571639cfc635b907dceef45882e4bac908d3d4a5f370ee9957b474be6d7" host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.228695 containerd[1622]: 2025-03-17 17:39:22.077 [INFO][4849] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.228695 containerd[1622]: 2025-03-17 17:39:22.119 [INFO][4849] ipam/ipam.go 489: Trying affinity for 192.168.79.0/26 host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.228695 containerd[1622]: 2025-03-17 17:39:22.133 [INFO][4849] ipam/ipam.go 155: Attempting to load block cidr=192.168.79.0/26 host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.228695 containerd[1622]: 2025-03-17 17:39:22.142 [INFO][4849] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.79.0/26 host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.228695 containerd[1622]: 2025-03-17 17:39:22.142 [INFO][4849] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.79.0/26 handle="k8s-pod-network.c824d571639cfc635b907dceef45882e4bac908d3d4a5f370ee9957b474be6d7" host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.228695 containerd[1622]: 2025-03-17 17:39:22.148 [INFO][4849] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c824d571639cfc635b907dceef45882e4bac908d3d4a5f370ee9957b474be6d7 Mar 17 17:39:22.228695 containerd[1622]: 2025-03-17 17:39:22.157 [INFO][4849] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.79.0/26 handle="k8s-pod-network.c824d571639cfc635b907dceef45882e4bac908d3d4a5f370ee9957b474be6d7" 
host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.228695 containerd[1622]: 2025-03-17 17:39:22.168 [INFO][4849] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.79.4/26] block=192.168.79.0/26 handle="k8s-pod-network.c824d571639cfc635b907dceef45882e4bac908d3d4a5f370ee9957b474be6d7" host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.228695 containerd[1622]: 2025-03-17 17:39:22.168 [INFO][4849] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.79.4/26] handle="k8s-pod-network.c824d571639cfc635b907dceef45882e4bac908d3d4a5f370ee9957b474be6d7" host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.228695 containerd[1622]: 2025-03-17 17:39:22.168 [INFO][4849] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:39:22.228695 containerd[1622]: 2025-03-17 17:39:22.168 [INFO][4849] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.4/26] IPv6=[] ContainerID="c824d571639cfc635b907dceef45882e4bac908d3d4a5f370ee9957b474be6d7" HandleID="k8s-pod-network.c824d571639cfc635b907dceef45882e4bac908d3d4a5f370ee9957b474be6d7" Workload="ci--4152--2--2--4--e17a7af1b1-k8s-calico--apiserver--8bd6cf464--lpb8f-eth0" Mar 17 17:39:22.229368 containerd[1622]: 2025-03-17 17:39:22.175 [INFO][4765] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c824d571639cfc635b907dceef45882e4bac908d3d4a5f370ee9957b474be6d7" Namespace="calico-apiserver" Pod="calico-apiserver-8bd6cf464-lpb8f" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-calico--apiserver--8bd6cf464--lpb8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--2--4--e17a7af1b1-k8s-calico--apiserver--8bd6cf464--lpb8f-eth0", GenerateName:"calico-apiserver-8bd6cf464-", Namespace:"calico-apiserver", SelfLink:"", UID:"734e536d-3518-4cef-9dfc-3553408c92a2", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 39, 6, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8bd6cf464", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-2-4-e17a7af1b1", ContainerID:"", Pod:"calico-apiserver-8bd6cf464-lpb8f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0d20b247352", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:39:22.229368 containerd[1622]: 2025-03-17 17:39:22.176 [INFO][4765] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.79.4/32] ContainerID="c824d571639cfc635b907dceef45882e4bac908d3d4a5f370ee9957b474be6d7" Namespace="calico-apiserver" Pod="calico-apiserver-8bd6cf464-lpb8f" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-calico--apiserver--8bd6cf464--lpb8f-eth0" Mar 17 17:39:22.229368 containerd[1622]: 2025-03-17 17:39:22.176 [INFO][4765] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0d20b247352 ContainerID="c824d571639cfc635b907dceef45882e4bac908d3d4a5f370ee9957b474be6d7" Namespace="calico-apiserver" Pod="calico-apiserver-8bd6cf464-lpb8f" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-calico--apiserver--8bd6cf464--lpb8f-eth0" Mar 17 17:39:22.229368 containerd[1622]: 2025-03-17 17:39:22.186 [INFO][4765] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="c824d571639cfc635b907dceef45882e4bac908d3d4a5f370ee9957b474be6d7" Namespace="calico-apiserver" Pod="calico-apiserver-8bd6cf464-lpb8f" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-calico--apiserver--8bd6cf464--lpb8f-eth0" Mar 17 17:39:22.229368 containerd[1622]: 2025-03-17 17:39:22.193 [INFO][4765] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c824d571639cfc635b907dceef45882e4bac908d3d4a5f370ee9957b474be6d7" Namespace="calico-apiserver" Pod="calico-apiserver-8bd6cf464-lpb8f" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-calico--apiserver--8bd6cf464--lpb8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--2--4--e17a7af1b1-k8s-calico--apiserver--8bd6cf464--lpb8f-eth0", GenerateName:"calico-apiserver-8bd6cf464-", Namespace:"calico-apiserver", SelfLink:"", UID:"734e536d-3518-4cef-9dfc-3553408c92a2", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 39, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8bd6cf464", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-2-4-e17a7af1b1", ContainerID:"c824d571639cfc635b907dceef45882e4bac908d3d4a5f370ee9957b474be6d7", Pod:"calico-apiserver-8bd6cf464-lpb8f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0d20b247352", MAC:"92:b7:27:b4:ca:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:39:22.229368 containerd[1622]: 2025-03-17 17:39:22.214 [INFO][4765] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c824d571639cfc635b907dceef45882e4bac908d3d4a5f370ee9957b474be6d7" Namespace="calico-apiserver" Pod="calico-apiserver-8bd6cf464-lpb8f" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-calico--apiserver--8bd6cf464--lpb8f-eth0" Mar 17 17:39:22.243041 containerd[1622]: time="2025-03-17T17:39:22.235969869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:39:22.243041 containerd[1622]: time="2025-03-17T17:39:22.242636548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:39:22.243041 containerd[1622]: time="2025-03-17T17:39:22.242656070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:22.243041 containerd[1622]: time="2025-03-17T17:39:22.242754398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:22.281909 systemd-networkd[1244]: cali035784f07da: Link UP Mar 17 17:39:22.283554 systemd-networkd[1244]: cali035784f07da: Gained carrier Mar 17 17:39:22.298991 containerd[1622]: time="2025-03-17T17:39:22.295448102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sjhlt,Uid:85a3e01e-db6f-4f8d-a16f-72e5c08e4d07,Namespace:calico-system,Attempt:5,} returns sandbox id \"a764ec7414fefae9b5f7685d0cd6f7844ff466cc95031e4fe2bda29d746b829c\"" Mar 17 17:39:22.311083 containerd[1622]: time="2025-03-17T17:39:22.311032090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nwmf6,Uid:ec122e0b-85ab-491a-9aaa-6410b3df3402,Namespace:kube-system,Attempt:5,} returns sandbox id \"dd90fe369406f7c632d08c0f022bc2aeb393e0e141f3d02b7533d4b13b9389db\"" Mar 17 17:39:22.314749 containerd[1622]: time="2025-03-17T17:39:22.314715639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 17 17:39:22.316835 containerd[1622]: 2025-03-17 17:39:21.581 [INFO][4800] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:39:22.316835 containerd[1622]: 2025-03-17 17:39:21.631 [INFO][4800] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152--2--2--4--e17a7af1b1-k8s-calico--apiserver--8bd6cf464--82tgf-eth0 calico-apiserver-8bd6cf464- calico-apiserver aad6eeea-6774-44b9-911a-6a172c0b0cc4 721 0 2025-03-17 17:39:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8bd6cf464 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4152-2-2-4-e17a7af1b1 calico-apiserver-8bd6cf464-82tgf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali035784f07da [] []}} 
ContainerID="03fddaf6aaa0c166eb1ca5da66f0454995a2921ae64d5a2cb763e9cc961e96c8" Namespace="calico-apiserver" Pod="calico-apiserver-8bd6cf464-82tgf" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-calico--apiserver--8bd6cf464--82tgf-" Mar 17 17:39:22.316835 containerd[1622]: 2025-03-17 17:39:21.635 [INFO][4800] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="03fddaf6aaa0c166eb1ca5da66f0454995a2921ae64d5a2cb763e9cc961e96c8" Namespace="calico-apiserver" Pod="calico-apiserver-8bd6cf464-82tgf" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-calico--apiserver--8bd6cf464--82tgf-eth0" Mar 17 17:39:22.316835 containerd[1622]: 2025-03-17 17:39:21.886 [INFO][4858] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03fddaf6aaa0c166eb1ca5da66f0454995a2921ae64d5a2cb763e9cc961e96c8" HandleID="k8s-pod-network.03fddaf6aaa0c166eb1ca5da66f0454995a2921ae64d5a2cb763e9cc961e96c8" Workload="ci--4152--2--2--4--e17a7af1b1-k8s-calico--apiserver--8bd6cf464--82tgf-eth0" Mar 17 17:39:22.316835 containerd[1622]: 2025-03-17 17:39:21.946 [INFO][4858] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="03fddaf6aaa0c166eb1ca5da66f0454995a2921ae64d5a2cb763e9cc961e96c8" HandleID="k8s-pod-network.03fddaf6aaa0c166eb1ca5da66f0454995a2921ae64d5a2cb763e9cc961e96c8" Workload="ci--4152--2--2--4--e17a7af1b1-k8s-calico--apiserver--8bd6cf464--82tgf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039aef0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4152-2-2-4-e17a7af1b1", "pod":"calico-apiserver-8bd6cf464-82tgf", "timestamp":"2025-03-17 17:39:21.886570146 +0000 UTC"}, Hostname:"ci-4152-2-2-4-e17a7af1b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:39:22.316835 containerd[1622]: 2025-03-17 17:39:21.947 [INFO][4858] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:39:22.316835 containerd[1622]: 2025-03-17 17:39:22.169 [INFO][4858] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:39:22.316835 containerd[1622]: 2025-03-17 17:39:22.170 [INFO][4858] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152-2-2-4-e17a7af1b1' Mar 17 17:39:22.316835 containerd[1622]: 2025-03-17 17:39:22.172 [INFO][4858] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.03fddaf6aaa0c166eb1ca5da66f0454995a2921ae64d5a2cb763e9cc961e96c8" host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.316835 containerd[1622]: 2025-03-17 17:39:22.183 [INFO][4858] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.316835 containerd[1622]: 2025-03-17 17:39:22.201 [INFO][4858] ipam/ipam.go 489: Trying affinity for 192.168.79.0/26 host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.316835 containerd[1622]: 2025-03-17 17:39:22.205 [INFO][4858] ipam/ipam.go 155: Attempting to load block cidr=192.168.79.0/26 host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.316835 containerd[1622]: 2025-03-17 17:39:22.216 [INFO][4858] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.79.0/26 host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.316835 containerd[1622]: 2025-03-17 17:39:22.216 [INFO][4858] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.79.0/26 handle="k8s-pod-network.03fddaf6aaa0c166eb1ca5da66f0454995a2921ae64d5a2cb763e9cc961e96c8" host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.316835 containerd[1622]: 2025-03-17 17:39:22.224 [INFO][4858] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.03fddaf6aaa0c166eb1ca5da66f0454995a2921ae64d5a2cb763e9cc961e96c8 Mar 17 17:39:22.316835 containerd[1622]: 2025-03-17 17:39:22.240 [INFO][4858] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.79.0/26 
handle="k8s-pod-network.03fddaf6aaa0c166eb1ca5da66f0454995a2921ae64d5a2cb763e9cc961e96c8" host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.316835 containerd[1622]: 2025-03-17 17:39:22.258 [INFO][4858] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.79.5/26] block=192.168.79.0/26 handle="k8s-pod-network.03fddaf6aaa0c166eb1ca5da66f0454995a2921ae64d5a2cb763e9cc961e96c8" host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.316835 containerd[1622]: 2025-03-17 17:39:22.258 [INFO][4858] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.79.5/26] handle="k8s-pod-network.03fddaf6aaa0c166eb1ca5da66f0454995a2921ae64d5a2cb763e9cc961e96c8" host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.316835 containerd[1622]: 2025-03-17 17:39:22.258 [INFO][4858] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:39:22.316835 containerd[1622]: 2025-03-17 17:39:22.258 [INFO][4858] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.5/26] IPv6=[] ContainerID="03fddaf6aaa0c166eb1ca5da66f0454995a2921ae64d5a2cb763e9cc961e96c8" HandleID="k8s-pod-network.03fddaf6aaa0c166eb1ca5da66f0454995a2921ae64d5a2cb763e9cc961e96c8" Workload="ci--4152--2--2--4--e17a7af1b1-k8s-calico--apiserver--8bd6cf464--82tgf-eth0" Mar 17 17:39:22.317866 containerd[1622]: 2025-03-17 17:39:22.271 [INFO][4800] cni-plugin/k8s.go 386: Populated endpoint ContainerID="03fddaf6aaa0c166eb1ca5da66f0454995a2921ae64d5a2cb763e9cc961e96c8" Namespace="calico-apiserver" Pod="calico-apiserver-8bd6cf464-82tgf" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-calico--apiserver--8bd6cf464--82tgf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--2--4--e17a7af1b1-k8s-calico--apiserver--8bd6cf464--82tgf-eth0", GenerateName:"calico-apiserver-8bd6cf464-", Namespace:"calico-apiserver", SelfLink:"", UID:"aad6eeea-6774-44b9-911a-6a172c0b0cc4", ResourceVersion:"721", Generation:0, 
CreationTimestamp:time.Date(2025, time.March, 17, 17, 39, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8bd6cf464", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-2-4-e17a7af1b1", ContainerID:"", Pod:"calico-apiserver-8bd6cf464-82tgf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali035784f07da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:39:22.317866 containerd[1622]: 2025-03-17 17:39:22.271 [INFO][4800] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.79.5/32] ContainerID="03fddaf6aaa0c166eb1ca5da66f0454995a2921ae64d5a2cb763e9cc961e96c8" Namespace="calico-apiserver" Pod="calico-apiserver-8bd6cf464-82tgf" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-calico--apiserver--8bd6cf464--82tgf-eth0" Mar 17 17:39:22.317866 containerd[1622]: 2025-03-17 17:39:22.271 [INFO][4800] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali035784f07da ContainerID="03fddaf6aaa0c166eb1ca5da66f0454995a2921ae64d5a2cb763e9cc961e96c8" Namespace="calico-apiserver" Pod="calico-apiserver-8bd6cf464-82tgf" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-calico--apiserver--8bd6cf464--82tgf-eth0" Mar 17 17:39:22.317866 containerd[1622]: 2025-03-17 17:39:22.284 [INFO][4800] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="03fddaf6aaa0c166eb1ca5da66f0454995a2921ae64d5a2cb763e9cc961e96c8" Namespace="calico-apiserver" Pod="calico-apiserver-8bd6cf464-82tgf" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-calico--apiserver--8bd6cf464--82tgf-eth0" Mar 17 17:39:22.317866 containerd[1622]: 2025-03-17 17:39:22.288 [INFO][4800] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="03fddaf6aaa0c166eb1ca5da66f0454995a2921ae64d5a2cb763e9cc961e96c8" Namespace="calico-apiserver" Pod="calico-apiserver-8bd6cf464-82tgf" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-calico--apiserver--8bd6cf464--82tgf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--2--4--e17a7af1b1-k8s-calico--apiserver--8bd6cf464--82tgf-eth0", GenerateName:"calico-apiserver-8bd6cf464-", Namespace:"calico-apiserver", SelfLink:"", UID:"aad6eeea-6774-44b9-911a-6a172c0b0cc4", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 39, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8bd6cf464", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-2-4-e17a7af1b1", ContainerID:"03fddaf6aaa0c166eb1ca5da66f0454995a2921ae64d5a2cb763e9cc961e96c8", Pod:"calico-apiserver-8bd6cf464-82tgf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali035784f07da", MAC:"42:2b:99:34:8f:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:39:22.317866 containerd[1622]: 2025-03-17 17:39:22.310 [INFO][4800] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="03fddaf6aaa0c166eb1ca5da66f0454995a2921ae64d5a2cb763e9cc961e96c8" Namespace="calico-apiserver" Pod="calico-apiserver-8bd6cf464-82tgf" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-calico--apiserver--8bd6cf464--82tgf-eth0" Mar 17 17:39:22.318083 containerd[1622]: time="2025-03-17T17:39:22.318001795Z" level=info msg="CreateContainer within sandbox \"dd90fe369406f7c632d08c0f022bc2aeb393e0e141f3d02b7533d4b13b9389db\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:39:22.354166 containerd[1622]: time="2025-03-17T17:39:22.351594895Z" level=info msg="CreateContainer within sandbox \"dd90fe369406f7c632d08c0f022bc2aeb393e0e141f3d02b7533d4b13b9389db\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"799e3a92edbe2b5c521eac0cb0757fdb08c86b8904b1d6f19dc5a28c08552c69\"" Mar 17 17:39:22.354166 containerd[1622]: time="2025-03-17T17:39:22.352901044Z" level=info msg="StartContainer for \"799e3a92edbe2b5c521eac0cb0757fdb08c86b8904b1d6f19dc5a28c08552c69\"" Mar 17 17:39:22.378101 systemd-networkd[1244]: calied0af949e4e: Link UP Mar 17 17:39:22.381071 systemd-networkd[1244]: calied0af949e4e: Gained carrier Mar 17 17:39:22.415018 containerd[1622]: 2025-03-17 17:39:21.564 [INFO][4778] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:39:22.415018 containerd[1622]: 2025-03-17 17:39:21.608 [INFO][4778] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152--2--2--4--e17a7af1b1-k8s-coredns--7db6d8ff4d--7t9bj-eth0 coredns-7db6d8ff4d- kube-system 
e34046eb-2ab6-4ffc-b664-99e06f707e68 720 0 2025-03-17 17:38:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4152-2-2-4-e17a7af1b1 coredns-7db6d8ff4d-7t9bj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calied0af949e4e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="22971c818cc66552af100fa92e25bf9011a205050ff441b2efbdee21dd4bd701" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7t9bj" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-coredns--7db6d8ff4d--7t9bj-" Mar 17 17:39:22.415018 containerd[1622]: 2025-03-17 17:39:21.608 [INFO][4778] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="22971c818cc66552af100fa92e25bf9011a205050ff441b2efbdee21dd4bd701" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7t9bj" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-coredns--7db6d8ff4d--7t9bj-eth0" Mar 17 17:39:22.415018 containerd[1622]: 2025-03-17 17:39:21.917 [INFO][4845] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="22971c818cc66552af100fa92e25bf9011a205050ff441b2efbdee21dd4bd701" HandleID="k8s-pod-network.22971c818cc66552af100fa92e25bf9011a205050ff441b2efbdee21dd4bd701" Workload="ci--4152--2--2--4--e17a7af1b1-k8s-coredns--7db6d8ff4d--7t9bj-eth0" Mar 17 17:39:22.415018 containerd[1622]: 2025-03-17 17:39:21.976 [INFO][4845] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="22971c818cc66552af100fa92e25bf9011a205050ff441b2efbdee21dd4bd701" HandleID="k8s-pod-network.22971c818cc66552af100fa92e25bf9011a205050ff441b2efbdee21dd4bd701" Workload="ci--4152--2--2--4--e17a7af1b1-k8s-coredns--7db6d8ff4d--7t9bj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a3370), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4152-2-2-4-e17a7af1b1", "pod":"coredns-7db6d8ff4d-7t9bj", 
"timestamp":"2025-03-17 17:39:21.91710946 +0000 UTC"}, Hostname:"ci-4152-2-2-4-e17a7af1b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:39:22.415018 containerd[1622]: 2025-03-17 17:39:21.977 [INFO][4845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:39:22.415018 containerd[1622]: 2025-03-17 17:39:22.259 [INFO][4845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:39:22.415018 containerd[1622]: 2025-03-17 17:39:22.259 [INFO][4845] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152-2-2-4-e17a7af1b1' Mar 17 17:39:22.415018 containerd[1622]: 2025-03-17 17:39:22.263 [INFO][4845] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.22971c818cc66552af100fa92e25bf9011a205050ff441b2efbdee21dd4bd701" host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.415018 containerd[1622]: 2025-03-17 17:39:22.294 [INFO][4845] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.415018 containerd[1622]: 2025-03-17 17:39:22.321 [INFO][4845] ipam/ipam.go 489: Trying affinity for 192.168.79.0/26 host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.415018 containerd[1622]: 2025-03-17 17:39:22.326 [INFO][4845] ipam/ipam.go 155: Attempting to load block cidr=192.168.79.0/26 host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.415018 containerd[1622]: 2025-03-17 17:39:22.332 [INFO][4845] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.79.0/26 host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.415018 containerd[1622]: 2025-03-17 17:39:22.333 [INFO][4845] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.79.0/26 handle="k8s-pod-network.22971c818cc66552af100fa92e25bf9011a205050ff441b2efbdee21dd4bd701" host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.415018 
containerd[1622]: 2025-03-17 17:39:22.338 [INFO][4845] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.22971c818cc66552af100fa92e25bf9011a205050ff441b2efbdee21dd4bd701 Mar 17 17:39:22.415018 containerd[1622]: 2025-03-17 17:39:22.348 [INFO][4845] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.79.0/26 handle="k8s-pod-network.22971c818cc66552af100fa92e25bf9011a205050ff441b2efbdee21dd4bd701" host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.415018 containerd[1622]: 2025-03-17 17:39:22.362 [INFO][4845] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.79.6/26] block=192.168.79.0/26 handle="k8s-pod-network.22971c818cc66552af100fa92e25bf9011a205050ff441b2efbdee21dd4bd701" host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.415018 containerd[1622]: 2025-03-17 17:39:22.362 [INFO][4845] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.79.6/26] handle="k8s-pod-network.22971c818cc66552af100fa92e25bf9011a205050ff441b2efbdee21dd4bd701" host="ci-4152-2-2-4-e17a7af1b1" Mar 17 17:39:22.415018 containerd[1622]: 2025-03-17 17:39:22.362 [INFO][4845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 17:39:22.415018 containerd[1622]: 2025-03-17 17:39:22.362 [INFO][4845] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.6/26] IPv6=[] ContainerID="22971c818cc66552af100fa92e25bf9011a205050ff441b2efbdee21dd4bd701" HandleID="k8s-pod-network.22971c818cc66552af100fa92e25bf9011a205050ff441b2efbdee21dd4bd701" Workload="ci--4152--2--2--4--e17a7af1b1-k8s-coredns--7db6d8ff4d--7t9bj-eth0" Mar 17 17:39:22.415610 containerd[1622]: 2025-03-17 17:39:22.367 [INFO][4778] cni-plugin/k8s.go 386: Populated endpoint ContainerID="22971c818cc66552af100fa92e25bf9011a205050ff441b2efbdee21dd4bd701" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7t9bj" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-coredns--7db6d8ff4d--7t9bj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--2--4--e17a7af1b1-k8s-coredns--7db6d8ff4d--7t9bj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e34046eb-2ab6-4ffc-b664-99e06f707e68", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 38, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-2-4-e17a7af1b1", ContainerID:"", Pod:"coredns-7db6d8ff4d-7t9bj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calied0af949e4e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:39:22.415610 containerd[1622]: 2025-03-17 17:39:22.369 [INFO][4778] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.79.6/32] ContainerID="22971c818cc66552af100fa92e25bf9011a205050ff441b2efbdee21dd4bd701" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7t9bj" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-coredns--7db6d8ff4d--7t9bj-eth0" Mar 17 17:39:22.415610 containerd[1622]: 2025-03-17 17:39:22.369 [INFO][4778] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied0af949e4e ContainerID="22971c818cc66552af100fa92e25bf9011a205050ff441b2efbdee21dd4bd701" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7t9bj" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-coredns--7db6d8ff4d--7t9bj-eth0" Mar 17 17:39:22.415610 containerd[1622]: 2025-03-17 17:39:22.389 [INFO][4778] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="22971c818cc66552af100fa92e25bf9011a205050ff441b2efbdee21dd4bd701" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7t9bj" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-coredns--7db6d8ff4d--7t9bj-eth0" Mar 17 17:39:22.415610 containerd[1622]: 2025-03-17 17:39:22.392 [INFO][4778] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="22971c818cc66552af100fa92e25bf9011a205050ff441b2efbdee21dd4bd701" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7t9bj" 
WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-coredns--7db6d8ff4d--7t9bj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--2--4--e17a7af1b1-k8s-coredns--7db6d8ff4d--7t9bj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e34046eb-2ab6-4ffc-b664-99e06f707e68", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 38, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-2-4-e17a7af1b1", ContainerID:"22971c818cc66552af100fa92e25bf9011a205050ff441b2efbdee21dd4bd701", Pod:"coredns-7db6d8ff4d-7t9bj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calied0af949e4e", MAC:"de:bf:88:ac:ca:d4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:39:22.415610 containerd[1622]: 2025-03-17 17:39:22.408 
[INFO][4778] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="22971c818cc66552af100fa92e25bf9011a205050ff441b2efbdee21dd4bd701" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7t9bj" WorkloadEndpoint="ci--4152--2--2--4--e17a7af1b1-k8s-coredns--7db6d8ff4d--7t9bj-eth0" Mar 17 17:39:22.426462 containerd[1622]: time="2025-03-17T17:39:22.417992228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:39:22.426462 containerd[1622]: time="2025-03-17T17:39:22.418045073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:39:22.426462 containerd[1622]: time="2025-03-17T17:39:22.418057154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:22.426462 containerd[1622]: time="2025-03-17T17:39:22.418143241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:22.451279 containerd[1622]: time="2025-03-17T17:39:22.430120686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:39:22.451279 containerd[1622]: time="2025-03-17T17:39:22.430178211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:39:22.451279 containerd[1622]: time="2025-03-17T17:39:22.430193412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:22.451279 containerd[1622]: time="2025-03-17T17:39:22.430335544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:22.522506 containerd[1622]: time="2025-03-17T17:39:22.521858387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b68c854f8-zw5m5,Uid:e0f85a4a-625a-454c-bfe1-41a08087ea5f,Namespace:calico-system,Attempt:5,} returns sandbox id \"92a6df4bc7e0d16c72e2abee9150a1956b970f9a8422276276a66c32582f4b38\"" Mar 17 17:39:22.543757 containerd[1622]: time="2025-03-17T17:39:22.543357512Z" level=info msg="StartContainer for \"799e3a92edbe2b5c521eac0cb0757fdb08c86b8904b1d6f19dc5a28c08552c69\" returns successfully" Mar 17 17:39:22.623093 containerd[1622]: time="2025-03-17T17:39:22.622039116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:39:22.623093 containerd[1622]: time="2025-03-17T17:39:22.622105002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:39:22.623093 containerd[1622]: time="2025-03-17T17:39:22.622120683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:22.623093 containerd[1622]: time="2025-03-17T17:39:22.622230852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:22.639732 containerd[1622]: time="2025-03-17T17:39:22.639511063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bd6cf464-82tgf,Uid:aad6eeea-6774-44b9-911a-6a172c0b0cc4,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"03fddaf6aaa0c166eb1ca5da66f0454995a2921ae64d5a2cb763e9cc961e96c8\"" Mar 17 17:39:22.645146 containerd[1622]: time="2025-03-17T17:39:22.644852311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bd6cf464-lpb8f,Uid:734e536d-3518-4cef-9dfc-3553408c92a2,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"c824d571639cfc635b907dceef45882e4bac908d3d4a5f370ee9957b474be6d7\"" Mar 17 17:39:22.688392 containerd[1622]: time="2025-03-17T17:39:22.688332881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7t9bj,Uid:e34046eb-2ab6-4ffc-b664-99e06f707e68,Namespace:kube-system,Attempt:5,} returns sandbox id \"22971c818cc66552af100fa92e25bf9011a205050ff441b2efbdee21dd4bd701\"" Mar 17 17:39:22.693658 containerd[1622]: time="2025-03-17T17:39:22.693626646Z" level=info msg="CreateContainer within sandbox \"22971c818cc66552af100fa92e25bf9011a205050ff441b2efbdee21dd4bd701\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:39:22.708821 containerd[1622]: time="2025-03-17T17:39:22.708753835Z" level=info msg="CreateContainer within sandbox \"22971c818cc66552af100fa92e25bf9011a205050ff441b2efbdee21dd4bd701\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a951adc5a23d35474a95ca3c8f7e9277a307c2ef52ba8e722e7464a4eda7e74f\"" Mar 17 17:39:22.712901 containerd[1622]: time="2025-03-17T17:39:22.712588637Z" level=info msg="StartContainer for \"a951adc5a23d35474a95ca3c8f7e9277a307c2ef52ba8e722e7464a4eda7e74f\"" Mar 17 17:39:22.775183 containerd[1622]: time="2025-03-17T17:39:22.775123647Z" level=info msg="StartContainer for 
\"a951adc5a23d35474a95ca3c8f7e9277a307c2ef52ba8e722e7464a4eda7e74f\" returns successfully" Mar 17 17:39:23.175867 systemd-networkd[1244]: cali40312a05a28: Gained IPv6LL Mar 17 17:39:23.176426 systemd-networkd[1244]: calie6ff244126c: Gained IPv6LL Mar 17 17:39:23.310854 kubelet[3064]: I0317 17:39:23.310787 3064 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nwmf6" podStartSLOduration=26.310767954 podStartE2EDuration="26.310767954s" podCreationTimestamp="2025-03-17 17:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:39:23.28902639 +0000 UTC m=+39.699328420" watchObservedRunningTime="2025-03-17 17:39:23.310767954 +0000 UTC m=+39.721069904" Mar 17 17:39:23.376829 kubelet[3064]: I0317 17:39:23.374982 3064 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-7t9bj" podStartSLOduration=26.374961601 podStartE2EDuration="26.374961601s" podCreationTimestamp="2025-03-17 17:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:39:23.350307235 +0000 UTC m=+39.760609185" watchObservedRunningTime="2025-03-17 17:39:23.374961601 +0000 UTC m=+39.785263511" Mar 17 17:39:23.751733 systemd-networkd[1244]: cali49caa81318a: Gained IPv6LL Mar 17 17:39:23.944119 systemd-networkd[1244]: cali0d20b247352: Gained IPv6LL Mar 17 17:39:23.951401 containerd[1622]: time="2025-03-17T17:39:23.951130815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:23.953067 containerd[1622]: time="2025-03-17T17:39:23.953002730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7473801" Mar 17 17:39:23.953981 containerd[1622]: 
time="2025-03-17T17:39:23.953909646Z" level=info msg="ImageCreate event name:\"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:23.956861 containerd[1622]: time="2025-03-17T17:39:23.956823007Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:23.957915 containerd[1622]: time="2025-03-17T17:39:23.957804809Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"8843558\" in 1.642903554s" Mar 17 17:39:23.957915 containerd[1622]: time="2025-03-17T17:39:23.957835011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\"" Mar 17 17:39:23.961037 containerd[1622]: time="2025-03-17T17:39:23.960694769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\"" Mar 17 17:39:23.962279 containerd[1622]: time="2025-03-17T17:39:23.962243337Z" level=info msg="CreateContainer within sandbox \"a764ec7414fefae9b5f7685d0cd6f7844ff466cc95031e4fe2bda29d746b829c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 17 17:39:23.980446 containerd[1622]: time="2025-03-17T17:39:23.980378842Z" level=info msg="CreateContainer within sandbox \"a764ec7414fefae9b5f7685d0cd6f7844ff466cc95031e4fe2bda29d746b829c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b104afb3f534195e9d7089b15d8e616a827c6afd38b6a821d9a9cf7191f81bdb\"" Mar 17 17:39:23.983388 containerd[1622]: 
time="2025-03-17T17:39:23.981941252Z" level=info msg="StartContainer for \"b104afb3f534195e9d7089b15d8e616a827c6afd38b6a821d9a9cf7191f81bdb\"" Mar 17 17:39:24.072473 systemd-networkd[1244]: calied0af949e4e: Gained IPv6LL Mar 17 17:39:24.072793 systemd-networkd[1244]: cali035784f07da: Gained IPv6LL Mar 17 17:39:24.112037 containerd[1622]: time="2025-03-17T17:39:24.110985058Z" level=info msg="StartContainer for \"b104afb3f534195e9d7089b15d8e616a827c6afd38b6a821d9a9cf7191f81bdb\" returns successfully" Mar 17 17:39:26.959775 containerd[1622]: time="2025-03-17T17:39:26.958883366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:26.959775 containerd[1622]: time="2025-03-17T17:39:26.959683830Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.2: active requests=0, bytes read=32560257" Mar 17 17:39:26.960427 containerd[1622]: time="2025-03-17T17:39:26.960388287Z" level=info msg="ImageCreate event name:\"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:26.962800 containerd[1622]: time="2025-03-17T17:39:26.962764117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:26.963564 containerd[1622]: time="2025-03-17T17:39:26.963533099Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" with image id \"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\", size \"33929982\" in 3.002805567s" Mar 17 17:39:26.963705 
containerd[1622]: time="2025-03-17T17:39:26.963686672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" returns image reference \"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\"" Mar 17 17:39:26.965008 containerd[1622]: time="2025-03-17T17:39:26.964980255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 17 17:39:26.987858 containerd[1622]: time="2025-03-17T17:39:26.987544867Z" level=info msg="CreateContainer within sandbox \"92a6df4bc7e0d16c72e2abee9150a1956b970f9a8422276276a66c32582f4b38\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 17 17:39:27.012881 containerd[1622]: time="2025-03-17T17:39:27.012823806Z" level=info msg="CreateContainer within sandbox \"92a6df4bc7e0d16c72e2abee9150a1956b970f9a8422276276a66c32582f4b38\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c0e3fff68b8e075d948b80b8efd36d9044a14203bac6b2dbbd0546378ed4347e\"" Mar 17 17:39:27.014533 containerd[1622]: time="2025-03-17T17:39:27.013738799Z" level=info msg="StartContainer for \"c0e3fff68b8e075d948b80b8efd36d9044a14203bac6b2dbbd0546378ed4347e\"" Mar 17 17:39:27.088850 containerd[1622]: time="2025-03-17T17:39:27.088786682Z" level=info msg="StartContainer for \"c0e3fff68b8e075d948b80b8efd36d9044a14203bac6b2dbbd0546378ed4347e\" returns successfully" Mar 17 17:39:27.347526 update_engine[1610]: I20250317 17:39:27.347398 1610 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 17:39:27.348366 update_engine[1610]: I20250317 17:39:27.347607 1610 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 17:39:27.348366 update_engine[1610]: I20250317 17:39:27.347820 1610 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 17 17:39:27.350739 update_engine[1610]: E20250317 17:39:27.349282 1610 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 17:39:27.351105 update_engine[1610]: I20250317 17:39:27.350788 1610 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 17 17:39:27.351105 update_engine[1610]: I20250317 17:39:27.350812 1610 omaha_request_action.cc:617] Omaha request response: Mar 17 17:39:27.351105 update_engine[1610]: E20250317 17:39:27.350888 1610 omaha_request_action.cc:636] Omaha request network transfer failed. Mar 17 17:39:27.351105 update_engine[1610]: I20250317 17:39:27.350906 1610 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Mar 17 17:39:27.351105 update_engine[1610]: I20250317 17:39:27.350914 1610 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 17 17:39:27.351105 update_engine[1610]: I20250317 17:39:27.350919 1610 update_attempter.cc:306] Processing Done. Mar 17 17:39:27.351105 update_engine[1610]: E20250317 17:39:27.350934 1610 update_attempter.cc:619] Update failed. Mar 17 17:39:27.351105 update_engine[1610]: I20250317 17:39:27.350941 1610 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 17 17:39:27.351105 update_engine[1610]: I20250317 17:39:27.350947 1610 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 17 17:39:27.351105 update_engine[1610]: I20250317 17:39:27.350960 1610 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Mar 17 17:39:27.351105 update_engine[1610]: I20250317 17:39:27.351040 1610 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 17 17:39:27.351105 update_engine[1610]: I20250317 17:39:27.351066 1610 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 17 17:39:27.351105 update_engine[1610]: I20250317 17:39:27.351073 1610 omaha_request_action.cc:272] Request: Mar 17 17:39:27.351105 update_engine[1610]: Mar 17 17:39:27.351105 update_engine[1610]: Mar 17 17:39:27.351105 update_engine[1610]: Mar 17 17:39:27.351105 update_engine[1610]: Mar 17 17:39:27.351105 update_engine[1610]: Mar 17 17:39:27.351105 update_engine[1610]: Mar 17 17:39:27.351531 update_engine[1610]: I20250317 17:39:27.351081 1610 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 17:39:27.351531 update_engine[1610]: I20250317 17:39:27.351236 1610 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 17:39:27.354532 update_engine[1610]: I20250317 17:39:27.352316 1610 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 17 17:39:27.354532 update_engine[1610]: E20250317 17:39:27.352931 1610 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 17:39:27.354532 update_engine[1610]: I20250317 17:39:27.352999 1610 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 17 17:39:27.354532 update_engine[1610]: I20250317 17:39:27.353009 1610 omaha_request_action.cc:617] Omaha request response:
Mar 17 17:39:27.354532 update_engine[1610]: I20250317 17:39:27.353017 1610 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 17 17:39:27.354532 update_engine[1610]: I20250317 17:39:27.353022 1610 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 17 17:39:27.354532 update_engine[1610]: I20250317 17:39:27.353026 1610 update_attempter.cc:306] Processing Done.
Mar 17 17:39:27.354532 update_engine[1610]: I20250317 17:39:27.353034 1610 update_attempter.cc:310] Error event sent.
Mar 17 17:39:27.354532 update_engine[1610]: I20250317 17:39:27.353042 1610 update_check_scheduler.cc:74] Next update check in 48m58s
Mar 17 17:39:27.358206 locksmithd[1649]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 17 17:39:27.358206 locksmithd[1649]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 17 17:39:27.374888 kubelet[3064]: I0317 17:39:27.374445 3064 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6b68c854f8-zw5m5" podStartSLOduration=16.935159709 podStartE2EDuration="21.374426856s" podCreationTimestamp="2025-03-17 17:39:06 +0000 UTC" firstStartedPulling="2025-03-17 17:39:22.525292475 +0000 UTC m=+38.935594425" lastFinishedPulling="2025-03-17 17:39:26.964559622 +0000 UTC m=+43.374861572" observedRunningTime="2025-03-17 17:39:27.374194518 +0000 UTC m=+43.784496468" watchObservedRunningTime="2025-03-17 17:39:27.374426856 +0000 UTC m=+43.784728806"
Mar 17 17:39:29.915461 containerd[1622]: time="2025-03-17T17:39:29.915416416Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:39:29.917386 containerd[1622]: time="2025-03-17T17:39:29.917284441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=40253267"
Mar 17 17:39:29.918326 containerd[1622]: time="2025-03-17T17:39:29.918269598Z" level=info msg="ImageCreate event name:\"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:39:29.922418 containerd[1622]: time="2025-03-17T17:39:29.921575135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:39:29.922539 containerd[1622]: time="2025-03-17T17:39:29.922426361Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"41623040\" in 2.95736574s"
Mar 17 17:39:29.922539 containerd[1622]: time="2025-03-17T17:39:29.922454524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\""
Mar 17 17:39:29.925914 containerd[1622]: time="2025-03-17T17:39:29.925656013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\""
Mar 17 17:39:29.928708 containerd[1622]: time="2025-03-17T17:39:29.928567959Z" level=info msg="CreateContainer within sandbox \"03fddaf6aaa0c166eb1ca5da66f0454995a2921ae64d5a2cb763e9cc961e96c8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 17 17:39:29.949181 containerd[1622]: time="2025-03-17T17:39:29.949136401Z" level=info msg="CreateContainer within sandbox \"03fddaf6aaa0c166eb1ca5da66f0454995a2921ae64d5a2cb763e9cc961e96c8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"23687c26b27843633cb142c4f31a77beca9ea4f4c3b9638891dfa8115135cf8d\""
Mar 17 17:39:29.950011 containerd[1622]: time="2025-03-17T17:39:29.949983027Z" level=info msg="StartContainer for \"23687c26b27843633cb142c4f31a77beca9ea4f4c3b9638891dfa8115135cf8d\""
Mar 17 17:39:30.030859 containerd[1622]: time="2025-03-17T17:39:30.030801575Z" level=info msg="StartContainer for \"23687c26b27843633cb142c4f31a77beca9ea4f4c3b9638891dfa8115135cf8d\" returns successfully"
Mar 17 17:39:30.337129 containerd[1622]: time="2025-03-17T17:39:30.337077063Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:39:30.339484 containerd[1622]: time="2025-03-17T17:39:30.339434684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=77"
Mar 17 17:39:30.341826 containerd[1622]: time="2025-03-17T17:39:30.341740782Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"41623040\" in 416.047406ms"
Mar 17 17:39:30.341889 containerd[1622]: time="2025-03-17T17:39:30.341830349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\""
Mar 17 17:39:30.344016 containerd[1622]: time="2025-03-17T17:39:30.343977035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\""
Mar 17 17:39:30.346563 containerd[1622]: time="2025-03-17T17:39:30.345097041Z" level=info msg="CreateContainer within sandbox \"c824d571639cfc635b907dceef45882e4bac908d3d4a5f370ee9957b474be6d7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 17 17:39:30.396380 kubelet[3064]: I0317 17:39:30.393299 3064 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8bd6cf464-82tgf" podStartSLOduration=17.113548412 podStartE2EDuration="24.393280515s" podCreationTimestamp="2025-03-17 17:39:06 +0000 UTC" firstStartedPulling="2025-03-17 17:39:22.644049284 +0000 UTC m=+39.054351194" lastFinishedPulling="2025-03-17 17:39:29.923781347 +0000 UTC m=+46.334083297" observedRunningTime="2025-03-17 17:39:30.39322315 +0000 UTC m=+46.803525100" watchObservedRunningTime="2025-03-17 17:39:30.393280515 +0000 UTC m=+46.803582465"
Mar 17 17:39:30.400907 containerd[1622]: time="2025-03-17T17:39:30.399503754Z" level=info msg="CreateContainer within sandbox \"c824d571639cfc635b907dceef45882e4bac908d3d4a5f370ee9957b474be6d7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"40d571d3e624ab1f5a6f020c6851c0e8860f2e87700f58037f3d1e3553a6208f\""
Mar 17 17:39:30.402936 containerd[1622]: time="2025-03-17T17:39:30.402890015Z" level=info msg="StartContainer for \"40d571d3e624ab1f5a6f020c6851c0e8860f2e87700f58037f3d1e3553a6208f\""
Mar 17 17:39:30.548485 containerd[1622]: time="2025-03-17T17:39:30.548270021Z" level=info msg="StartContainer for \"40d571d3e624ab1f5a6f020c6851c0e8860f2e87700f58037f3d1e3553a6208f\" returns successfully"
Mar 17 17:39:30.602609 kubelet[3064]: I0317 17:39:30.602159 3064 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 17 17:39:31.392621 kubelet[3064]: I0317 17:39:31.392548 3064 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 17 17:39:31.601372 kernel: bpftool[5822]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Mar 17 17:39:32.121821 containerd[1622]: time="2025-03-17T17:39:32.120612020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:39:32.123862 containerd[1622]: time="2025-03-17T17:39:32.123811942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13121717"
Mar 17 17:39:32.126088 containerd[1622]: time="2025-03-17T17:39:32.126044391Z" level=info msg="ImageCreate event name:\"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:39:32.129628 containerd[1622]: time="2025-03-17T17:39:32.129593939Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:39:32.132609 containerd[1622]: time="2025-03-17T17:39:32.132478197Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"14491426\" in 1.78845664s"
Mar 17 17:39:32.132609 containerd[1622]: time="2025-03-17T17:39:32.132516520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\""
Mar 17 17:39:32.136229 containerd[1622]: time="2025-03-17T17:39:32.136081230Z" level=info msg="CreateContainer within sandbox \"a764ec7414fefae9b5f7685d0cd6f7844ff466cc95031e4fe2bda29d746b829c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 17 17:39:32.158643 containerd[1622]: time="2025-03-17T17:39:32.158484284Z" level=info msg="CreateContainer within sandbox \"a764ec7414fefae9b5f7685d0cd6f7844ff466cc95031e4fe2bda29d746b829c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a685e4e0efac6e2e02e85783a5c9df6b21d0ec375a3d7a30bcb93008d1a60ec3\""
Mar 17 17:39:32.160509 containerd[1622]: time="2025-03-17T17:39:32.160170931Z" level=info msg="StartContainer for \"a685e4e0efac6e2e02e85783a5c9df6b21d0ec375a3d7a30bcb93008d1a60ec3\""
Mar 17 17:39:32.258303 systemd-networkd[1244]: vxlan.calico: Link UP
Mar 17 17:39:32.258308 systemd-networkd[1244]: vxlan.calico: Gained carrier
Mar 17 17:39:32.293445 containerd[1622]: time="2025-03-17T17:39:32.293249835Z" level=info msg="StartContainer for \"a685e4e0efac6e2e02e85783a5c9df6b21d0ec375a3d7a30bcb93008d1a60ec3\" returns successfully"
Mar 17 17:39:32.400194 kubelet[3064]: I0317 17:39:32.400089 3064 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 17 17:39:32.414730 kubelet[3064]: I0317 17:39:32.414664 3064 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8bd6cf464-lpb8f" podStartSLOduration=18.720113908 podStartE2EDuration="26.414645814s" podCreationTimestamp="2025-03-17 17:39:06 +0000 UTC" firstStartedPulling="2025-03-17 17:39:22.648185831 +0000 UTC m=+39.058487741" lastFinishedPulling="2025-03-17 17:39:30.342717697 +0000 UTC m=+46.753019647" observedRunningTime="2025-03-17 17:39:31.408894775 +0000 UTC m=+47.819196685" watchObservedRunningTime="2025-03-17 17:39:32.414645814 +0000 UTC m=+48.824947764"
Mar 17 17:39:32.829175 kubelet[3064]: I0317 17:39:32.829132 3064 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 17 17:39:32.829175 kubelet[3064]: I0317 17:39:32.829185 3064 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 17 17:39:34.121248 systemd-networkd[1244]: vxlan.calico: Gained IPv6LL
Mar 17 17:39:39.362793 kubelet[3064]: I0317 17:39:39.362413 3064 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 17 17:39:39.404709 kubelet[3064]: I0317 17:39:39.402023 3064 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-sjhlt" podStartSLOduration=23.58044927 podStartE2EDuration="33.40199497s" podCreationTimestamp="2025-03-17 17:39:06 +0000 UTC" firstStartedPulling="2025-03-17 17:39:22.31281844 +0000 UTC m=+38.723120390" lastFinishedPulling="2025-03-17 17:39:32.13436418 +0000 UTC m=+48.544666090" observedRunningTime="2025-03-17 17:39:32.416066922 +0000 UTC m=+48.826368872" watchObservedRunningTime="2025-03-17 17:39:39.40199497 +0000 UTC m=+55.812296960"
Mar 17 17:39:43.718362 containerd[1622]: time="2025-03-17T17:39:43.717298173Z" level=info msg="StopPodSandbox for \"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\""
Mar 17 17:39:43.718362 containerd[1622]: time="2025-03-17T17:39:43.717506147Z" level=info msg="TearDown network for sandbox \"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\" successfully"
Mar 17 17:39:43.718362 containerd[1622]: time="2025-03-17T17:39:43.717520989Z" level=info msg="StopPodSandbox for \"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\" returns successfully"
Mar 17 17:39:43.718362 containerd[1622]: time="2025-03-17T17:39:43.718208596Z" level=info msg="RemovePodSandbox for \"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\""
Mar 17 17:39:43.718362 containerd[1622]: time="2025-03-17T17:39:43.718239798Z" level=info msg="Forcibly stopping sandbox \"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\""
Mar 17 17:39:43.719001 containerd[1622]: time="2025-03-17T17:39:43.718365527Z" level=info msg="TearDown network for sandbox \"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\" successfully"
Mar 17 17:39:43.723540 containerd[1622]: time="2025-03-17T17:39:43.723422596Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:39:43.723540 containerd[1622]: time="2025-03-17T17:39:43.723490121Z" level=info msg="RemovePodSandbox \"4f830651efec8e8cb2c71546c78eb9d4d1634e0a19d4c12e79a29363d2ec2f14\" returns successfully"
Mar 17 17:39:43.724327 containerd[1622]: time="2025-03-17T17:39:43.723915150Z" level=info msg="StopPodSandbox for \"e630b8a164e4174c51d673bdf89de930bf44c4afd0b5098f340af1c3a0fb1923\""
Mar 17 17:39:43.724327 containerd[1622]: time="2025-03-17T17:39:43.724008196Z" level=info msg="TearDown network for sandbox \"e630b8a164e4174c51d673bdf89de930bf44c4afd0b5098f340af1c3a0fb1923\" successfully"
Mar 17 17:39:43.724327 containerd[1622]: time="2025-03-17T17:39:43.724017597Z" level=info msg="StopPodSandbox for \"e630b8a164e4174c51d673bdf89de930bf44c4afd0b5098f340af1c3a0fb1923\" returns successfully"
Mar 17 17:39:43.724327 containerd[1622]: time="2025-03-17T17:39:43.724303537Z" level=info msg="RemovePodSandbox for \"e630b8a164e4174c51d673bdf89de930bf44c4afd0b5098f340af1c3a0fb1923\""
Mar 17 17:39:43.724327 containerd[1622]: time="2025-03-17T17:39:43.724326978Z" level=info msg="Forcibly stopping sandbox \"e630b8a164e4174c51d673bdf89de930bf44c4afd0b5098f340af1c3a0fb1923\""
Mar 17 17:39:43.724489 containerd[1622]: time="2025-03-17T17:39:43.724415265Z" level=info msg="TearDown network for sandbox \"e630b8a164e4174c51d673bdf89de930bf44c4afd0b5098f340af1c3a0fb1923\" successfully"
Mar 17 17:39:43.728457 containerd[1622]: time="2025-03-17T17:39:43.728410020Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e630b8a164e4174c51d673bdf89de930bf44c4afd0b5098f340af1c3a0fb1923\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:39:43.729183 containerd[1622]: time="2025-03-17T17:39:43.728468584Z" level=info msg="RemovePodSandbox \"e630b8a164e4174c51d673bdf89de930bf44c4afd0b5098f340af1c3a0fb1923\" returns successfully"
Mar 17 17:39:43.729183 containerd[1622]: time="2025-03-17T17:39:43.728978220Z" level=info msg="StopPodSandbox for \"e538ffa7ddd1f87cb357dc229aa52a9bb822c1d9c6fd442bec82a81448f398fa\""
Mar 17 17:39:43.729183 containerd[1622]: time="2025-03-17T17:39:43.729092428Z" level=info msg="TearDown network for sandbox \"e538ffa7ddd1f87cb357dc229aa52a9bb822c1d9c6fd442bec82a81448f398fa\" successfully"
Mar 17 17:39:43.729183 containerd[1622]: time="2025-03-17T17:39:43.729107429Z" level=info msg="StopPodSandbox for \"e538ffa7ddd1f87cb357dc229aa52a9bb822c1d9c6fd442bec82a81448f398fa\" returns successfully"
Mar 17 17:39:43.730026 containerd[1622]: time="2025-03-17T17:39:43.729836839Z" level=info msg="RemovePodSandbox for \"e538ffa7ddd1f87cb357dc229aa52a9bb822c1d9c6fd442bec82a81448f398fa\""
Mar 17 17:39:43.730026 containerd[1622]: time="2025-03-17T17:39:43.729871721Z" level=info msg="Forcibly stopping sandbox \"e538ffa7ddd1f87cb357dc229aa52a9bb822c1d9c6fd442bec82a81448f398fa\""
Mar 17 17:39:43.730026 containerd[1622]: time="2025-03-17T17:39:43.729944686Z" level=info msg="TearDown network for sandbox \"e538ffa7ddd1f87cb357dc229aa52a9bb822c1d9c6fd442bec82a81448f398fa\" successfully"
Mar 17 17:39:43.734251 containerd[1622]: time="2025-03-17T17:39:43.734036249Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e538ffa7ddd1f87cb357dc229aa52a9bb822c1d9c6fd442bec82a81448f398fa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:39:43.734251 containerd[1622]: time="2025-03-17T17:39:43.734101053Z" level=info msg="RemovePodSandbox \"e538ffa7ddd1f87cb357dc229aa52a9bb822c1d9c6fd442bec82a81448f398fa\" returns successfully"
Mar 17 17:39:43.734861 containerd[1622]: time="2025-03-17T17:39:43.734592047Z" level=info msg="StopPodSandbox for \"44b779c8beee74c65914b58df2f4cb80cd7c544d1f66306793961b4d2a19a18b\""
Mar 17 17:39:43.734861 containerd[1622]: time="2025-03-17T17:39:43.734731377Z" level=info msg="TearDown network for sandbox \"44b779c8beee74c65914b58df2f4cb80cd7c544d1f66306793961b4d2a19a18b\" successfully"
Mar 17 17:39:43.734861 containerd[1622]: time="2025-03-17T17:39:43.734743458Z" level=info msg="StopPodSandbox for \"44b779c8beee74c65914b58df2f4cb80cd7c544d1f66306793961b4d2a19a18b\" returns successfully"
Mar 17 17:39:43.735334 containerd[1622]: time="2025-03-17T17:39:43.735307897Z" level=info msg="RemovePodSandbox for \"44b779c8beee74c65914b58df2f4cb80cd7c544d1f66306793961b4d2a19a18b\""
Mar 17 17:39:43.735334 containerd[1622]: time="2025-03-17T17:39:43.735360780Z" level=info msg="Forcibly stopping sandbox \"44b779c8beee74c65914b58df2f4cb80cd7c544d1f66306793961b4d2a19a18b\""
Mar 17 17:39:43.735472 containerd[1622]: time="2025-03-17T17:39:43.735443986Z" level=info msg="TearDown network for sandbox \"44b779c8beee74c65914b58df2f4cb80cd7c544d1f66306793961b4d2a19a18b\" successfully"
Mar 17 17:39:43.745953 containerd[1622]: time="2025-03-17T17:39:43.745910589Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"44b779c8beee74c65914b58df2f4cb80cd7c544d1f66306793961b4d2a19a18b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:39:43.745953 containerd[1622]: time="2025-03-17T17:39:43.745982434Z" level=info msg="RemovePodSandbox \"44b779c8beee74c65914b58df2f4cb80cd7c544d1f66306793961b4d2a19a18b\" returns successfully"
Mar 17 17:39:43.747178 containerd[1622]: time="2025-03-17T17:39:43.746662881Z" level=info msg="StopPodSandbox for \"9984047d28e641247bc24e8439455388d5df3f9d1595a92d3400f6dfcd1cedf5\""
Mar 17 17:39:43.747178 containerd[1622]: time="2025-03-17T17:39:43.746834733Z" level=info msg="TearDown network for sandbox \"9984047d28e641247bc24e8439455388d5df3f9d1595a92d3400f6dfcd1cedf5\" successfully"
Mar 17 17:39:43.747178 containerd[1622]: time="2025-03-17T17:39:43.746852894Z" level=info msg="StopPodSandbox for \"9984047d28e641247bc24e8439455388d5df3f9d1595a92d3400f6dfcd1cedf5\" returns successfully"
Mar 17 17:39:43.749242 containerd[1622]: time="2025-03-17T17:39:43.748453685Z" level=info msg="RemovePodSandbox for \"9984047d28e641247bc24e8439455388d5df3f9d1595a92d3400f6dfcd1cedf5\""
Mar 17 17:39:43.749242 containerd[1622]: time="2025-03-17T17:39:43.748502528Z" level=info msg="Forcibly stopping sandbox \"9984047d28e641247bc24e8439455388d5df3f9d1595a92d3400f6dfcd1cedf5\""
Mar 17 17:39:43.749242 containerd[1622]: time="2025-03-17T17:39:43.748632297Z" level=info msg="TearDown network for sandbox \"9984047d28e641247bc24e8439455388d5df3f9d1595a92d3400f6dfcd1cedf5\" successfully"
Mar 17 17:39:43.752699 containerd[1622]: time="2025-03-17T17:39:43.752656695Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9984047d28e641247bc24e8439455388d5df3f9d1595a92d3400f6dfcd1cedf5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:39:43.752909 containerd[1622]: time="2025-03-17T17:39:43.752887271Z" level=info msg="RemovePodSandbox \"9984047d28e641247bc24e8439455388d5df3f9d1595a92d3400f6dfcd1cedf5\" returns successfully"
Mar 17 17:39:43.753549 containerd[1622]: time="2025-03-17T17:39:43.753514434Z" level=info msg="StopPodSandbox for \"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\""
Mar 17 17:39:43.753778 containerd[1622]: time="2025-03-17T17:39:43.753721168Z" level=info msg="TearDown network for sandbox \"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\" successfully"
Mar 17 17:39:43.753778 containerd[1622]: time="2025-03-17T17:39:43.753745890Z" level=info msg="StopPodSandbox for \"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\" returns successfully"
Mar 17 17:39:43.755005 containerd[1622]: time="2025-03-17T17:39:43.754135597Z" level=info msg="RemovePodSandbox for \"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\""
Mar 17 17:39:43.755005 containerd[1622]: time="2025-03-17T17:39:43.754159759Z" level=info msg="Forcibly stopping sandbox \"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\""
Mar 17 17:39:43.755005 containerd[1622]: time="2025-03-17T17:39:43.754218083Z" level=info msg="TearDown network for sandbox \"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\" successfully"
Mar 17 17:39:43.757911 containerd[1622]: time="2025-03-17T17:39:43.757869615Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:39:43.758054 containerd[1622]: time="2025-03-17T17:39:43.758035146Z" level=info msg="RemovePodSandbox \"c8b5252eb029ab93a272141f49e9e51a26206507c1f586dff596bdcbc1cc5bad\" returns successfully"
Mar 17 17:39:43.758828 containerd[1622]: time="2025-03-17T17:39:43.758804839Z" level=info msg="StopPodSandbox for \"97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea\""
Mar 17 17:39:43.759005 containerd[1622]: time="2025-03-17T17:39:43.758989252Z" level=info msg="TearDown network for sandbox \"97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea\" successfully"
Mar 17 17:39:43.759068 containerd[1622]: time="2025-03-17T17:39:43.759055977Z" level=info msg="StopPodSandbox for \"97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea\" returns successfully"
Mar 17 17:39:43.759479 containerd[1622]: time="2025-03-17T17:39:43.759458124Z" level=info msg="RemovePodSandbox for \"97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea\""
Mar 17 17:39:43.759650 containerd[1622]: time="2025-03-17T17:39:43.759625136Z" level=info msg="Forcibly stopping sandbox \"97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea\""
Mar 17 17:39:43.759786 containerd[1622]: time="2025-03-17T17:39:43.759770146Z" level=info msg="TearDown network for sandbox \"97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea\" successfully"
Mar 17 17:39:43.762463 containerd[1622]: time="2025-03-17T17:39:43.762434010Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:39:43.762629 containerd[1622]: time="2025-03-17T17:39:43.762597061Z" level=info msg="RemovePodSandbox \"97ef330a3cf0e3c6e572a3d402c64cb1c5f8fa2f22c80ae750a59d7adeb347ea\" returns successfully"
Mar 17 17:39:43.763319 containerd[1622]: time="2025-03-17T17:39:43.763098776Z" level=info msg="StopPodSandbox for \"d00034dc0b3d8c89e0f49a624a46b2c0ad634bac7e7b2ddceaca6d2627118bf6\""
Mar 17 17:39:43.763319 containerd[1622]: time="2025-03-17T17:39:43.763187502Z" level=info msg="TearDown network for sandbox \"d00034dc0b3d8c89e0f49a624a46b2c0ad634bac7e7b2ddceaca6d2627118bf6\" successfully"
Mar 17 17:39:43.763319 containerd[1622]: time="2025-03-17T17:39:43.763197063Z" level=info msg="StopPodSandbox for \"d00034dc0b3d8c89e0f49a624a46b2c0ad634bac7e7b2ddceaca6d2627118bf6\" returns successfully"
Mar 17 17:39:43.763635 containerd[1622]: time="2025-03-17T17:39:43.763613291Z" level=info msg="RemovePodSandbox for \"d00034dc0b3d8c89e0f49a624a46b2c0ad634bac7e7b2ddceaca6d2627118bf6\""
Mar 17 17:39:43.763685 containerd[1622]: time="2025-03-17T17:39:43.763642253Z" level=info msg="Forcibly stopping sandbox \"d00034dc0b3d8c89e0f49a624a46b2c0ad634bac7e7b2ddceaca6d2627118bf6\""
Mar 17 17:39:43.763724 containerd[1622]: time="2025-03-17T17:39:43.763709778Z" level=info msg="TearDown network for sandbox \"d00034dc0b3d8c89e0f49a624a46b2c0ad634bac7e7b2ddceaca6d2627118bf6\" successfully"
Mar 17 17:39:43.767419 containerd[1622]: time="2025-03-17T17:39:43.767376951Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d00034dc0b3d8c89e0f49a624a46b2c0ad634bac7e7b2ddceaca6d2627118bf6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:39:43.767499 containerd[1622]: time="2025-03-17T17:39:43.767445036Z" level=info msg="RemovePodSandbox \"d00034dc0b3d8c89e0f49a624a46b2c0ad634bac7e7b2ddceaca6d2627118bf6\" returns successfully"
Mar 17 17:39:43.768313 containerd[1622]: time="2025-03-17T17:39:43.768043757Z" level=info msg="StopPodSandbox for \"d0b7f4a7e9b264a69e8d963ebb6cc97ba873c309408893c5857f0a90a6d3cfec\""
Mar 17 17:39:43.768313 containerd[1622]: time="2025-03-17T17:39:43.768135284Z" level=info msg="TearDown network for sandbox \"d0b7f4a7e9b264a69e8d963ebb6cc97ba873c309408893c5857f0a90a6d3cfec\" successfully"
Mar 17 17:39:43.768313 containerd[1622]: time="2025-03-17T17:39:43.768145084Z" level=info msg="StopPodSandbox for \"d0b7f4a7e9b264a69e8d963ebb6cc97ba873c309408893c5857f0a90a6d3cfec\" returns successfully"
Mar 17 17:39:43.769955 containerd[1622]: time="2025-03-17T17:39:43.768665440Z" level=info msg="RemovePodSandbox for \"d0b7f4a7e9b264a69e8d963ebb6cc97ba873c309408893c5857f0a90a6d3cfec\""
Mar 17 17:39:43.769955 containerd[1622]: time="2025-03-17T17:39:43.768694242Z" level=info msg="Forcibly stopping sandbox \"d0b7f4a7e9b264a69e8d963ebb6cc97ba873c309408893c5857f0a90a6d3cfec\""
Mar 17 17:39:43.769955 containerd[1622]: time="2025-03-17T17:39:43.768782368Z" level=info msg="TearDown network for sandbox \"d0b7f4a7e9b264a69e8d963ebb6cc97ba873c309408893c5857f0a90a6d3cfec\" successfully"
Mar 17 17:39:43.773230 containerd[1622]: time="2025-03-17T17:39:43.773189993Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d0b7f4a7e9b264a69e8d963ebb6cc97ba873c309408893c5857f0a90a6d3cfec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:39:43.773319 containerd[1622]: time="2025-03-17T17:39:43.773255237Z" level=info msg="RemovePodSandbox \"d0b7f4a7e9b264a69e8d963ebb6cc97ba873c309408893c5857f0a90a6d3cfec\" returns successfully"
Mar 17 17:39:43.773728 containerd[1622]: time="2025-03-17T17:39:43.773696148Z" level=info msg="StopPodSandbox for \"7ff87e5658ea4718a5230faecc19356caf06f791a1fd4911a1267031e5e684ca\""
Mar 17 17:39:43.773871 containerd[1622]: time="2025-03-17T17:39:43.773850198Z" level=info msg="TearDown network for sandbox \"7ff87e5658ea4718a5230faecc19356caf06f791a1fd4911a1267031e5e684ca\" successfully"
Mar 17 17:39:43.773871 containerd[1622]: time="2025-03-17T17:39:43.773868879Z" level=info msg="StopPodSandbox for \"7ff87e5658ea4718a5230faecc19356caf06f791a1fd4911a1267031e5e684ca\" returns successfully"
Mar 17 17:39:43.775360 containerd[1622]: time="2025-03-17T17:39:43.774145059Z" level=info msg="RemovePodSandbox for \"7ff87e5658ea4718a5230faecc19356caf06f791a1fd4911a1267031e5e684ca\""
Mar 17 17:39:43.775360 containerd[1622]: time="2025-03-17T17:39:43.774171980Z" level=info msg="Forcibly stopping sandbox \"7ff87e5658ea4718a5230faecc19356caf06f791a1fd4911a1267031e5e684ca\""
Mar 17 17:39:43.775360 containerd[1622]: time="2025-03-17T17:39:43.774229344Z" level=info msg="TearDown network for sandbox \"7ff87e5658ea4718a5230faecc19356caf06f791a1fd4911a1267031e5e684ca\" successfully"
Mar 17 17:39:43.779784 containerd[1622]: time="2025-03-17T17:39:43.779734805Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7ff87e5658ea4718a5230faecc19356caf06f791a1fd4911a1267031e5e684ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:39:43.780444 containerd[1622]: time="2025-03-17T17:39:43.780405731Z" level=info msg="RemovePodSandbox \"7ff87e5658ea4718a5230faecc19356caf06f791a1fd4911a1267031e5e684ca\" returns successfully"
Mar 17 17:39:43.780909 containerd[1622]: time="2025-03-17T17:39:43.780885724Z" level=info msg="StopPodSandbox for \"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\""
Mar 17 17:39:43.781083 containerd[1622]: time="2025-03-17T17:39:43.781069537Z" level=info msg="TearDown network for sandbox \"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\" successfully"
Mar 17 17:39:43.781401 containerd[1622]: time="2025-03-17T17:39:43.781384718Z" level=info msg="StopPodSandbox for \"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\" returns successfully"
Mar 17 17:39:43.781870 containerd[1622]: time="2025-03-17T17:39:43.781839510Z" level=info msg="RemovePodSandbox for \"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\""
Mar 17 17:39:43.781930 containerd[1622]: time="2025-03-17T17:39:43.781905074Z" level=info msg="Forcibly stopping sandbox \"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\""
Mar 17 17:39:43.782005 containerd[1622]: time="2025-03-17T17:39:43.781986480Z" level=info msg="TearDown network for sandbox \"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\" successfully"
Mar 17 17:39:43.785656 containerd[1622]: time="2025-03-17T17:39:43.785596289Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:39:43.785740 containerd[1622]: time="2025-03-17T17:39:43.785685695Z" level=info msg="RemovePodSandbox \"f99c0214f32254f1e4b6d8439872d9bcab4c08ff123904cb82c6e973dfbc7e2f\" returns successfully"
Mar 17 17:39:43.786101 containerd[1622]: time="2025-03-17T17:39:43.786075602Z" level=info msg="StopPodSandbox for \"e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231\""
Mar 17 17:39:43.786519 containerd[1622]: time="2025-03-17T17:39:43.786493111Z" level=info msg="TearDown network for sandbox \"e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231\" successfully"
Mar 17 17:39:43.786637 containerd[1622]: time="2025-03-17T17:39:43.786618360Z" level=info msg="StopPodSandbox for \"e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231\" returns successfully"
Mar 17 17:39:43.787406 containerd[1622]: time="2025-03-17T17:39:43.787215961Z" level=info msg="RemovePodSandbox for \"e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231\""
Mar 17 17:39:43.788380 containerd[1622]: time="2025-03-17T17:39:43.787638110Z" level=info msg="Forcibly stopping sandbox \"e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231\""
Mar 17 17:39:43.788380 containerd[1622]: time="2025-03-17T17:39:43.787737437Z" level=info msg="TearDown network for sandbox \"e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231\" successfully"
Mar 17 17:39:43.792648 containerd[1622]: time="2025-03-17T17:39:43.792551050Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:39:43.792648 containerd[1622]: time="2025-03-17T17:39:43.792617774Z" level=info msg="RemovePodSandbox \"e5c476207371c31922d91cbf514d77f9b15480e6611523bbc9b25716d1a7d231\" returns successfully" Mar 17 17:39:43.793218 containerd[1622]: time="2025-03-17T17:39:43.793190254Z" level=info msg="StopPodSandbox for \"67420eb2bcee611ccb5de8fd0908fb4044616f286ac4383b07eb1b29b84cb4b0\"" Mar 17 17:39:43.793797 containerd[1622]: time="2025-03-17T17:39:43.793776414Z" level=info msg="TearDown network for sandbox \"67420eb2bcee611ccb5de8fd0908fb4044616f286ac4383b07eb1b29b84cb4b0\" successfully" Mar 17 17:39:43.794036 containerd[1622]: time="2025-03-17T17:39:43.793888342Z" level=info msg="StopPodSandbox for \"67420eb2bcee611ccb5de8fd0908fb4044616f286ac4383b07eb1b29b84cb4b0\" returns successfully" Mar 17 17:39:43.794673 containerd[1622]: time="2025-03-17T17:39:43.794305091Z" level=info msg="RemovePodSandbox for \"67420eb2bcee611ccb5de8fd0908fb4044616f286ac4383b07eb1b29b84cb4b0\"" Mar 17 17:39:43.794942 containerd[1622]: time="2025-03-17T17:39:43.794915773Z" level=info msg="Forcibly stopping sandbox \"67420eb2bcee611ccb5de8fd0908fb4044616f286ac4383b07eb1b29b84cb4b0\"" Mar 17 17:39:43.795469 containerd[1622]: time="2025-03-17T17:39:43.795076224Z" level=info msg="TearDown network for sandbox \"67420eb2bcee611ccb5de8fd0908fb4044616f286ac4383b07eb1b29b84cb4b0\" successfully" Mar 17 17:39:43.798254 containerd[1622]: time="2025-03-17T17:39:43.798132235Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"67420eb2bcee611ccb5de8fd0908fb4044616f286ac4383b07eb1b29b84cb4b0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:43.798254 containerd[1622]: time="2025-03-17T17:39:43.798194199Z" level=info msg="RemovePodSandbox \"67420eb2bcee611ccb5de8fd0908fb4044616f286ac4383b07eb1b29b84cb4b0\" returns successfully" Mar 17 17:39:43.799176 containerd[1622]: time="2025-03-17T17:39:43.798805841Z" level=info msg="StopPodSandbox for \"fa28a56aa566b6cc3b377df4c0f542a82e7251867b1ce992a022a43e5ff34166\"" Mar 17 17:39:43.799176 containerd[1622]: time="2025-03-17T17:39:43.798899208Z" level=info msg="TearDown network for sandbox \"fa28a56aa566b6cc3b377df4c0f542a82e7251867b1ce992a022a43e5ff34166\" successfully" Mar 17 17:39:43.799176 containerd[1622]: time="2025-03-17T17:39:43.798910049Z" level=info msg="StopPodSandbox for \"fa28a56aa566b6cc3b377df4c0f542a82e7251867b1ce992a022a43e5ff34166\" returns successfully" Mar 17 17:39:43.799709 containerd[1622]: time="2025-03-17T17:39:43.799564734Z" level=info msg="RemovePodSandbox for \"fa28a56aa566b6cc3b377df4c0f542a82e7251867b1ce992a022a43e5ff34166\"" Mar 17 17:39:43.799709 containerd[1622]: time="2025-03-17T17:39:43.799593536Z" level=info msg="Forcibly stopping sandbox \"fa28a56aa566b6cc3b377df4c0f542a82e7251867b1ce992a022a43e5ff34166\"" Mar 17 17:39:43.799709 containerd[1622]: time="2025-03-17T17:39:43.799663741Z" level=info msg="TearDown network for sandbox \"fa28a56aa566b6cc3b377df4c0f542a82e7251867b1ce992a022a43e5ff34166\" successfully" Mar 17 17:39:43.804065 containerd[1622]: time="2025-03-17T17:39:43.803929515Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fa28a56aa566b6cc3b377df4c0f542a82e7251867b1ce992a022a43e5ff34166\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:43.804065 containerd[1622]: time="2025-03-17T17:39:43.803998160Z" level=info msg="RemovePodSandbox \"fa28a56aa566b6cc3b377df4c0f542a82e7251867b1ce992a022a43e5ff34166\" returns successfully" Mar 17 17:39:43.804973 containerd[1622]: time="2025-03-17T17:39:43.804712769Z" level=info msg="StopPodSandbox for \"96ca9371daa82d17254d5f9d50de8c23d4dcfcccf023cefa4cb53f46f3296e2d\"" Mar 17 17:39:43.805476 containerd[1622]: time="2025-03-17T17:39:43.805114997Z" level=info msg="TearDown network for sandbox \"96ca9371daa82d17254d5f9d50de8c23d4dcfcccf023cefa4cb53f46f3296e2d\" successfully" Mar 17 17:39:43.805476 containerd[1622]: time="2025-03-17T17:39:43.805138959Z" level=info msg="StopPodSandbox for \"96ca9371daa82d17254d5f9d50de8c23d4dcfcccf023cefa4cb53f46f3296e2d\" returns successfully" Mar 17 17:39:43.806329 containerd[1622]: time="2025-03-17T17:39:43.805723679Z" level=info msg="RemovePodSandbox for \"96ca9371daa82d17254d5f9d50de8c23d4dcfcccf023cefa4cb53f46f3296e2d\"" Mar 17 17:39:43.806329 containerd[1622]: time="2025-03-17T17:39:43.805790324Z" level=info msg="Forcibly stopping sandbox \"96ca9371daa82d17254d5f9d50de8c23d4dcfcccf023cefa4cb53f46f3296e2d\"" Mar 17 17:39:43.806329 containerd[1622]: time="2025-03-17T17:39:43.805896051Z" level=info msg="TearDown network for sandbox \"96ca9371daa82d17254d5f9d50de8c23d4dcfcccf023cefa4cb53f46f3296e2d\" successfully" Mar 17 17:39:43.810647 containerd[1622]: time="2025-03-17T17:39:43.810317516Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"96ca9371daa82d17254d5f9d50de8c23d4dcfcccf023cefa4cb53f46f3296e2d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:43.810647 containerd[1622]: time="2025-03-17T17:39:43.810393282Z" level=info msg="RemovePodSandbox \"96ca9371daa82d17254d5f9d50de8c23d4dcfcccf023cefa4cb53f46f3296e2d\" returns successfully" Mar 17 17:39:43.810870 containerd[1622]: time="2025-03-17T17:39:43.810793109Z" level=info msg="StopPodSandbox for \"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\"" Mar 17 17:39:43.810943 containerd[1622]: time="2025-03-17T17:39:43.810911717Z" level=info msg="TearDown network for sandbox \"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\" successfully" Mar 17 17:39:43.810943 containerd[1622]: time="2025-03-17T17:39:43.810928319Z" level=info msg="StopPodSandbox for \"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\" returns successfully" Mar 17 17:39:43.812055 containerd[1622]: time="2025-03-17T17:39:43.811239380Z" level=info msg="RemovePodSandbox for \"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\"" Mar 17 17:39:43.812055 containerd[1622]: time="2025-03-17T17:39:43.811263422Z" level=info msg="Forcibly stopping sandbox \"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\"" Mar 17 17:39:43.812055 containerd[1622]: time="2025-03-17T17:39:43.811315985Z" level=info msg="TearDown network for sandbox \"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\" successfully" Mar 17 17:39:43.814864 containerd[1622]: time="2025-03-17T17:39:43.814824788Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:43.815019 containerd[1622]: time="2025-03-17T17:39:43.815001440Z" level=info msg="RemovePodSandbox \"fcd2d40735dd16179f13336c2ba24ebb96d46ed3b71992af3cd0582396410a88\" returns successfully" Mar 17 17:39:43.815570 containerd[1622]: time="2025-03-17T17:39:43.815539277Z" level=info msg="StopPodSandbox for \"aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0\"" Mar 17 17:39:43.815677 containerd[1622]: time="2025-03-17T17:39:43.815660285Z" level=info msg="TearDown network for sandbox \"aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0\" successfully" Mar 17 17:39:43.815746 containerd[1622]: time="2025-03-17T17:39:43.815677526Z" level=info msg="StopPodSandbox for \"aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0\" returns successfully" Mar 17 17:39:43.816642 containerd[1622]: time="2025-03-17T17:39:43.816135718Z" level=info msg="RemovePodSandbox for \"aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0\"" Mar 17 17:39:43.816642 containerd[1622]: time="2025-03-17T17:39:43.816160880Z" level=info msg="Forcibly stopping sandbox \"aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0\"" Mar 17 17:39:43.816642 containerd[1622]: time="2025-03-17T17:39:43.816223924Z" level=info msg="TearDown network for sandbox \"aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0\" successfully" Mar 17 17:39:43.819604 containerd[1622]: time="2025-03-17T17:39:43.819571115Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:43.820077 containerd[1622]: time="2025-03-17T17:39:43.819797971Z" level=info msg="RemovePodSandbox \"aa36dcf60f96644f628b648bde06175cc0c6afe3300bd1cf27ec6c002df114b0\" returns successfully" Mar 17 17:39:43.820249 containerd[1622]: time="2025-03-17T17:39:43.820214160Z" level=info msg="StopPodSandbox for \"3b1ec915bfa2cdd945b055eedb01d7a83209b69fc70878e7078e5b14d56672e1\"" Mar 17 17:39:43.820369 containerd[1622]: time="2025-03-17T17:39:43.820334248Z" level=info msg="TearDown network for sandbox \"3b1ec915bfa2cdd945b055eedb01d7a83209b69fc70878e7078e5b14d56672e1\" successfully" Mar 17 17:39:43.820410 containerd[1622]: time="2025-03-17T17:39:43.820370491Z" level=info msg="StopPodSandbox for \"3b1ec915bfa2cdd945b055eedb01d7a83209b69fc70878e7078e5b14d56672e1\" returns successfully" Mar 17 17:39:43.822086 containerd[1622]: time="2025-03-17T17:39:43.820707354Z" level=info msg="RemovePodSandbox for \"3b1ec915bfa2cdd945b055eedb01d7a83209b69fc70878e7078e5b14d56672e1\"" Mar 17 17:39:43.822086 containerd[1622]: time="2025-03-17T17:39:43.820730115Z" level=info msg="Forcibly stopping sandbox \"3b1ec915bfa2cdd945b055eedb01d7a83209b69fc70878e7078e5b14d56672e1\"" Mar 17 17:39:43.822086 containerd[1622]: time="2025-03-17T17:39:43.820804601Z" level=info msg="TearDown network for sandbox \"3b1ec915bfa2cdd945b055eedb01d7a83209b69fc70878e7078e5b14d56672e1\" successfully" Mar 17 17:39:43.824156 containerd[1622]: time="2025-03-17T17:39:43.824004701Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3b1ec915bfa2cdd945b055eedb01d7a83209b69fc70878e7078e5b14d56672e1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:43.824156 containerd[1622]: time="2025-03-17T17:39:43.824073746Z" level=info msg="RemovePodSandbox \"3b1ec915bfa2cdd945b055eedb01d7a83209b69fc70878e7078e5b14d56672e1\" returns successfully" Mar 17 17:39:43.824594 containerd[1622]: time="2025-03-17T17:39:43.824571861Z" level=info msg="StopPodSandbox for \"d61299a9a4422f1de1d53d3f24a9a00ed17c97cd8075070e69188e4dabf27991\"" Mar 17 17:39:43.824862 containerd[1622]: time="2025-03-17T17:39:43.824823438Z" level=info msg="TearDown network for sandbox \"d61299a9a4422f1de1d53d3f24a9a00ed17c97cd8075070e69188e4dabf27991\" successfully" Mar 17 17:39:43.824862 containerd[1622]: time="2025-03-17T17:39:43.824846240Z" level=info msg="StopPodSandbox for \"d61299a9a4422f1de1d53d3f24a9a00ed17c97cd8075070e69188e4dabf27991\" returns successfully" Mar 17 17:39:43.825217 containerd[1622]: time="2025-03-17T17:39:43.825193504Z" level=info msg="RemovePodSandbox for \"d61299a9a4422f1de1d53d3f24a9a00ed17c97cd8075070e69188e4dabf27991\"" Mar 17 17:39:43.825257 containerd[1622]: time="2025-03-17T17:39:43.825225466Z" level=info msg="Forcibly stopping sandbox \"d61299a9a4422f1de1d53d3f24a9a00ed17c97cd8075070e69188e4dabf27991\"" Mar 17 17:39:43.825408 containerd[1622]: time="2025-03-17T17:39:43.825391397Z" level=info msg="TearDown network for sandbox \"d61299a9a4422f1de1d53d3f24a9a00ed17c97cd8075070e69188e4dabf27991\" successfully" Mar 17 17:39:43.829076 containerd[1622]: time="2025-03-17T17:39:43.829020448Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d61299a9a4422f1de1d53d3f24a9a00ed17c97cd8075070e69188e4dabf27991\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:43.829154 containerd[1622]: time="2025-03-17T17:39:43.829088173Z" level=info msg="RemovePodSandbox \"d61299a9a4422f1de1d53d3f24a9a00ed17c97cd8075070e69188e4dabf27991\" returns successfully" Mar 17 17:39:43.829636 containerd[1622]: time="2025-03-17T17:39:43.829577166Z" level=info msg="StopPodSandbox for \"8172236dfb9940f74e3cb0c0665cc689fbe7c2a7363437f64cd66872aed4ff16\"" Mar 17 17:39:43.829703 containerd[1622]: time="2025-03-17T17:39:43.829672733Z" level=info msg="TearDown network for sandbox \"8172236dfb9940f74e3cb0c0665cc689fbe7c2a7363437f64cd66872aed4ff16\" successfully" Mar 17 17:39:43.829703 containerd[1622]: time="2025-03-17T17:39:43.829684174Z" level=info msg="StopPodSandbox for \"8172236dfb9940f74e3cb0c0665cc689fbe7c2a7363437f64cd66872aed4ff16\" returns successfully" Mar 17 17:39:43.830451 containerd[1622]: time="2025-03-17T17:39:43.830063360Z" level=info msg="RemovePodSandbox for \"8172236dfb9940f74e3cb0c0665cc689fbe7c2a7363437f64cd66872aed4ff16\"" Mar 17 17:39:43.830451 containerd[1622]: time="2025-03-17T17:39:43.830091482Z" level=info msg="Forcibly stopping sandbox \"8172236dfb9940f74e3cb0c0665cc689fbe7c2a7363437f64cd66872aed4ff16\"" Mar 17 17:39:43.830451 containerd[1622]: time="2025-03-17T17:39:43.830165407Z" level=info msg="TearDown network for sandbox \"8172236dfb9940f74e3cb0c0665cc689fbe7c2a7363437f64cd66872aed4ff16\" successfully" Mar 17 17:39:43.834471 containerd[1622]: time="2025-03-17T17:39:43.834141521Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8172236dfb9940f74e3cb0c0665cc689fbe7c2a7363437f64cd66872aed4ff16\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:43.834471 containerd[1622]: time="2025-03-17T17:39:43.834213486Z" level=info msg="RemovePodSandbox \"8172236dfb9940f74e3cb0c0665cc689fbe7c2a7363437f64cd66872aed4ff16\" returns successfully" Mar 17 17:39:43.834881 containerd[1622]: time="2025-03-17T17:39:43.834820768Z" level=info msg="StopPodSandbox for \"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\"" Mar 17 17:39:43.835044 containerd[1622]: time="2025-03-17T17:39:43.835010701Z" level=info msg="TearDown network for sandbox \"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\" successfully" Mar 17 17:39:43.835074 containerd[1622]: time="2025-03-17T17:39:43.835040744Z" level=info msg="StopPodSandbox for \"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\" returns successfully" Mar 17 17:39:43.835925 containerd[1622]: time="2025-03-17T17:39:43.835888242Z" level=info msg="RemovePodSandbox for \"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\"" Mar 17 17:39:43.836001 containerd[1622]: time="2025-03-17T17:39:43.835944006Z" level=info msg="Forcibly stopping sandbox \"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\"" Mar 17 17:39:43.836110 containerd[1622]: time="2025-03-17T17:39:43.836073175Z" level=info msg="TearDown network for sandbox \"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\" successfully" Mar 17 17:39:43.840776 containerd[1622]: time="2025-03-17T17:39:43.840688134Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:43.840776 containerd[1622]: time="2025-03-17T17:39:43.840769979Z" level=info msg="RemovePodSandbox \"2d1e0d887ab4a69fe6b3607e546b70008e8952414168fd596fe9a76ef164c935\" returns successfully" Mar 17 17:39:43.841453 containerd[1622]: time="2025-03-17T17:39:43.841315857Z" level=info msg="StopPodSandbox for \"69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28\"" Mar 17 17:39:43.841665 containerd[1622]: time="2025-03-17T17:39:43.841461467Z" level=info msg="TearDown network for sandbox \"69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28\" successfully" Mar 17 17:39:43.841665 containerd[1622]: time="2025-03-17T17:39:43.841474188Z" level=info msg="StopPodSandbox for \"69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28\" returns successfully" Mar 17 17:39:43.842393 containerd[1622]: time="2025-03-17T17:39:43.842202958Z" level=info msg="RemovePodSandbox for \"69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28\"" Mar 17 17:39:43.842393 containerd[1622]: time="2025-03-17T17:39:43.842239201Z" level=info msg="Forcibly stopping sandbox \"69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28\"" Mar 17 17:39:43.842393 containerd[1622]: time="2025-03-17T17:39:43.842330527Z" level=info msg="TearDown network for sandbox \"69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28\" successfully" Mar 17 17:39:43.846386 containerd[1622]: time="2025-03-17T17:39:43.846314082Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:43.846478 containerd[1622]: time="2025-03-17T17:39:43.846401168Z" level=info msg="RemovePodSandbox \"69b2e72e14ef436d7b10551f5a65957528b890ac2018743cffca26c2923f5f28\" returns successfully" Mar 17 17:39:43.846977 containerd[1622]: time="2025-03-17T17:39:43.846952246Z" level=info msg="StopPodSandbox for \"9e4cd7d542c36abb244b44b959f5e22e654f1839f1797e7e311442d2d3cd4f58\"" Mar 17 17:39:43.847060 containerd[1622]: time="2025-03-17T17:39:43.847046053Z" level=info msg="TearDown network for sandbox \"9e4cd7d542c36abb244b44b959f5e22e654f1839f1797e7e311442d2d3cd4f58\" successfully" Mar 17 17:39:43.847094 containerd[1622]: time="2025-03-17T17:39:43.847058053Z" level=info msg="StopPodSandbox for \"9e4cd7d542c36abb244b44b959f5e22e654f1839f1797e7e311442d2d3cd4f58\" returns successfully" Mar 17 17:39:43.847820 containerd[1622]: time="2025-03-17T17:39:43.847428279Z" level=info msg="RemovePodSandbox for \"9e4cd7d542c36abb244b44b959f5e22e654f1839f1797e7e311442d2d3cd4f58\"" Mar 17 17:39:43.847820 containerd[1622]: time="2025-03-17T17:39:43.847453321Z" level=info msg="Forcibly stopping sandbox \"9e4cd7d542c36abb244b44b959f5e22e654f1839f1797e7e311442d2d3cd4f58\"" Mar 17 17:39:43.847820 containerd[1622]: time="2025-03-17T17:39:43.847522765Z" level=info msg="TearDown network for sandbox \"9e4cd7d542c36abb244b44b959f5e22e654f1839f1797e7e311442d2d3cd4f58\" successfully" Mar 17 17:39:43.850618 containerd[1622]: time="2025-03-17T17:39:43.850581617Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9e4cd7d542c36abb244b44b959f5e22e654f1839f1797e7e311442d2d3cd4f58\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:43.850704 containerd[1622]: time="2025-03-17T17:39:43.850644701Z" level=info msg="RemovePodSandbox \"9e4cd7d542c36abb244b44b959f5e22e654f1839f1797e7e311442d2d3cd4f58\" returns successfully" Mar 17 17:39:43.851165 containerd[1622]: time="2025-03-17T17:39:43.850995005Z" level=info msg="StopPodSandbox for \"66cb341207d2868e40a116f95f505345959475ef8df6bb6f65820c3169eb4270\"" Mar 17 17:39:43.851165 containerd[1622]: time="2025-03-17T17:39:43.851092372Z" level=info msg="TearDown network for sandbox \"66cb341207d2868e40a116f95f505345959475ef8df6bb6f65820c3169eb4270\" successfully" Mar 17 17:39:43.851165 containerd[1622]: time="2025-03-17T17:39:43.851102173Z" level=info msg="StopPodSandbox for \"66cb341207d2868e40a116f95f505345959475ef8df6bb6f65820c3169eb4270\" returns successfully" Mar 17 17:39:43.851584 containerd[1622]: time="2025-03-17T17:39:43.851506041Z" level=info msg="RemovePodSandbox for \"66cb341207d2868e40a116f95f505345959475ef8df6bb6f65820c3169eb4270\"" Mar 17 17:39:43.851584 containerd[1622]: time="2025-03-17T17:39:43.851538883Z" level=info msg="Forcibly stopping sandbox \"66cb341207d2868e40a116f95f505345959475ef8df6bb6f65820c3169eb4270\"" Mar 17 17:39:43.851730 containerd[1622]: time="2025-03-17T17:39:43.851602247Z" level=info msg="TearDown network for sandbox \"66cb341207d2868e40a116f95f505345959475ef8df6bb6f65820c3169eb4270\" successfully" Mar 17 17:39:43.854699 containerd[1622]: time="2025-03-17T17:39:43.854625016Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"66cb341207d2868e40a116f95f505345959475ef8df6bb6f65820c3169eb4270\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:43.854699 containerd[1622]: time="2025-03-17T17:39:43.854693541Z" level=info msg="RemovePodSandbox \"66cb341207d2868e40a116f95f505345959475ef8df6bb6f65820c3169eb4270\" returns successfully" Mar 17 17:39:43.855769 containerd[1622]: time="2025-03-17T17:39:43.855377308Z" level=info msg="StopPodSandbox for \"93941444b8a53cdf3a5d63ff19ba9ec015c857c58144d0e6242ec8073955a726\"" Mar 17 17:39:43.855769 containerd[1622]: time="2025-03-17T17:39:43.855486715Z" level=info msg="TearDown network for sandbox \"93941444b8a53cdf3a5d63ff19ba9ec015c857c58144d0e6242ec8073955a726\" successfully" Mar 17 17:39:43.855769 containerd[1622]: time="2025-03-17T17:39:43.855499956Z" level=info msg="StopPodSandbox for \"93941444b8a53cdf3a5d63ff19ba9ec015c857c58144d0e6242ec8073955a726\" returns successfully" Mar 17 17:39:43.856488 containerd[1622]: time="2025-03-17T17:39:43.856287771Z" level=info msg="RemovePodSandbox for \"93941444b8a53cdf3a5d63ff19ba9ec015c857c58144d0e6242ec8073955a726\"" Mar 17 17:39:43.856488 containerd[1622]: time="2025-03-17T17:39:43.856319773Z" level=info msg="Forcibly stopping sandbox \"93941444b8a53cdf3a5d63ff19ba9ec015c857c58144d0e6242ec8073955a726\"" Mar 17 17:39:43.856488 containerd[1622]: time="2025-03-17T17:39:43.856435461Z" level=info msg="TearDown network for sandbox \"93941444b8a53cdf3a5d63ff19ba9ec015c857c58144d0e6242ec8073955a726\" successfully" Mar 17 17:39:43.860501 containerd[1622]: time="2025-03-17T17:39:43.860467379Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"93941444b8a53cdf3a5d63ff19ba9ec015c857c58144d0e6242ec8073955a726\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:43.860663 containerd[1622]: time="2025-03-17T17:39:43.860645272Z" level=info msg="RemovePodSandbox \"93941444b8a53cdf3a5d63ff19ba9ec015c857c58144d0e6242ec8073955a726\" returns successfully" Mar 17 17:39:43.861379 containerd[1622]: time="2025-03-17T17:39:43.861286876Z" level=info msg="StopPodSandbox for \"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\"" Mar 17 17:39:43.861623 containerd[1622]: time="2025-03-17T17:39:43.861593457Z" level=info msg="TearDown network for sandbox \"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\" successfully" Mar 17 17:39:43.861728 containerd[1622]: time="2025-03-17T17:39:43.861711225Z" level=info msg="StopPodSandbox for \"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\" returns successfully" Mar 17 17:39:43.862151 containerd[1622]: time="2025-03-17T17:39:43.862130254Z" level=info msg="RemovePodSandbox for \"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\"" Mar 17 17:39:43.862242 containerd[1622]: time="2025-03-17T17:39:43.862229421Z" level=info msg="Forcibly stopping sandbox \"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\"" Mar 17 17:39:43.862409 containerd[1622]: time="2025-03-17T17:39:43.862394632Z" level=info msg="TearDown network for sandbox \"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\" successfully" Mar 17 17:39:43.865935 containerd[1622]: time="2025-03-17T17:39:43.865743304Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:43.865935 containerd[1622]: time="2025-03-17T17:39:43.865832550Z" level=info msg="RemovePodSandbox \"1a3c1e5ece434de3061eddf41e62c5b2e2770d648a6cfd497a2bf7ee1b92591e\" returns successfully" Mar 17 17:39:43.866522 containerd[1622]: time="2025-03-17T17:39:43.866496916Z" level=info msg="StopPodSandbox for \"06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70\"" Mar 17 17:39:43.866611 containerd[1622]: time="2025-03-17T17:39:43.866594442Z" level=info msg="TearDown network for sandbox \"06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70\" successfully" Mar 17 17:39:43.866648 containerd[1622]: time="2025-03-17T17:39:43.866610444Z" level=info msg="StopPodSandbox for \"06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70\" returns successfully" Mar 17 17:39:43.867366 containerd[1622]: time="2025-03-17T17:39:43.867030993Z" level=info msg="RemovePodSandbox for \"06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70\"" Mar 17 17:39:43.867366 containerd[1622]: time="2025-03-17T17:39:43.867057274Z" level=info msg="Forcibly stopping sandbox \"06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70\"" Mar 17 17:39:43.867366 containerd[1622]: time="2025-03-17T17:39:43.867128959Z" level=info msg="TearDown network for sandbox \"06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70\" successfully" Mar 17 17:39:43.870161 containerd[1622]: time="2025-03-17T17:39:43.870131807Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:43.870364 containerd[1622]: time="2025-03-17T17:39:43.870270536Z" level=info msg="RemovePodSandbox \"06318a3ee0b80ee774498be76d5d926c28096bbd856ef4f8150bceffe56b7a70\" returns successfully" Mar 17 17:39:43.871004 containerd[1622]: time="2025-03-17T17:39:43.870675884Z" level=info msg="StopPodSandbox for \"7db76fc51ad3a900e10a0528dfa7316a504921fabbd370153e59fad0d0d2243b\"" Mar 17 17:39:43.871004 containerd[1622]: time="2025-03-17T17:39:43.870853817Z" level=info msg="TearDown network for sandbox \"7db76fc51ad3a900e10a0528dfa7316a504921fabbd370153e59fad0d0d2243b\" successfully" Mar 17 17:39:43.871004 containerd[1622]: time="2025-03-17T17:39:43.870867738Z" level=info msg="StopPodSandbox for \"7db76fc51ad3a900e10a0528dfa7316a504921fabbd370153e59fad0d0d2243b\" returns successfully" Mar 17 17:39:43.871248 containerd[1622]: time="2025-03-17T17:39:43.871219042Z" level=info msg="RemovePodSandbox for \"7db76fc51ad3a900e10a0528dfa7316a504921fabbd370153e59fad0d0d2243b\"" Mar 17 17:39:43.871295 containerd[1622]: time="2025-03-17T17:39:43.871259245Z" level=info msg="Forcibly stopping sandbox \"7db76fc51ad3a900e10a0528dfa7316a504921fabbd370153e59fad0d0d2243b\"" Mar 17 17:39:43.871394 containerd[1622]: time="2025-03-17T17:39:43.871374693Z" level=info msg="TearDown network for sandbox \"7db76fc51ad3a900e10a0528dfa7316a504921fabbd370153e59fad0d0d2243b\" successfully" Mar 17 17:39:43.875174 containerd[1622]: time="2025-03-17T17:39:43.875139793Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7db76fc51ad3a900e10a0528dfa7316a504921fabbd370153e59fad0d0d2243b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:43.875245 containerd[1622]: time="2025-03-17T17:39:43.875204277Z" level=info msg="RemovePodSandbox \"7db76fc51ad3a900e10a0528dfa7316a504921fabbd370153e59fad0d0d2243b\" returns successfully" Mar 17 17:39:43.875740 containerd[1622]: time="2025-03-17T17:39:43.875564622Z" level=info msg="StopPodSandbox for \"9b3a93177b6cba2c68547b57e74bd90246e4634161f97b128c565c552c4bc14c\"" Mar 17 17:39:43.875740 containerd[1622]: time="2025-03-17T17:39:43.875642827Z" level=info msg="TearDown network for sandbox \"9b3a93177b6cba2c68547b57e74bd90246e4634161f97b128c565c552c4bc14c\" successfully" Mar 17 17:39:43.875740 containerd[1622]: time="2025-03-17T17:39:43.875652068Z" level=info msg="StopPodSandbox for \"9b3a93177b6cba2c68547b57e74bd90246e4634161f97b128c565c552c4bc14c\" returns successfully" Mar 17 17:39:43.876540 containerd[1622]: time="2025-03-17T17:39:43.876398919Z" level=info msg="RemovePodSandbox for \"9b3a93177b6cba2c68547b57e74bd90246e4634161f97b128c565c552c4bc14c\"" Mar 17 17:39:43.876540 containerd[1622]: time="2025-03-17T17:39:43.876426721Z" level=info msg="Forcibly stopping sandbox \"9b3a93177b6cba2c68547b57e74bd90246e4634161f97b128c565c552c4bc14c\"" Mar 17 17:39:43.876540 containerd[1622]: time="2025-03-17T17:39:43.876501887Z" level=info msg="TearDown network for sandbox \"9b3a93177b6cba2c68547b57e74bd90246e4634161f97b128c565c552c4bc14c\" successfully" Mar 17 17:39:43.879727 containerd[1622]: time="2025-03-17T17:39:43.879589700Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b3a93177b6cba2c68547b57e74bd90246e4634161f97b128c565c552c4bc14c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:43.879727 containerd[1622]: time="2025-03-17T17:39:43.879649984Z" level=info msg="RemovePodSandbox \"9b3a93177b6cba2c68547b57e74bd90246e4634161f97b128c565c552c4bc14c\" returns successfully" Mar 17 17:39:43.880364 containerd[1622]: time="2025-03-17T17:39:43.880183461Z" level=info msg="StopPodSandbox for \"6e0433b6ac3463acd6e86d047a92827d52d3752a4b4e568f94103cdde6bb9f19\"" Mar 17 17:39:43.880364 containerd[1622]: time="2025-03-17T17:39:43.880268987Z" level=info msg="TearDown network for sandbox \"6e0433b6ac3463acd6e86d047a92827d52d3752a4b4e568f94103cdde6bb9f19\" successfully" Mar 17 17:39:43.880364 containerd[1622]: time="2025-03-17T17:39:43.880278267Z" level=info msg="StopPodSandbox for \"6e0433b6ac3463acd6e86d047a92827d52d3752a4b4e568f94103cdde6bb9f19\" returns successfully" Mar 17 17:39:43.880628 containerd[1622]: time="2025-03-17T17:39:43.880597289Z" level=info msg="RemovePodSandbox for \"6e0433b6ac3463acd6e86d047a92827d52d3752a4b4e568f94103cdde6bb9f19\"" Mar 17 17:39:43.880671 containerd[1622]: time="2025-03-17T17:39:43.880633572Z" level=info msg="Forcibly stopping sandbox \"6e0433b6ac3463acd6e86d047a92827d52d3752a4b4e568f94103cdde6bb9f19\"" Mar 17 17:39:43.880744 containerd[1622]: time="2025-03-17T17:39:43.880713937Z" level=info msg="TearDown network for sandbox \"6e0433b6ac3463acd6e86d047a92827d52d3752a4b4e568f94103cdde6bb9f19\" successfully" Mar 17 17:39:43.884402 containerd[1622]: time="2025-03-17T17:39:43.884352189Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e0433b6ac3463acd6e86d047a92827d52d3752a4b4e568f94103cdde6bb9f19\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:43.884468 containerd[1622]: time="2025-03-17T17:39:43.884422994Z" level=info msg="RemovePodSandbox \"6e0433b6ac3463acd6e86d047a92827d52d3752a4b4e568f94103cdde6bb9f19\" returns successfully" Mar 17 17:40:14.302188 kubelet[3064]: I0317 17:40:14.301667 3064 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:40:21.558226 systemd[1]: run-containerd-runc-k8s.io-b6b4d4870ca888b538582edde1a1adbce7084bf5537d09405df34c7aa67ae0a1-runc.AJNsLF.mount: Deactivated successfully. Mar 17 17:43:34.958656 systemd[1]: Started sshd@7-138.201.116.42:22-139.178.89.65:46558.service - OpenSSH per-connection server daemon (139.178.89.65:46558). Mar 17 17:43:35.966015 sshd[6495]: Accepted publickey for core from 139.178.89.65 port 46558 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug Mar 17 17:43:35.968981 sshd-session[6495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:43:35.977269 systemd-logind[1609]: New session 8 of user core. Mar 17 17:43:35.983888 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 17 17:43:36.750706 sshd[6498]: Connection closed by 139.178.89.65 port 46558 Mar 17 17:43:36.750321 sshd-session[6495]: pam_unix(sshd:session): session closed for user core Mar 17 17:43:36.754541 systemd[1]: sshd@7-138.201.116.42:22-139.178.89.65:46558.service: Deactivated successfully. Mar 17 17:43:36.760027 systemd-logind[1609]: Session 8 logged out. Waiting for processes to exit. Mar 17 17:43:36.760690 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 17:43:36.761755 systemd-logind[1609]: Removed session 8. Mar 17 17:43:41.917764 systemd[1]: Started sshd@8-138.201.116.42:22-139.178.89.65:40216.service - OpenSSH per-connection server daemon (139.178.89.65:40216). 
Mar 17 17:43:42.900933 sshd[6511]: Accepted publickey for core from 139.178.89.65 port 40216 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:43:42.903098 sshd-session[6511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:43:42.908096 systemd-logind[1609]: New session 9 of user core.
Mar 17 17:43:42.913710 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 17 17:43:43.658022 sshd[6514]: Connection closed by 139.178.89.65 port 40216
Mar 17 17:43:43.659207 sshd-session[6511]: pam_unix(sshd:session): session closed for user core
Mar 17 17:43:43.664497 systemd[1]: sshd@8-138.201.116.42:22-139.178.89.65:40216.service: Deactivated successfully.
Mar 17 17:43:43.668929 systemd-logind[1609]: Session 9 logged out. Waiting for processes to exit.
Mar 17 17:43:43.670190 systemd[1]: session-9.scope: Deactivated successfully.
Mar 17 17:43:43.671510 systemd-logind[1609]: Removed session 9.
Mar 17 17:43:48.830795 systemd[1]: Started sshd@9-138.201.116.42:22-139.178.89.65:40222.service - OpenSSH per-connection server daemon (139.178.89.65:40222).
Mar 17 17:43:49.831948 sshd[6549]: Accepted publickey for core from 139.178.89.65 port 40222 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:43:49.832733 sshd-session[6549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:43:49.837949 systemd-logind[1609]: New session 10 of user core.
Mar 17 17:43:49.846183 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 17 17:43:50.602611 sshd[6552]: Connection closed by 139.178.89.65 port 40222
Mar 17 17:43:50.603045 sshd-session[6549]: pam_unix(sshd:session): session closed for user core
Mar 17 17:43:50.610292 systemd-logind[1609]: Session 10 logged out. Waiting for processes to exit.
Mar 17 17:43:50.611713 systemd[1]: sshd@9-138.201.116.42:22-139.178.89.65:40222.service: Deactivated successfully.
Mar 17 17:43:50.617678 systemd[1]: session-10.scope: Deactivated successfully.
Mar 17 17:43:50.620518 systemd-logind[1609]: Removed session 10.
Mar 17 17:43:50.771773 systemd[1]: Started sshd@10-138.201.116.42:22-139.178.89.65:40236.service - OpenSSH per-connection server daemon (139.178.89.65:40236).
Mar 17 17:43:51.757085 sshd[6564]: Accepted publickey for core from 139.178.89.65 port 40236 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:43:51.760003 sshd-session[6564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:43:51.765377 systemd-logind[1609]: New session 11 of user core.
Mar 17 17:43:51.770884 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 17 17:43:52.552318 sshd[6588]: Connection closed by 139.178.89.65 port 40236
Mar 17 17:43:52.550692 sshd-session[6564]: pam_unix(sshd:session): session closed for user core
Mar 17 17:43:52.555183 systemd[1]: sshd@10-138.201.116.42:22-139.178.89.65:40236.service: Deactivated successfully.
Mar 17 17:43:52.561228 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 17:43:52.565625 systemd-logind[1609]: Session 11 logged out. Waiting for processes to exit.
Mar 17 17:43:52.567434 systemd-logind[1609]: Removed session 11.
Mar 17 17:43:52.720689 systemd[1]: Started sshd@11-138.201.116.42:22-139.178.89.65:59580.service - OpenSSH per-connection server daemon (139.178.89.65:59580).
Mar 17 17:43:53.706997 sshd[6597]: Accepted publickey for core from 139.178.89.65 port 59580 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:43:53.709764 sshd-session[6597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:43:53.715572 systemd-logind[1609]: New session 12 of user core.
Mar 17 17:43:53.723021 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 17 17:43:54.463502 sshd[6600]: Connection closed by 139.178.89.65 port 59580
Mar 17 17:43:54.464334 sshd-session[6597]: pam_unix(sshd:session): session closed for user core
Mar 17 17:43:54.474100 systemd[1]: sshd@11-138.201.116.42:22-139.178.89.65:59580.service: Deactivated successfully.
Mar 17 17:43:54.475404 systemd-logind[1609]: Session 12 logged out. Waiting for processes to exit.
Mar 17 17:43:54.477661 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 17:43:54.478809 systemd-logind[1609]: Removed session 12.
Mar 17 17:43:59.628748 systemd[1]: Started sshd@12-138.201.116.42:22-139.178.89.65:59592.service - OpenSSH per-connection server daemon (139.178.89.65:59592).
Mar 17 17:44:00.629522 sshd[6617]: Accepted publickey for core from 139.178.89.65 port 59592 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:44:00.631859 sshd-session[6617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:44:00.638060 systemd-logind[1609]: New session 13 of user core.
Mar 17 17:44:00.645719 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 17 17:44:01.396757 sshd[6620]: Connection closed by 139.178.89.65 port 59592
Mar 17 17:44:01.397735 sshd-session[6617]: pam_unix(sshd:session): session closed for user core
Mar 17 17:44:01.403890 systemd-logind[1609]: Session 13 logged out. Waiting for processes to exit.
Mar 17 17:44:01.404056 systemd[1]: sshd@12-138.201.116.42:22-139.178.89.65:59592.service: Deactivated successfully.
Mar 17 17:44:01.410050 systemd[1]: session-13.scope: Deactivated successfully.
Mar 17 17:44:01.412827 systemd-logind[1609]: Removed session 13.
Mar 17 17:44:01.564826 systemd[1]: Started sshd@13-138.201.116.42:22-139.178.89.65:48200.service - OpenSSH per-connection server daemon (139.178.89.65:48200).
Mar 17 17:44:02.558315 sshd[6631]: Accepted publickey for core from 139.178.89.65 port 48200 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:44:02.560489 sshd-session[6631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:44:02.569931 systemd-logind[1609]: New session 14 of user core.
Mar 17 17:44:02.574945 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 17 17:44:03.449030 sshd[6634]: Connection closed by 139.178.89.65 port 48200
Mar 17 17:44:03.449952 sshd-session[6631]: pam_unix(sshd:session): session closed for user core
Mar 17 17:44:03.454648 systemd[1]: sshd@13-138.201.116.42:22-139.178.89.65:48200.service: Deactivated successfully.
Mar 17 17:44:03.460071 systemd-logind[1609]: Session 14 logged out. Waiting for processes to exit.
Mar 17 17:44:03.460977 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 17:44:03.462822 systemd-logind[1609]: Removed session 14.
Mar 17 17:44:03.621951 systemd[1]: Started sshd@14-138.201.116.42:22-139.178.89.65:48208.service - OpenSSH per-connection server daemon (139.178.89.65:48208).
Mar 17 17:44:04.617679 sshd[6643]: Accepted publickey for core from 139.178.89.65 port 48208 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:44:04.621217 sshd-session[6643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:44:04.634212 systemd-logind[1609]: New session 15 of user core.
Mar 17 17:44:04.638704 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 17 17:44:07.256724 sshd[6646]: Connection closed by 139.178.89.65 port 48208
Mar 17 17:44:07.258207 sshd-session[6643]: pam_unix(sshd:session): session closed for user core
Mar 17 17:44:07.262562 systemd[1]: sshd@14-138.201.116.42:22-139.178.89.65:48208.service: Deactivated successfully.
Mar 17 17:44:07.267821 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 17:44:07.268913 systemd-logind[1609]: Session 15 logged out. Waiting for processes to exit.
Mar 17 17:44:07.270860 systemd-logind[1609]: Removed session 15.
Mar 17 17:44:07.424858 systemd[1]: Started sshd@15-138.201.116.42:22-139.178.89.65:48218.service - OpenSSH per-connection server daemon (139.178.89.65:48218).
Mar 17 17:44:08.416780 sshd[6662]: Accepted publickey for core from 139.178.89.65 port 48218 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:44:08.418999 sshd-session[6662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:44:08.424305 systemd-logind[1609]: New session 16 of user core.
Mar 17 17:44:08.429944 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 17 17:44:09.303694 sshd[6665]: Connection closed by 139.178.89.65 port 48218
Mar 17 17:44:09.305100 sshd-session[6662]: pam_unix(sshd:session): session closed for user core
Mar 17 17:44:09.312625 systemd-logind[1609]: Session 16 logged out. Waiting for processes to exit.
Mar 17 17:44:09.313625 systemd[1]: sshd@15-138.201.116.42:22-139.178.89.65:48218.service: Deactivated successfully.
Mar 17 17:44:09.316560 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 17:44:09.319747 systemd-logind[1609]: Removed session 16.
Mar 17 17:44:09.470730 systemd[1]: Started sshd@16-138.201.116.42:22-139.178.89.65:48226.service - OpenSSH per-connection server daemon (139.178.89.65:48226).
Mar 17 17:44:10.454996 sshd[6673]: Accepted publickey for core from 139.178.89.65 port 48226 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:44:10.456885 sshd-session[6673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:44:10.462305 systemd-logind[1609]: New session 17 of user core.
Mar 17 17:44:10.465755 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 17 17:44:11.202967 sshd[6676]: Connection closed by 139.178.89.65 port 48226
Mar 17 17:44:11.202782 sshd-session[6673]: pam_unix(sshd:session): session closed for user core
Mar 17 17:44:11.208876 systemd-logind[1609]: Session 17 logged out. Waiting for processes to exit.
Mar 17 17:44:11.209866 systemd[1]: sshd@16-138.201.116.42:22-139.178.89.65:48226.service: Deactivated successfully.
Mar 17 17:44:11.214898 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 17:44:11.216610 systemd-logind[1609]: Removed session 17.
Mar 17 17:44:16.372667 systemd[1]: Started sshd@17-138.201.116.42:22-139.178.89.65:55126.service - OpenSSH per-connection server daemon (139.178.89.65:55126).
Mar 17 17:44:17.365459 sshd[6727]: Accepted publickey for core from 139.178.89.65 port 55126 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:44:17.368397 sshd-session[6727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:44:17.374692 systemd-logind[1609]: New session 18 of user core.
Mar 17 17:44:17.379782 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 17 17:44:18.119063 sshd[6730]: Connection closed by 139.178.89.65 port 55126
Mar 17 17:44:18.118884 sshd-session[6727]: pam_unix(sshd:session): session closed for user core
Mar 17 17:44:18.125034 systemd[1]: sshd@17-138.201.116.42:22-139.178.89.65:55126.service: Deactivated successfully.
Mar 17 17:44:18.128876 systemd-logind[1609]: Session 18 logged out. Waiting for processes to exit.
Mar 17 17:44:18.129573 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 17:44:18.130749 systemd-logind[1609]: Removed session 18.
Mar 17 17:44:23.285699 systemd[1]: Started sshd@18-138.201.116.42:22-139.178.89.65:59542.service - OpenSSH per-connection server daemon (139.178.89.65:59542).
Mar 17 17:44:24.270910 sshd[6768]: Accepted publickey for core from 139.178.89.65 port 59542 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:44:24.272922 sshd-session[6768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:44:24.277952 systemd-logind[1609]: New session 19 of user core.
Mar 17 17:44:24.282624 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 17 17:44:25.019868 sshd[6783]: Connection closed by 139.178.89.65 port 59542
Mar 17 17:44:25.021535 sshd-session[6768]: pam_unix(sshd:session): session closed for user core
Mar 17 17:44:25.026095 systemd[1]: sshd@18-138.201.116.42:22-139.178.89.65:59542.service: Deactivated successfully.
Mar 17 17:44:25.031286 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 17:44:25.032843 systemd-logind[1609]: Session 19 logged out. Waiting for processes to exit.
Mar 17 17:44:25.033853 systemd-logind[1609]: Removed session 19.